Hey Doug. Hey Mark. Give me just a sec, trying to organize my screen here. I don't know how I got all messed up. It was only quiet because somebody spoke, I understand. Yeah, as I was clicking on the Zoom link, I had this great fear that it was gonna ask me for a password. I forgot to check yesterday to make sure it was safe. Hello, David. How are you? Mark is here, I heard him. Clemens, and let's see. Mike, you there? Yep. All right. All right, Tommy. Hello, Vladimir. I think that's Vladimir, right? Scott told me he had a conflict today. Okay, cool. Yeah, there's a community meeting, so there's a lot of people. Oh, yeah, I forgot about that. I was gonna join too. I had another meeting so I couldn't make it, but yeah. All right. Vladimir, are you there? What about Manuel? I guess he has no microphone. Wait. So do we need an SDK call today? Mark or Clemens? Or anybody else, I guess? No. No. Okay. I'd rather eat lunch. I don't think so. Yeah, okay. Much rather eat lunch. I see activity going on in the various SDKs. Hi, Doug. How are you? Yep, my class, yeah. But I'm not hearing anybody scream for a topic. I guess we check and see if Slinky has anything. He's probably on the CNCF community meeting as well. Good point. Was anybody on that at all? I'm curious to know if it was interesting. I wasn't on it. Hey, Eric. Good morning. Hey, Ray. Hello. Good morning. Good afternoon. Yeah. Hello, usually covers it. Hey, Francesco. Hello. Hey, Lance. I'm going to give one more minute. Hi, Ryan. Hello. Manuel, are you there now? Manuel Stein, are you there? Yes, I am. Hi. Oh, there you go. Excellent. And Vladimir, I think, at T-S-U-R-D-I-L-O-V-I-C. I think that is Vladimir, right? No, that's Tihomir. T-I-H-O-M-I-R. I'm sorry, you're very hard to hear. Oh, it's the guy that's on the agenda for the serverless workflow work, Surdilovic. Oh, Tihomir, I'm sorry. Yeah, sorry. Tihomir, sorry. I'll change my name. I just have the handle now. Yeah. Sometimes I can remember these things.
Other times I'm really, really bad with it. I apologize. Tihomir, yes. All right. It's three after one. I'll go ahead and get started. Oh, we've got low attendance today. So let's see if we can make this quick then. Yeah, I still haven't finished my action item. I still have to do the repo structure. Okay. Community time. Is there anything from the community that people would like to bring up? All right. Not hearing anything. Moving forward. We are still having discussions out in the TOC repo around what to do with serverless. Are we going to stay a working group under App Delivery or not? I did hear back from Liz, and there seemed to be some concern about not creating a SIG just for serverless. And I told her, basically, in the end we'll do whatever the TOC wants for the most part, but at the same time we don't want to get into this rat hole of having endless discussions about how to define serverless when, in the end, it technically doesn't matter. What really matters is the work that we produce, the projects and stuff like that. So we don't want to, as I said, rat hole on how to define serverless, or define it so narrowly that you're either going to have to redefine it later or continue to explain to people why our definition is so narrow. We basically just don't want it to turn into a process hell kind of thing. And she kind of understands. So we're still going back and forth on what to do here. I think most of the people seem to be okay with us just being a working group under SIG App Delivery; even though it's not the best fit, at least it gets us out of this process hell. So I'll let you guys know how it goes. But as of right now, that's the current status, just going in circles. Let's see, SDK. So this week we do have a call scheduled. However, I know Scott probably can't make it, and Mark and Clemens don't have any topics. Does anybody have any topics for the SDK call? Otherwise, we'll cancel this week. I added one or two topics. There we go. I'll be quicker. Okay.
So maybe we'll make it. I think Scott is going to join, I think. Okay. Well, we'll see. Okay. We'll at least have the call and see. Scott's not going to be able to make it. Well, let's see what the topics are when we get there. Hopefully this call itself will be quick anyway. So we'll see how it goes. Hey, Doug, did anyone have any comments or questions about this SIG update topic? Oh, I'm sorry. I forgot to ask. Yeah. Any questions? Does anybody on the call care whether we're just a working group under SIG App Delivery or we're a full-fledged SIG? Well, this must have an effect on new projects that we want to create, right? If we are a SIG, we can kind of do the SIG review ourselves for projects we are designing, and if we are just a working group, then SIG App Delivery would be responsible for reviewing any new projects that come out of us, right? That is true. We will not have absolute control of our own destiny. But to be honest, I don't think that's a huge issue to me. But Mark, did you want to say something? Oh, I was going to comment that I think I'd rather talk with SIG App Delivery about any new projects than having to go to the TOC and getting buy-in at the TOC level if we were a SIG Serverless. Yeah, I don't know. We don't create new projects on a daily basis anyway, right? So, okay. Anyway, moving forward. Tihomir, is there anything you want to mention relative to updates for the workflow stuff? Yeah, thanks. We did have a version 0.1 released. And yesterday we did a presentation to SIG App Delivery for sandbox approval. So we're in a review state right now for that. Okay. Any questions? Well, I mean, regarding that, I did get a feeling, a little bit, being new here. And so your guys' help would be much appreciated. I don't know much about SIG App Delivery, but to me they seem more focused on actual applications running on Kubernetes rather than specification type of work.
So my question is, if we do end up falling under SIG App Delivery, sorry, what about the specification work, and how will it be embraced within that group? I'm not really fully certain. Anybody have any comments on that? Okay. From my point of view, I don't think it impacts anything, right? Once you're a project, regardless of what SIG you're under, I don't think the SIG really has much influence over what the project itself does. So, to me, this entire SIG discussion is, from my perspective, more of a bureaucracy pain in the butt than anything else, to be honest. So I personally wouldn't worry about it. Okay, sounds good. But that's me. I cannot stand process for no purpose, and that's what this feels like to me. Anybody else have any different point of view? Okay. Not hearing any. All right. Any questions about the workflow from the rest of the group? Okay. Moving forward then. I know, Mike, you said you had to leave early, so hopefully we can get to this first PR quickly. I think that one might actually be ready to go, or close to ready to go. And yours was just recently updated. So, Francesco, do you want to talk to this one? I know I made some comments last night, but honestly, all my comments seem relatively minor. But do you want to bring us up to speed on what this one's all about? Well, the thing that I still didn't understand, what the group feels about, is if we want to move this HTTP multipart together with batching into another spec, rather than putting this inside the HTTP protocol binding spec. I mean, in my opinion, it should live together with batching in another spec. Any comments, thoughts? You guys are awfully quiet. I mean, to me, I looked at this yesterday, and I think it looked good to me, just giving my voice there. To me, this aspect is specific to HTTP, and that's what this part of the spec is meant for. So I don't see why it should be separate.
Yeah, it seemed like a nice fit to me. Clemens, do you want to say something? Your hand's up, you came off mute. Do we need this? My initial thought is, do we need this? Because I just don't know how well, and I know multipart exists, but I just don't know how well that's supported in common HTTP frameworks, because it's a fairly complicated thing to tease apart. And I know that there's, I mean, email is using multipart, et cetera, but basically the HTTP frameworks that I'm aware of are having a hard time splitting apart multipart messages. Like, there's no good support for receiving a multipart message request and breaking that apart into several entities. Anybody have thoughts or comments on that? Francesco, just to force the discussion a little: what was the biggest driver for you wanting to add this? My biggest driver is to have some way to send multiple events in the same envelope without being forced first to have a finite length of the envelope, because doing it this way you can send a potentially infinite multipart envelope. And the second, and most important, was to avoid a full parsing of a big JSON. Like, that's my biggest concern with batching. Then my question is, what requirement is driving that? Because you could just as well say that you're using the HTTP framing and that you're using pipelining, which works really well with HTTP/2 and later, rather than using multipart. Basically you start sending multiple HTTP requests in a row, rather than forcing it all into a single HTTP request. The problem is that having them inside the same envelope can give the ability to the client, so to the receivers, to give a meaning to the various events. I mean, the user can decide if it can give some kind of semantic to the various events in the same envelope. The example, which is one of my drivers, is the function invocation with multiple events.
Wait, so the events are actually belonging together? And that's the point. The spec doesn't state that. You can or you cannot give a semantic meaning. So let's back out. What is the semantic meaning? So do you have a semantic that says, here's a set of related events? I was going to try to develop a project which actually sends multiple events in a single request, because a request was mapped to a single function invocation with multiple events. Are those events dependent on each other or are they independent? Yes, because it's a function invocation with multiple parameters. I mean, a function with just multiple parameters as input. I would argue that's not a batch. Or that is actually not a transmission that we ought to solve at this level, because you're sending one payload and that payload contains multiple events. And I think the point here is exactly this one. I mean, I don't want to put into the spec a definition of the semantics of why there are multiple events in the same envelope. I just want to say how to map them. But my point is, since you are doing a single transfer, effectively, because you're grouping those events, you're making a single message intentionally to transfer multiple entities inside of that message. The message concept is what we do with cloud events. So what you're doing here is, you're not sending multiple cloud events, but you're sending one cloud event that contains multiple sub-events, effectively, as payload. So I'm not sure. I'm actually convinced that this is not something that should be solved at the transport level, but that's something that should be solved as a composition inside of your event. Because your scenario doesn't seem to be similar to, or an alternative to, batching. It is a way to express the body of the event differently. And the body of the event can be expressed as any arbitrary MIME type. And for that, you can already do multipart if you wanted to.
So if you create an event with the data content type MIME multipart, you can already do what you want to do, but we don't have to go and manipulate the transfer mode for this. If you create an event and the event has MIME multipart as its data content type, you can go and put inside of that event anything that you like, including a list of events. Does that make any sense, Francesco? Well, how do I do a content type? I mean, let's say I want to transport a cloud event with a multipart content type, in binary mode on HTTP. What happens then if I want to serialize this? Do I need to break it apart? No, you don't. Because you are using binary content mode. In binary content mode, you label the content as MIME multipart. That is your content type. And then you take any arbitrary composition of MIME multipart and stuff that into the entity body. That is what that binary mode is for. Okay, so that means that in binary mode, then, I will have to send different parts. And in every part, I need to encode the event, because the point is every parameter of the function is actually an event. Yes, but that's a choice that you are making. And that's fine. And I think that's a legitimate use of cloud events. But yeah, if you want to do it like this, if you want to go and create an event, if you have 10 parameters and every parameter is an event, then using MIME multipart is a legitimate way of doing this. But the way you would do this is you would send one event which carries MIME multipart, which then in each part of the multipart transmission contains an event. But that's a composition you already have. What you are doing here is, if you only make this work for HTTP, then this construct would not work on any of the other transports that we have. The model that we have is that all the constructs that we have work with all the defined transports.
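The binary content mode described above can be sketched concretely. This is an illustrative sketch, not part of the discussion itself: the ce-* headers follow the CloudEvents HTTP binding's binary mode, while the boundary, event attributes, and part payloads are made-up values.

```python
# Sketch of HTTP binary content mode carrying a multipart payload, as
# discussed above. The event metadata rides in ce-* headers; the entity
# body is an arbitrary MIME type -- here multipart/mixed, whose parts are
# themselves JSON payloads. All concrete values are illustrative.

BOUNDARY = "param-boundary"  # hypothetical multipart boundary

def build_binary_mode_request(parts):
    """Return (headers, body) for a binary-mode CloudEvent whose data is
    a multipart/mixed composition of the given JSON part strings."""
    headers = {
        "ce-specversion": "1.0",
        "ce-type": "com.example.function.invoke",  # illustrative type
        "ce-source": "/example/caller",            # illustrative source
        "ce-id": "A234-1234-1234",
        # The data content type labels the entity body as multipart:
        "Content-Type": f"multipart/mixed; boundary={BOUNDARY}",
    }
    segments = []
    for part in parts:
        segments.append(
            f"--{BOUNDARY}\r\nContent-Type: application/json\r\n\r\n{part}")
    segments.append(f"--{BOUNDARY}--")  # closing delimiter
    body = "\r\n".join(segments)
    return headers, body

headers, body = build_binary_mode_request(['{"x": 1}', '{"y": 2}'])
```

Because only the Content-Type header changes, this composition rides unchanged over any transport binding, which is the portability point being made above.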
And if you really are, if your design calls for having multiple events sent in one group together using MIME multipart, you can do this today by using MIME multipart as the data content type on a cloud event. And that will then go and work with all the bindings that we have, from Kafka to MQTT to HTTP. And it doesn't special-case for HTTP, because I don't think we need to have a special case today. Well, but HTTP now has a way to send multiple events inside the same request with batching. I mean, there is already a special case now. Well, yes. So there is this. Now, actually, it's not, because the cloud events batching, the batching mode, is defined effectively in conjunction with JSON. So it's ultimately a function of the JSON encoding. So I kind of feel like we're not necessarily in agreement, but I also don't want to rat hole on this call too much. Would it make sense for you guys to talk offline, if not through voice, at least through the issue? I'll go and make a comment on this on the issue. Okay. Francesco, you okay with that? Because I do think this is an interesting conversation. I'd like to see how it plays out a little bit more before we try to form a formal decision. Okay. Okay. Okay, cool. Thank you guys. I appreciate it. Yeah, we'd love it if somebody else also participates in this question. Yeah, because I just remembered I did have one question that I forgot to open an issue about, or ask a question about, and I'll place that into the issue or PR as well. Okay. Thank you guys. Mike, do you want to update us on your PR, so you can leave in eight minutes? There were no new comments in the past week, so I resolved open discussions and pushed a small change that fixes some minor wording. Okay. I suspect, well, did anybody get a chance to look at it, since the bulk of the changes were made based upon the comments?
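For context on the batching mode referred to above: CloudEvents JSON batching is a single JSON array of complete event objects, carried with its own media type, which is why it is tied to the JSON encoding. A minimal sketch with illustrative event contents:

```python
import json

# The CloudEvents JSON batch format: one JSON array of complete event
# objects, transmitted with the application/cloudevents-batch+json media
# type. The whole array must be parsed before any single event can be
# handed out -- the "full parsing of a big JSON" concern raised above.
events = [
    {"specversion": "1.0", "type": "com.example.a", "source": "/demo",
     "id": "1", "data": {"x": 1}},
    {"specversion": "1.0", "type": "com.example.b", "source": "/demo",
     "id": "2", "data": {"y": 2}},
]

content_type = "application/cloudevents-batch+json"
batch_body = json.dumps(events)

# A receiver parses the entire array in one go:
decoded = json.loads(batch_body)
```

The event types, sources, and data here are made up for illustration; only the array shape and the batch media type come from the JSON format spec.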
The reason I'm asking is because I'm trying to figure out whether, I suspect, it's too soon to formally vote to approve this or not, just because I'm not sure people had a chance to review it. But I wanted to ask the question formally. Go ahead, Ryan. I did take a look at it. I think there were a few comments I made; some of them were more thinking out loud than things I actually have conviction in, just to see what people thought. There is one open discussion that I don't think needs to be solved specifically in this PR, which is whether there can be more than one producer of a given event type. But, yeah, I mean the changes that were made look good to me. Okay. Thank you. I mean, at this stage in its life cycle, since we're not really anywhere close to nailing things down, I personally like the idea of leaning more toward, is it more right than wrong? And if so, letting it in, and we can tweak it through PRs. But at the same time, our usual process is that if people want more time to review stuff before we approve, especially if changes were made relatively recently, then we give that time to people. So do people want more time to review, or do they want to let it in and then work through issues and PRs to tweak it? There were no major changes in the past week. Okay. Sorry, Doug, there's one other thing I wanted to point out as well, which is, I made a comment about this, but it seems like we're introducing new concepts in this spec and the other specs that might be interesting to the primary cloud events specs. So I'm curious as to how people think we navigate that, whether it's useful for them to be defined in these specs. Like, if we think that there's a new header that should be introduced, where does that get defined, where does that get documented? I don't have opinions, but I figured I would bring it up as a question.
Anyone want to comment on that? I think if the discovery specification were needing an attribute, then that's, I think, an extension in the first place. There were no new attributes that would need to be added to the cloud events spec proper. Okay. I would have been surprised if there were, but yeah, I think that's how we look at it. I mean, if there are concepts that are new, then we should certainly have them in the primer, because I think the primer is a living document that should be broader than the core cloud events spec. So I think the definitions that we have here, like subscription manager and discovery and all those things, I think the primer should be expanded and then point to the various different documents that we have. So that's something that we should certainly do. But in terms of, if an additional layered-on feature, like discovery or subscriptions or any of the other things we might still do, if that were to require additional attributes, I think those are by nature extensions. I don't think they really require that the core spec gets modified, unless that were something that everybody must implement. Klaus, your hand's up. Yes. So I think extending the primer is a good idea, because I'm honestly still a bit confused about how the relations are between event provider, producer and so on. I don't know if that holds up this PR or if we can just work on this over the coming weeks and months, but yeah, explaining this new model a bit more in the primer would be a good idea. Okay. So Klaus, I wasn't sure whether you were formally asking for a little more time to work on the possible confusion around the terms, or are you okay letting it go in and then working on it through PRs? Perhaps we just need to let it go in and work on it then together. Okay. All right. Any other questions or comments?
In particular, if you have any concern with letting it in, please speak up, because if not, I'm going to ask if it's okay to approve. Yeah. So the one change in terminology is this producer versus provider. I expanded it out to event provider. You can see there on line 66. Yeah. All right. Any last chance? Any questions, concerns? Okay. Any objection then to approving it and making further changes through PRs and issues? All right. Thank you, Mike, for all that. I appreciate it. And whoops, wrong one. I'm going to drop off soon. Okay. So we'll save the GraphQL for later, right? Oh, I can talk about it for three minutes, but that's probably not enough to get into it deep. I don't think there's been any comments on the issue. Yeah. I was on vacation. So that's why I probably hadn't done that. Yeah. Let's hold off on that discussion. I think it slipped people's minds or we need to talk about it on the call, one of the two, and either way, you need to run. Yeah. So we'll see you next week. Yeah. Okay. Thanks, Mike. All right. Clemens, first of all, oh, go ahead. You want to speak up? Well, I just wanted to say that I think you pulled in the discovery spec by mistake. Yeah. And I need to go and see how I can fix this, because I rebased, and I rebased on master because the subscription API spec is already checked into master. Yeah. Well, we can work offline. Just want to make sure that, you know, we can merge it. Yeah. I need to find, so I think I've fixed everything that was in that feedback, including, I just added earlier, the "if present" clause, the extra "if present" clause. Where was that? Scroll down a little bit more. Oh, here it is. It's present there. Yes. You also commented on this, like it says, so that was the last contentious point. And so the last thing I need to do is to figure out how to do the rebasing in a way that this PR no longer lists those two files, and I haven't been able to figure this out.
So as soon as I have this, and I probably won't have this today, I'll let you know, and you can go and merge it. Well, so let me ask a question. Obviously, the PR rebasing is important before we actually try to merge it. However, in terms of the content in here, what do people think? Does it seem like it's right, basically? I think all the normative text is in these two blocks right here, right? And I think your changes over the last day or so were relatively minor, just more syntactical kinds of things. Yeah, it was literally just the "if present". Yeah. Okay. Does anybody have any questions, comments? Francesco, I assume you're okay with this, since I know you've been doing some reviews and going back and forth with Clemens. Yeah, it's fine for me. Okay. Anybody else have any comments, questions? Okay. Is there any objection then to approving this conditionally upon fixing the rebase issue? Okay. Not hearing objections. We will approve that once the rebase issue is fixed. All right. Excellent. Thank you. Technically, that's the end of the agenda, except for something completely different that Clemens wants to talk about. And just to make sure, are there any other cloud events or new spec issues people want to bring up before we move on to Clemens' last topic? Okay. Not hearing any. Clemens, the floor is yours. Yeah, I'm going to make this brief. Amongst the topics that we had on our list of things that we wanted to go and start tackling in this group, and then we picked subscription and discovery first, was schema registry. That's now becoming quite a hot topic. I know that Tim from AWS, as we were having this discussion, also said he'd be interested to throw their schema registry interface into the ring. And so we're seeing, effectively, an increasing need for having a standardized schema registry.
As we have, obviously, pointers to schemas in cloud events, we need to be able to go and store them somewhere. Around Kafka, there's a popular schema registry, which is unfortunately under a proprietary license, that customers are using. And there needs to be something that is unified and open that everybody can use. And so the serverless working group here, and I think in particular the cloud events effort, would be a great place to define a common schema registry. And I would be delighted if we found a sub-working group that could sit together and compare notes on existing schema registry drafts. We have one. And then we can probably come up with a spec that defines a simple model for a common approach for schema registry. The way I think about this, really, is it's nothing more than a CRUD service that allows you to store and then retrieve serialization schemas from a central place, so that you can go and serialize the cloud events payload in Avro at the publisher side, and then as you receive the event, you can go and take a look at the schema URL and you can go and pull this out. Serialization and validation schemas, is the question? Yes, both. I think of that as, ultimately, a text file store that then might be a little smarter about, you know, upgradability and compatibility, et cetera, et cetera; it might have some logic to it. But ultimately, I think the minimal thing is a simple REST API that allows you to store an Avro schema and then reference an Avro schema, and have a mechanism for how you can go and create a URL, which probably has an access token in it, et cetera. So nothing complicated, but something that we can all agree on, that we can all implement, and that then gives a common way to handle those serialization and validation schemas. Okay, there are a couple of hands up. I think mine's up first. So, quick clarifying questions.
It sounds like you were talking about at least defining some sort of specification, but it wasn't clear to me whether you're also looking for this organization or the CNCF to also host a central schema registry. No, I think of that as a software component where we define the interface. And then the schema registry is something that is so specific to particular applications, I don't think this is something where you need to have a grand design for a repository. If someone wants to build one, that's great. And if it adheres to the same interface, even better. But I think of this mostly as a spec effort first and not as a grand registry for all the schemas in the world in the sky. We had that once with UDDI; that didn't work well. Right. Okay, cool. Thank you. Ryan, your hand's up next. I think this is a natural topic for us to cover. Even if it doesn't turn into anything, I think it's something that everyone that I've talked to that is doing something similar has to do anyway. We might as well cover it. I guess one question that I have is, how specific or generic are you thinking of being? You mentioned Avro, but are you thinking that this should be generic and supportive of any kind of schema technology? They all vary in slightly different and interesting ways. Just curious if you have any thoughts on that. I think the schema registry needs to have a notion of the type of schema, but otherwise it's mostly just files. You would store a schema, you would say this is an Avro schema, and then you would store the Avro schema with it. And you should probably be able to go and search for schemas that are Avro schemas, so I have some level of discoverability. But otherwise, schemas in general are typically text files that adhere to some common meta-schema, I should say. And so there might be a facility that makes sure that if you're submitting an Avro schema, the schema is actually a valid Avro schema.
And then it might be, if you have JSON Schema, then it might go and check that this is JSON Schema. There should be extensibility so that the implementation of that same interface can also accommodate any other schemas that you might want. So it's really about creating a common interface that all serialization libraries and validation libraries can rely on. Because ultimately, for cloud events, the way this is all shaping up is we have a multitude of different products which are going to support cloud events through a multitude of different transports. And we will have a way to push into a network of connected transports, where you push an event in on one side and then you get the event out on the other side. One is doing the publishing in C#, the other one is getting the event out in Go. And there should be a common way for how libraries can obtain and decode the schema. And it might be that the way you get at the schema is simply an HTTP GET, and that's probably okay. But since there is no common, there's no definition for how that's working, I think we need to go and create one. So I'm not looking for anything that's enormously complicated, but I'm looking for a convention that is actually as simple as it can be, but one that all the implementations that we make and all serializers can then rely on. So since no hands are up, I'll ask another question. I guess that's the one thing I'm a little confused about. If, in the end, there's a schema URL or URI somewhere in the cloud event that you got, and you just do an HTTP GET on that URL, or whatever transport is specified in the URL for the protocol, why do we need to actually have a spec for the user's side of it? Or do we even need a spec for that? I think you need to have a spec that defines what the rules are for storing that schema, how you can publish, you know, a new schema.
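The minimal store-and-reference interface being described can be sketched as follows. Everything here is hypothetical: the function names, the record shape, and the URL path are illustration only, not a proposed spec.

```python
# In-memory sketch of the minimal schema-registry interface discussed
# above: store a schema under an id, label it with its schema type
# (e.g. "avro", "jsonschema"), retrieve it, and search by type. All
# names and record shapes are hypothetical, not any existing API.

_store = {}

def register_schema(schema_id, schema_type, text):
    """Store a schema document. A real service would also validate that
    `text` is well-formed for the declared schema_type, and would mint a
    retrieval URL (possibly carrying an access token)."""
    _store[schema_id] = {"type": schema_type, "text": text}
    return f"/schemas/{schema_id}"  # hypothetical retrieval path

def get_schema(schema_id):
    """Retrieve a stored schema record, or None if unknown."""
    return _store.get(schema_id)

def find_schemas(schema_type):
    """Basic discoverability: ids of all schemas of a given type."""
    return [sid for sid, rec in _store.items() if rec["type"] == schema_type]

url = register_schema(
    "orders.v1", "avro",
    '{"type": "record", "name": "Order", "fields": []}')
```

The point of the sketch is how small the surface is: a CRUD store plus a type label and a search, which serializer and validator libraries on both sides could agree on.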
Right, no, I understand you may need one from the producer's, from the schema owner's, point of view. Yes. That I understand. But from the user's side of it, for the person pulling the schema down just to do validation, do we actually need a spec for that, or is it as simple as: you have a URL, do an HTTP GET on it? You would still, I think there is still a set of rules where you would probably want to say, this is an Avro schema, and interestingly enough, Avro for its schema has no defined MIME type. So there are some little things that you still have to get commonality around that are not as easy as you think, which are solved in these island solutions. But there is no standard around what the schema registry for messaging ought to look like, or for eventing ought to look like, and I think it would be enormously helpful if we had one. Okay, thank you. Ryan, is your hand old or new? No, I was just going to say, that's one of the things I felt was somewhat hand-wavy about the proper cloud events spec, in that there's a URL, but there's no way to, or at least it's not specified, how you interpret what gets returned. I think, Clemens, it's one of the things that you're getting at here. One of my questions, and I don't have a strong opinion about this, is: is this its own thing, or is this part of discovery? No, I think this is distinct, and the reason why I think it's distinct is because that registry, so I'm not sure even whether we should constrain this to cloud events, per se, but I think that's something that might be useful for serverless. I think it belongs in our group, but it might be something that can stand in parallel to cloud events, but it's useful for cloud events. Let's put it this way. Yeah, actually, I think my only concern there is if it's too generic, and if it's only specifying things like those little nuances, like the content type, what's actually driving it, what it's responsible for and its shape.
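The "hand-wavy" consumer side raised above, a schema URL whose response has no agreed interpretation, can be illustrated with a sketch. The fetch is stubbed out, and the `x-schema-type` label is a hypothetical convention standing in for the missing common metadata (Avro, for instance, has no registered MIME type); nothing here is from an actual spec.

```python
import json

# Sketch of the consumer side: given an event whose dataschema points at
# a registry, fetch the schema and use it to interpret the payload. The
# fetcher is a stand-in for an HTTP GET, and "x-schema-type" is a
# hypothetical label for the schema technology, since no common one exists.

def fetch(url):
    """Stand-in for an HTTP GET against a schema registry; a real client
    would issue the request and read response metadata here."""
    return {"x-schema-type": "jsonschema",
            "body": '{"type": "object", "required": ["x"]}'}

def interpret(event):
    """Resolve the event's dataschema URL; return (schema_type, schema)
    so a validator can be chosen based on the schema technology."""
    resp = fetch(event["dataschema"])
    return resp["x-schema-type"], json.loads(resp["body"])

event = {"specversion": "1.0", "type": "com.example.a", "source": "/demo",
         "id": "1", "dataschema": "https://registry.example/schemas/a.v1",
         "data": {"x": 1}}
schema_type, schema = interpret(event)
```

Without some agreed label like the hypothetical one above, the consumer cannot know whether the bytes it fetched are Avro, JSON Schema, or anything else, which is exactly the gap being discussed.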
So one thing I can tell you, and that is, so here's the concrete answer. The schema registries that we see customers coming to us with: so there are several schema registries that are out there, and all of them are doing similar things, but they're all proprietary, and proprietary meaning they are not under an open source license of the sort that you can go and use them as you please. For instance, one of the schema registries that is popular is the one that has been made by Confluent, and Confluent decided to change the license on that thing in December 2018, and has made a clause in the license where it precludes any IaaS, SaaS, or PaaS provider from using that schema registry, which means now customers come and they realize that they are locked into that particular schema registry. There is no standard interface, no common interface that anybody could go and implement, that serialization libraries or that validation libraries could go and adhere to from a client side to speak to a common provider, and that sort of a lock-in for me is unacceptable. And there is no simple enough library, let's put it this way, out there in any of the open source foundations, except Apache Atlas, which is a very large, monstrous metadata project, that solves the problem of providing such a registry. So I'm interested in both having an implementation, and we're doing this here in this project, we're having implementations of things also, so I'm both interested in having an implementation of a registry, but even more interested in having just a common interface that everybody could implement, from the client side as well as from the server side, that is not encumbered by any constraints of proprietary licensing. So, Clemens, I heard earlier on that you talked about having a small number of people go off and do some initial look at this and then bring it back to the team. Yeah. What would be needed for success there?
Um, I think that subgroup should come out with an implementable spec. You talked about AWS participating, right? Do we need representatives from certain companies, certain public clouds, et cetera? Yeah, I would like Tim to know about this and have a voice in it and participate. And ideally, the usual suspects, the biggest cloud vendors, are part of the effort. So I would certainly want you guys, and I would certainly want Tim and Doug to be part of this. Well, it seems to... oh, Eric, are you going to say something? Go ahead. Okay. It seems to me that, just from a process perspective, one thing that might be useful is for you to write up an issue, Clemens, describing what you want to do, so that people who could not make the call can read it, comment on it, and so on. And if there is enough interest to make it another subproject, and people agree that it falls within our domain, I don't see a problem with us starting it and doing it as another piece of work if people want to. My only concern is that I would hope it would not pull people's time away and prevent us from making progress on the other specs. That's my only concern. Yes, it's just something that is becoming more pressing. That's a matter that's really starting to be painful for customers, more so than the subscription and discovery stuff. Specifically as we're adding CloudEvents and customers are using more and more binary encodings, et cetera. So that's becoming a fairly pressing thing. Right. And we will, in the not-too-distant future, have some kind of schema registry functionality ourselves, and before that gets to work, we're happy to make changes early.
It's just that if we're shipping for a very long time, then obviously more and more applications get bolted to whatever proprietary solution we came up with. And so I would like to avoid having a proprietary approach for too long; I would rather go and say, you know, this is effectively the preview, and get to a harmonized solution early, probably even before we go GA. Yep. Okay. Any questions or comments before we move on? So I'll write this up. Yep. Thank you, Clemens. Interesting topic. Thank you. All right. Any other topics for the call today? Otherwise, we'll wrap it up. Yes, there's one. We had our review with SIG App Delivery yesterday from the serverless workflow subgroup. And we were asked, and we already had those projects on our related-projects list, to reach out to Argo and Tekton. Now I did that, and I think Alex Collins from Argo seemed really interested in an exchange about how they define their workflows. From Tekton, I don't have a response yet. But the question now is: if these readouts about how they define their serverless orchestration workflows happen with us, would you guys be interested in having them in the serverless working group, or should we do it entirely within the workflows subgroup? Any comment on that? Not hearing any. I think the workflow call makes the most sense to me, just because I wouldn't want to splinter things too much, but that's just my personal opinion, not knowing much about it, to be honest. Anybody else want to comment at all? This may be a decision more for the workflow subgroup to decide for itself, wouldn't it? Yeah, sure. I just thought this is a broader audience, and if you guys are interested, we could have it in the weekly serverless call. I know this is a fixed date on the calendar, whereas ours is Monday every four weeks, so it might not be a good time for everybody to join.
I just wanted to ask, and I'm good with also these are weekly, and we have our next community meeting in May. So, yeah. Okay. Yeah. I think since no one's speaking up, it sounds like you may just keep it within the workflow subgroup for right now. Yeah. Thanks. Okay, cool. All right. Anything else? All right. In that case, final roll call before we jump over to the SDK call. David Baldwin, you there? Oh, yes. Hello. Doug, I got you. Ahmed, are you there? Yes. Okay, excellent. And Falco, you there? I'm sorry, Nicholas. Wait, Nicholas? Oh, we lost Nicholas. Okay, Falco. I see you coming off mute, you're there, right? Yes, I'm here. And Mr. Scott? All right. Did I miss anybody? I don't think so. Hey, Dustin Ingram here. Oh, Dustin. Yes, thank you. I don't know why I even saw your name, but I skipped over it. All right. Cool. All right. In that case, I believe we're done. Thank you, everybody. And if you want to join the SDK call, stick on the line. Otherwise, have a good day, everybody. Bye. Okay. Thank you, buddy. Yep. Let's just give 30 more seconds until I get started. So, Scott, while we're waiting, how did the Knative community call go? It was overall good. A couple presentations and updates and whatnot. And then an interesting demo on some blockchain stuff. But there was a little bit of networking problems. And I think the rule of thumb should be if you're going to do demos on Knative and or Kubernetes, you don't run it from your laptop while you're presenting from your laptop. Yeah. So, use a cloud. Makes everything better. Yeah. All right. It was interesting. All right. Cool. I assume they recorded it, right? Yeah, yeah. They're going to try to do a more streamlined demo to lace into the presentation that you send to YouTube. All right. Cool. Thank you. All right. Francesco, I think you had the only topics. Oh, that's the only one now. But go ahead, Francesco. You can go first with yours. Yeah. So, first, I'm progressing on SDK Rust. 
So, as you can see from this cool GitHub board, there are actually a couple of things in progress. What I'm still missing is the encoders for HTTP, which are a work in progress, but it's going really well, and I hope to have a release in two weeks. Just a couple of questions for you, Doug. First, do we have any way to promote the new SDK, like a blog post? And I know that on the website, on cloudevents.io, I see SDKs in the top nav bar. How can I add the SDK to the nav bar? Questions like this. I mean, is there anything I should do for this release, to promote the SDK? Yeah, I'm trying to think. Obviously, if you want to do another blog-post type thing like this, I think just submit a pull request to the website repo. Other types of promotion, I'm not sure we have anything else other than this. I mean, obviously, from the CloudEvents Twitter account we can tweet about it if you do a blog, with a pointer to it. Well, yeah, I can do a blog post on my personal blog, but of course it doesn't have the same visibility as blogging here. No, I think you can do a pull request to get it added here. I don't think that's a problem; at least I don't think anybody would have a concern with that, because obviously the SDKs are part of our organization, so it makes perfect sense to do it here. If you just submit a pull request to the repo, I think that should be all that's needed. Okay. Anybody have any objections to that? I guess a meta comment, or meta issue, that we could talk about is: should we have a static blog site as part of the CloudEvents website, to allow blogs to be posted directly on here? Yeah, because having the announcement here, I think it's not right. I mean, we should have some kind of blog. Yeah, I think he's right. So just want to make sure: it would still be under cloudevents.io, right?
Just a separate blog section? Or are you guys thinking about something different? No, I'm thinking about under cloudevents.io, but allowing for static content as blogs. Okay. I'll reach out to... I can't remember the gentleman's name who did the last revamp of our website. I can't imagine it'd be very difficult to do. I'll reach out to him and see if he can pull that together, unless someone else understands the framework the website uses. I just don't know it myself. I think I do. Okay. You want to take a stab at putting together a rough template, Scott? For adding extra blog posts? Yeah, with a permanent sort of spot on the website for blog posts, as opposed to announcements, which is what I think this one... I'm sorry, which I think is what this is more about. This is just announcements, as opposed to permanent blogs. I think this is Hugo, and if that's the case, that's exactly what my blog is. Okay. I don't know, though; I'm not sure it is Hugo. GitHub. CloudEvents. Don't you see my blog post? I did, but I didn't get a chance to read it. I clicked on it and then I got distracted. I apologize. So the answer is no. I think, Scott, that your blog post would be cool to repost on the official CloudEvents blog. Yeah. I'm hoping for more of like a... right? Like, cloudevents.io can't... there's no snark there, and there shouldn't be any. So, because we have this comment right here, I think you're right, he probably is using Hugo. Yeah, I think it's Hugo, so yeah, it's not hard to inject. Actually, it's using the blog plugin, so yeah, this is really easy to do. Okay. I mean, yes, if you could do a PR, then Francesco can do another PR after that to add the very first blog entry. Look at the comment around assets/sass: make blog link display conditional. Yeah. I'm sure we can; this looks easy. Okay. Cool. Okay. There you go, Francesco.
I think the other side of this is understanding how we get content out onto the Twitter account as well, because right now it's just Doug and myself that can tweet about things, and we need a more formal process to make sure we can get things out on a timely basis with the right content. Would it make sense to let people suggest tweets by doing something like opening an issue someplace that we monitor? Or do you want to make it more like they send you and me a ping? I don't know that I have an answer. I think there are professionals that do this for a job. Well, I know; Mark, neither do I. I don't think Mark or I want to make this a full-time job either. So yeah, I think what the pros do is set up a backlog of content that goes out on a schedule; you just add to the backlog and it goes out. So there's constant engagement and it appears that your brand is actually thriving, but it's every, you know, some prime number of hours. Sounds like a bot. I don't know. I guess we could think about this; I just don't have an answer either. The only reason I focused on issues is because that tends to be something that will nag me, right? Or an email from somebody will nag me because it sits in my to-do list. And I don't want people to feel like, oh, the only way to get something out there is to know about some secret process and some secret person to ping. Whereas if it's an issue, then we could say: open it up here, and the owners of this repo are responsible for watching it. Maybe we can add a GitHub Action so that it'll auto-post. I'm sensing abuse. Well, let's think about that. Let's wait until we get Francesco's blog up there; then we can worry about how to formalize the process for a tweet. How's that? Yep. Okay. Okay. So Francesco, is there anything else related to this first issue you had that you want to talk about? No, not really. I mean... I need to bail. I'll talk to you later. Okay. Bye, Mark.
So the other thing is that if there's anybody who wants to contribute to SDK Rust, we need it, because, do you remember, we initially talked about this SDK with two other guys, but they didn't really contribute. So I'm doing this basically all alone. Yeah, that I don't know. It's a recurring theme I'm hearing. Yeah. I really hope that when we advertise it, somebody will pop up. Yeah. That's one of the downsides of being open source, right? You've got to wait for people to volunteer. Of course. Okay. What's your next one? The Java SDK? Yeah. So I'm continuing to make progress on the rework of the SDK Java, but to be frank, I'm mostly rewriting the SDK in one PR, and I don't like it. I mean, the PR that I opened is obviously a Pandora's box, if that's the way to say it in English. Anyway, I changed a couple of bits and I ended up changing everything. So I would love to be more incremental with this change. I know that Fabio also gave me some tips, but nothing more than that. I'm not sure. Do you have any ideas on how I can progress on this? I also know that there is some interest in these changes from my internal side, but I know that other companies are also interested in changing the SDK Java as it is now. Anybody have any comments? I'll second that. Second which thing? That it needs a little bit of a rework, I think. I think this is headed in the right direction. I haven't actually reviewed the PR myself, though. Yeah. I mean, Francesco, to me, if Fabio is okay with the direction you're headed, and it sounds like he is... I'm not following it personally, but based upon this comment right here, it sounds like he's okay with all the work that you're doing. It may feel awkward to have a PR with almost 80 files, but if it's the right direction and he's okay with it, you know. Okay.
Just the thing is that this needs to trigger another major release, because, I mean, we can't do a minor release after these changes; I've changed the core. Yeah. So we need a major release after this. Yeah. I mean, to me, this is a decision for the Java SDK authors to make, and that basically means you and Fabio for the most part, right? I'm still not an author. Well, no, aren't you? See, what's interesting is I think you actually can merge the PR. Check it, because I believe all SDK maintainers are maintainers for all SDKs. Yes, I can merge the PR, but I won't. I mean... Well, at that point, I think you basically need to poke Fabio to do the actual merge, or a review and merge. Well, I would love to have input from people other than Fabio. I mean, also from Fabio, but input from other people too would be interesting, because it's a big, big change. Well, Scott, do you think you might have time to take a look at it? I did. I think that, just like with the Go SDK, we need to take what's there, think about the lessons learned, and make it anew. Like, click on my little demo app, then scroll down, and then punch yourself right in the face, because this is not what you should have to do to send and receive CloudEvents. Those Spring annotations are JAX-RS annotations, right? I think they use JAX-RS in the internals or something, but they're actually Spring classes. Okay, so I mean, if I create a decoder and an encoder for JAX-RS, it will work for Spring too, right? Yeah, that would really help. I think that's the way people really want to use this stuff. And so if you scroll up to the other example... one scroll up, please. Sorry, I got distracted. So this is getting... So basically this demo is: if you POST to root, you store a CloudEvent, and if you GET at root, you get that same event back out. And so you can kind of see, lines 31 to 45 is how you set up with the builders, which is fine.
But the builder... it's not like you can set up a base event and then build more off of that, like a builder or a factory might. So I think the builder ultimately is fairly awkward, because it's very static and not really adjustable. And then the Wire thing is really difficult to use with Spring... this is the Spring Framework... because it has to be in Spring responses to actually send the entity out. Another thing I want to echo about the Wire is that it assumes that headers are a map of string to object. That's not always the case. That's not the case in Spring, is it? Yeah, and not even in Netty. I mean, in Netty, headers are multimaps: a hash map of string to list of strings. That's great. It should be an array: a map of arrays of strings, or a map of a map of a map, or whatever. Yeah. Anyway, I went through this exercise and it was enlightening. I don't really think I can give the thumbs up on trying to get people to use the Java SDK with Spring, because it's too hard. Same here. The fact that I have to get the request body as a string, if you scroll down a few lines, and then slam that into this unmarshaler, but I have to know that I'm going to unmarshal binary mode, doesn't really work. And that's kind of counter to everything we've been trying to do with the Go SDK: you take an HTTP request and you turn it into an event. But for Java, in this implementation, you have to know ahead of time that I'm going to try to do binary unmarshaling. And you also have to know the spec version. Right. Which are things that are supposed to be inside the request and the body. So I would rather see a CloudEvents unmarshaler where you give it the HTTP headers and body and you get back a CloudEvent that's parsed based on what's there, not what you know ahead of time. Makes sense. So I think, yeah, there are a lot of usability things that aren't really how I would use Java.
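The unmarshaler shape being argued for here can be sketched in a few lines. This is a hypothetical Python illustration of the idea, not any SDK's actual API; the only specifics taken from the spec are the `ce-` header prefix and the `application/cloudevents+json` structured content type. It also models headers as a multimap (name to list of values), the Netty point raised above.

```python
import json

def unmarshal(headers, body):
    """Parse a CloudEvent from HTTP headers + body, detecting the mode
    and spec version from what's actually there -- not told in advance.

    `headers`: dict of lowercase header name -> list of values, since
    HTTP headers are multi-valued.
    """
    def first(name):
        vals = headers.get(name.lower(), [])
        return vals[0] if vals else None

    content_type = first("content-type") or ""
    if content_type.startswith("application/cloudevents+json"):
        # Structured mode: all attributes, including specversion, are in the body.
        return json.loads(body)
    if first("ce-specversion"):
        # Binary mode: attributes travel as ce-* headers, the data as the body.
        event = {
            name[len("ce-"):]: vals[0]
            for name, vals in headers.items()
            if name.startswith("ce-")
        }
        event["data"] = body
        return event
    raise ValueError("not a CloudEvent in a recognized HTTP binding")
```

The caller never has to pre-commit to binary versus structured, or to a spec version; those fall out of the request itself, which is the usability contrast with the Java code being discussed.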
Well, Francesco, in the end, like I said, I think it's going to be between you and Fabio, and he sounds okay with it. And I know it makes you uncomfortable that you're not getting enough reviews. But why don't you make a branch? How do you feel about that? I think you should make a v2 branch and work out of that. I think it should be the contrary: the actual master should go in a v1 branch, and then we put master on v2. That works too. Master is the head-of-line development thing. There you go. I can do that. And I think what I'm going to do now to merge this one is first make the core module work and exclude the other modules from the compilation, because none of the other modules will work, and then slowly, incrementally enable the other modules. And we can also start putting in new stuff like the one you mentioned, Scott, about annotations, which I frankly didn't understand... and the JAX-RS marshaller and unmarshaller? Yeah, the Spring team is currently working on official Spring bindings for CloudEvents. I pointed them at the Java SDK to help decode stuff, but they looked at it, and it's not in a shape they can actually leverage in a common way, because it's just too much work. I think, Doug, what you should tell me is: as soon as I have this PR merged, I would love to publish a snapshot. How do I do that? We should start using GitHub Actions. We can store secrets inside of GitHub, and then we can write... they're very primitive containers that you can run inside of GitHub based on pull requests or tags or branches or whatever. So we actually have a demo now in another repo where you can make a tag, and that tag produces a build, does all the testing, and then produces a release for you based on the tag. And then it pushes those images up to a registry with secrets that are baked into the repo. Which was the task I was supposed to look at, and so I did, and it works. There you go.
One thing at a time. I think the shorter answer is... is it Fabiano? Fabio. Fabio has the keys to the Maven repo. Oh, well, now that I look at it, it looks like Travis, when I merge on master, actually deploys a snapshot to Maven Central. So it looks like it does now. So maybe it just needs a version change. But yeah, the long answer is having some kind of automation, because I can publish a snapshot this way, and I can publish releases. Is it cool that the keys are baked in there? I was, yeah, I was going to move away from it real quickly. Oh, that's a public file. Yes, it is. I wasn't quite sure what I was looking at there, and we are being recorded, so I was going to get out of there real quick. Yeah, they should be encrypted. Oh, really? It says public keys. I don't know. But yeah, that's exactly the use case for GitHub Actions. And if you click on... I'll show you where they would end up. If you go to settings, and then you click on secrets: GitHub has this whole thing where you can add new secrets, and they become encrypted in the actions, and only maintainers of the project can see the secrets. So it's pretty cool. Yeah. Okay. I'm a little busy right now, but I think in the next couple of weeks I'm going to try to move the Go SDK off CircleCI and onto GitHub Actions. We've been learning how to use GitHub Actions a little bit. Sounds like you like them. It's pretty nifty. Basically they did an if-this-then-that based on GitHub events, and then it's centered around a Tekton-like flow. Cool. Yeah. Okay. If you manage to also port the containers that run Kafka and the Qpid router while running the tests, it's really cool. Yeah, we actually have a demo of testing Knative in Mink in GitHub Actions. Sorry, in kind. We run kind on the GitHub Action and run end-to-end tests. That's really cool. Inception of containers. Yeah.
It's all sorts of fun. And I think all of that runs in Azure. So it's Azure containers running kind, Docker containers, Kubernetes in a container, all in a container. The fact that it works is insane. It hurts my head to think about. Oh my. All right. Anything else, Francesco, for your issue? I said it. Okay. Dustin, you're up. Yeah. So along those lines, based on what we were talking about two weeks ago, I made a PR to switch the Python SDK from CircleCI to GitHub Actions. And since I'm not a collaborator, it didn't actually run in the PR, but you can see it on my branch, where it runs and completes, aside from support for Python 3.8. And I don't know about the other SDKs, but for the Python one, CircleCI is just broken. I'm not really sure what it's missing, but it doesn't work right now, so all PRs fail. So yeah, I went ahead and did this. So who's the maintainer of this one? I apologize, I should probably know this, but I don't. Is it Dennis? He hasn't done anything in a long time, right? It doesn't seem to me like much is happening here right now. So let me ask a bold question: should I just approve your PR and make you a maintainer? Do it. Yeah, I'd appreciate that. Let's just do it then. I mean, are you okay with squashing it first? Well, I will just make... that's true. Let me just do it the easy way. Dennis will do that. Yeah. It'd be good to see it run on this PR as well, as a collaborator. I'll do it after the meeting. Wow, you managed to get DI as your name on GitHub. That's really cool. Just nice and short. Okay. Dustin, since you are the new maintainer, can you also approve the PR to add 1.2 support? Yeah. Yeah. So that's the other thing. I mean, yep, I agree.
And actually the next topic I want to talk about here is that I sort of have an overarching issue in that repo about the same thing we're talking about with the Java SDK, which is that I think it basically needs a complete rework. So some feedback there would be awesome. I wasn't planning on making a PR that just completely redoes it, but I could do that as well, if that's how we want to move forward with it. But basically I'm wanting to get some thoughts here about whether it's worth doing. It seems like, if we're doing similar things for Java, making it more in line with what the Rust SDK is going to look like would be ideal, I think. Well, what I did is, I mean, I created Java-style APIs, but I kept the same concept of messages, which is the one we applied in SDK Go to marshal and unmarshal back and forth with HTTP, Kafka, whatsoever. And then I shaped the event data structure so that it's not mapped to JSON, because the problem the SDK Java had, and I think it's more or less the same for SDK Python, if I remember correctly, is that the data structure was designed to be easy to marshal and unmarshal back and forth to JSON. Right? I think anything we can do makes this easier. I often hear the Python SDK is very difficult to use. People really want to use Python and Go... or sorry, Python and CloudEvents, but the SDK just doesn't work, and so people get stuck. Yeah, that's basically why I made this issue. Yeah, I mean, it doesn't work because it supports 0.1 and 0.2. I think that's the biggest issue now with the SDK Python: it needs at least to support the latest versions and remove 0.1 and 0.2. Yeah, so as a new maintainer, I think we definitely want to add support now for the newer versions of the spec if the PRs exist. But yeah, I think when we move forward to supporting 1.0, it might be time to rework it a little bit as well. Thank you. Big plus one from me. Same.
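The separation Francesco describes, an in-memory event type that knows nothing about JSON, with format support living in separate codecs, can be sketched like this. The names and shape here are illustrative assumptions, not the actual API of any CloudEvents SDK:

```python
import json
from dataclasses import dataclass, field

# The event model carries no serialization logic; JSON is just one codec
# (HTTP binary, Kafka, etc. would be siblings, not baked into the type).

@dataclass
class CloudEvent:
    id: str
    source: str
    type: str
    specversion: str = "1.0"
    data: object = None
    extensions: dict = field(default_factory=dict)

def to_json(event: CloudEvent) -> str:
    """One codec among many; the event type itself never sees json."""
    doc = {
        "specversion": event.specversion,
        "id": event.id,
        "source": event.source,
        "type": event.type,
        **event.extensions,
    }
    if event.data is not None:
        doc["data"] = event.data
    return json.dumps(doc)

def from_json(text: str) -> CloudEvent:
    doc = json.loads(text)
    known = {"specversion", "id", "source", "type", "data"}
    return CloudEvent(
        id=doc["id"],
        source=doc["source"],
        type=doc["type"],
        specversion=doc.get("specversion", "1.0"),
        data=doc.get("data"),
        extensions={k: v for k, v in doc.items() if k not in known},
    )
```

Keeping codecs outside the event type is what lets new transports be added without reshaping the data structure, which is the complaint about the JSON-first designs above.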
All right, so I can work on a better, more detailed proposal for how that might look before I actually start writing code. And yeah, I'll loop in anyone that's interested. Yeah, these kinds of documents help a lot. Like, when I'm working on the Go SDK, I need to write down what the integration is going to look like from an integrator's perspective, so that we can see how the UX is going to be. Yeah. You should have an invite in your inbox, Dustin, so you should be a maintainer now once you accept the invite. Okay, thank you. Yep, have at it. Enjoy the power. All right, Lance. Well, I wasn't actually planning on talking about anything during the meeting today, but some of the conversation has brought up some questions I have about the JavaScript SDK. I've got a handful of PRs outstanding there, and some of them are big, and I've been wanting to do even bigger ones. And I guess I'm wondering... well, a couple things. One, CircleCI was mentioned as the CI tool for Python; JavaScript is using Travis. And then, you know, GitHub Actions is the direction that some folks are wanting to move. Number one, is there a standard that everything should be using? And number two, how can I get just a little more traction on some of these pull requests, and have a little more confidence that the things I'm submitting that are potentially big might get some visibility? Are you a maintainer? Well, I mean, I could totally do that. I maintain plenty of repositories, but, you know, I've never even really had a conversation with Fabio. I'm kind of new to the organization and everything. So, you know, I understand if that's not necessarily legit right out of the box. But I would like to have a little bit more traction on some of the stuff that we're submitting there.
It seems to me that for some of these things, if you can get Fabio to at least comment on them, that's a step in the right direction. And if he doesn't, it seems like he doesn't have the bandwidth to review them properly and approve them. But he doesn't necessarily seem to be against the direction you want to go. I'm inclined to do the same thing with you that we just did with Dustin and say, you know, we don't want to block things, and if you're going to do the work, go for it. I'll make you a maintainer. Okay. Well, I guess I can start by just pinging Fabio on these outstanding PRs. Yeah, start there, because like I said, in the past he sometimes gets pulled off onto other stuff, but usually if you ping him, he will respond. But if not... I mean, just to give you a little bit of history: most of the maintainers of these SDKs were made maintainers because they were there when the project got started, and they said, hey, I want to work on this. So it's not like they had to meet some minimum bar other than raising their hand. It's unfortunate that we don't have a higher bar than that right now, but if you're going to raise your hand to work on it, and it's going to move the thing forward in the right direction, then that's the best bar we have. Yeah. Okay. And I think we have a couple of other folks at Red Hat who would be interested in contributing to it as well. So. Okay. Cool. All right. Anything else for the call today? I've got to bounce. Thank you. All right. Thank you, everybody. We'll talk next time. Okay. Bye. Same chicken. Bye-bye.