Hello. Good morning. Can you do me a favor and type your name into the chat, and your company if you want to be associated with a company, just so I get it correct on the attendee list. Thank you. And hey, Tommy. Hey, hi, Hines. Good morning, or good afternoon — sorry, almost afternoon for me anyway. I see Jojo — what's your last name? Actually, I think you've been on the call before, haven't you? Yeah, that's what I thought. Okay. Yeah, it's buzzing — I put it on the chat. That's Michelin. Okay, Baldwin. Okay, because I thought I remembered the name Jojo before. Hey Klaus. Hey Doug, morning. Ginger. Good morning, Doug. Good morning, morning Colin. Good morning. Hey Scott, how's it going? Morning. Hey Mike. Hello. Morning. Hello Vinay. Mark — it's been a while. Yeah, I assume you're probably deep in the move or something like that, right? That has been part of the challenge. Ooh, pretty — that's the shot out my office window. Nice. Morning Jeff. Hey, good morning. And morning Jim. Morning. There's someone else there — oh, Nacho, you there? Morning. Vinay, are you there? Clemens — morning, Clemens. Morning, Mark. Morning, Ryan, are you there? I think we lost Ryan. And there's Clemens. You can ignore my Slack message to you. All right, it's three after; let's go ahead, everybody, and get started. I think so. Okay.
We'll circle back around for the late folks later. All right, status. So — interesting — we still haven't actually written up a charter or anything for a SIG because we couldn't figure out what we wanted to do yet. However, the workflow subgroup that we have is trying to move forward to be a sandbox project, and the TOC is trying to figure out which SIG should actually review that going forward, and they're bouncing back between SIG Apps and SIG Runtime. I don't think either one's actually a very good fit, to be honest, so I put a comment out there that maybe we should just bite the bullet and create a SIG Serverless for these projects that don't really quite fit. Haven't heard back from them yet; I'm going to ping the TOC on that thought. So if you guys have any opinions on that, you know, join the thread in the issue that was opened up in the TOC repo for the workflow project. I just thought I'd let you guys know that that may be coming to a head soon. Jim, just a slight reminder that you have an AI, and Clemens, the same reminder that you have an AI down here, when you guys get a chance. — I spent all the cycles this week on getting the subscription API document done, in shape for you all to look at. — Yep, that's what I figured; that's why I didn't bug you, but just a reminder to you guys. Let's see. Thank you, Ryan. All right — community time: anything from the community you want to bring up that's not on the agenda? Moving forward, then: SDK call. We do have one scheduled for today, immediately following this call. I don't know if they have any topics to talk about, but we will at least have a very brief call right after this one — just a reminder for people. I don't see Kathy on the call, or anybody else from the workflow subgroup that wants to give status. Okay, I don't see anybody or hear anybody. All right, so let's jump into the PRs. So Mike, let's go to you first.
I assume you don't want to look at the old doc, right — you just want to look at the PRs? — Yeah, I mean, the doc's probably out of date at this point. — Yeah, I figured. Okay, so hold on a sec. All right, anything you want to bring people up to date on? — So, if you want to go to the thing at the bottom, that big table we added last night. Yeah. So there's a question here that I would like feedback from folks on, which is: is trying to bundle this stuff up into one API call useful, or should we make it two API calls? This first one here — discovery with expand sources equal to false — is the equivalent of "tell me all the producer/type tuples that you know about," whereas expand sources true drills you down into "hey, what are the sources that I can actually get at." If you look through the comment history on the PR, Doug had put together an example of how things could wildly get out of control, and that's kind of what I'm worried about here as well, if you're actually letting discovery be sort of a directory over the sources that you have. And again, I'm thinking about building an intelligible UX, whether it's CLI or UI — you can imagine somebody goes in and clicks through the four or five services on your platform that they subscribe to; they go into the storage one, they see the different lifecycle events for storage things, and when they click one, they're given a dropdown of all of these sources that they could subscribe to. That seems to be how a lot of existing systems work. — What's the scenario that you're trying to solve in the first place? Because in my mind, the completely automated discovery scenario — where you're effectively doing something like a DNS-style lookup — is more important than how that's displayed in the UX, frankly.
Not sure I follow what an automated lookup would be for the discovery mechanism. — I would think that I walk up programmatically to a repository, I have a few criteria, and then I get back a list of subscription managers, effectively, which are providing those elements. I'm not sure there's always human interaction here. — I think in that case, if you have all of the information — I know the source, I know the type — the discovery would narrow you in very quickly to a subset of valid places to send subscription requests to. I guess maybe that's a question that I assumed we were answering: whether we're trying to solve the human interaction case, which, if you look at existing event products in the marketplace, tends to be how this is done — whether it's going through AWS Lambda and clicking a flow, or Google Cloud Functions. — I'm looking for a mechanism that allows me to resolve a type of event to an address where I can go and get that kind of event, with some further criteria. The UX is, from the scenarios I have in mind, not so important. So I'm wondering what the input scenarios are here. Would anybody else on the call like to comment on what we should be solving for? — I guess I'm not 100% sure I'm following, Clemens. I think what you're saying is you want to walk up to a discovery endpoint and say, "I'm looking for this particular event type," right? — Yeah. — Is that not what Mike was allowing for down here? It's just that you get other information too, like whether it's coming from GitHub as opposed to something else.
But if you're looking for GitHub pull requests, the output of this would allow you to get that information, right? — Yeah, I just wonder what the data format is. The way this is displayed is something that I find difficult to standardize. — Oh, you want to see the JSON output? — Yeah, yeah, because if we're displaying these things — in Azure we're going to display those the way we do for every service, for everything in Azure; there's going to be a particular style of list and we're going to do this in a certain way. Then the AWS folks are going to do their style, the IBM folks their style, because these are all things which are happening in the context of greater portals, etc. Everybody has a way to deal with that, and the decision of how those things are displayed is a design decision so far away from me as a service owner that I don't even get to influence it. So for me, trying to standardize the look of it seems like an uphill battle that is difficult — but standardizing what the data structures look like, yeah. — Sorry, this is not standardizing the look; this is me being lazy: instead of giving you lines and lines of JSON output, I'm trying to give something that's a little more human-consumable for talking about this. — Okay, sorry. — So imagine that each one of those lines on that table is, you know, a JSON structure that contains all of the attributes defined further up in the doc. — All right, so I misunderstood that, sorry. — No worries. — Yeah, that was my assumption. I thought this was interesting from an understanding perspective, but I wanted to see this part, and I figured this was just a point-in-time snapshot.
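As an editorial aside for readers: the table rows being discussed — each standing in for a JSON object, with an "expand sources" switch controlling whether the concrete sources are included — can be sketched roughly as follows. All field names and values are invented for illustration; they are not the draft's schema.

```python
# Rough sketch of the two discovery views being debated. Each table row
# stands in for a JSON object like the entries below; the expand_sources
# flag controls whether the (possibly very large) source list appears.
# Field names and example values are invented, not the draft's schema.

CATALOG = [
    {"producer": "storage", "type": "com.example.storage.object.created",
     "sources": ["/buckets/a", "/buckets/b"]},
    {"producer": "storage", "type": "com.example.storage.object.deleted",
     "sources": ["/buckets/a"]},
]

def discover(expand_sources=False):
    """Return producer/type tuples, optionally expanded to sources."""
    if expand_sources:
        return CATALOG                        # full directory view
    return [{"producer": e["producer"], "type": e["type"]}
            for e in CATALOG]                 # collapsed tuple view
```

With `expand_sources=False` a UI gets just the producer/type listing to render; flipping it to true is what produces the potentially huge directory that raises the size and cost concerns in the discussion.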
Well, so if people agree with this human-readable thing and we agree on the concepts, I will go write the JSON, which is, you know, much more of a pain to do. So let's make sure we agree on the concepts and then I can do the detailed work. — Okay, but I do think Mike's original question is a really good one at this stage, which is: he basically has a single API — let me see, where is it — he basically has a single API, but then he has an option that says, well, do you want me to expand it or not, where this is the non-expanded version. But if you expand it out, you're going to start seeing duplicates. — Actually no, there's no second table; it's just that this part gets expanded, which could be really large. — Well, and it can be expensive too, right? If I'm providing an events broker for an entire cloud platform that has, you know, 80 or 100 products, and I have to go through and expand for each event type, that per-product expansion has to be done server-side. So there's a question about what level you should see this detail at. It would be fair to think about this and try to do it in one universal API, or try to do it in two, or even try to do it in three, where I can only get the sources expanded for a single producer/event type at a time. I know some of the folks on the Google side that are thinking about this are concerned about performance and the ability to go get this data. And then the other fundamental question is: are we providing a generic facility for discovery of valid sources? Because if you look at event providers, oftentimes that's a criterion for the subscription — like, I can't go to GitHub and say, "hey, give me all pull requests that I care about."
I have to go say, "give me pull requests for repo ID seven," or whatever their ID structure is, and having the facility in discovery of knowing which repos I have access to is an interesting thing to think about from a human-interaction place. We have a couple of hands up, so Ryan, I think you might have been first. — Yeah, I was going to say I share the concern about performance, because the number of sources could potentially be unbounded, and returning them all continuously in a single result set for all producers might get a little bit unwieldy. So what I was thinking is maybe to think about this in a more RESTful way, where instead of query parameters there are some resources within the URL that would, you know, scope the sources to a particular producer or type. At least that's how I was thinking about it. And you could potentially expose an endpoint where you could get a broader set of sources, which maybe functioned more like a search endpoint rather than a REST endpoint. But those are just my thoughts. — No, that's really good, thanks. And Jim, I think you're next. — Yeah, I guess I was sort of thinking the same thing as you guys were talking, and I wonder if this would be better represented as a GraphQL-style endpoint, because there could be so many different ways that you might want to query this that I think it becomes really problematic to try and enumerate all the different access patterns through a classic REST-style interface. So, you know, modeling it in terms of the entities and how they're related, and then exposing that through GraphQL so that you can query it in whichever way makes sense to you, might be another option. — Klaus, you're next, and then I'll go. — Yeah, I mentioned it already in the comments in this PR.
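Ryan's resource-scoped alternative can be sketched as URL shapes like the following — purely illustrative paths, not anything the group has agreed on:

```python
# Illustrative REST-style scoping for discovery, per the suggestion of
# putting resources in the URL rather than query parameters. None of
# these path segments come from the draft; they only show the idea of
# narrowing a potentially unbounded source list via the URL structure.

def source_path(producer=None, event_type=None):
    """Build a scoped discovery path instead of one giant result set."""
    parts = [""]
    if producer:
        parts += ["producers", producer]
    if event_type:
        parts += ["types", event_type]
    parts.append("sources")
    return "/".join(parts)
```

An unscoped `/sources` could then behave like the "search endpoint" mentioned in the discussion, while the scoped forms keep each response small.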
So I wonder if the term "producer" is the right one here — it confused me in the beginning, because in our terminology we have defined producer as, quote, "the specific instance, process or device that creates the data structure describing the event." Doug, I think you proposed the term "source type" — I don't know, I found that more intuitive, I have to admit. — Okay. I was going to comment — Mike, I know you and I were talking last night, and this sort of seemed okay to me then, but the more we're talking, the one query that I'm a little unsure of is: how do I just get the list of types, regardless of the producer or source or anything else? It may not be that big of an issue if there's only one producer per type, but if you have lots of producers... — Yeah. Yeah, I mean, we can certainly think about different ways to model this, right — if we need that, discovery becomes a collection of APIs instead of a single API. That's possible. So you can imagine, say, "hey, give me all the types," and from that — I'm just going to use "producer" because we haven't agreed upon a term yet — I would also get a list of producers that might have that particular type. Because in the end it matters, right: I want storage "object created" events, but whether I'm getting that from Google Cloud Storage or from S3 matters. — Right. Yeah. The only reason I was thinking slightly differently was in terms of "what are the events that this guy is going to shoot out at me," and then I can go figure out who's actually producing them. Because when I actually subscribe, yeah, I'm going to want, you know, AWS versus Google versus IBM, even though they expose the exact same type. — I think we're assuming that they'll have the same type, but... — Yeah, different.
That's the plan from the subscription API perspective. So the subscription manager is what we call our thing that is responsible for managing subscriptions, and ultimately also for distributing events. That's the entity that you tell that you want events, and it's also the entity ultimately responsible for getting the events to you in some way. I think one of the roles of the discovery mechanism is certainly to resolve input criteria to the endpoint address of a subscription manager, or subscription managers. So I might actually come in with criteria that say: here's my GitHub repo address, which is the source, and here's the type of events that I'm expecting; please give me the network endpoints where I can go and subscribe to those kinds of events. That's a resolution step I'm expecting out of this discovery mechanism. And I also like the GraphQL idea — to basically make this a graph — because you might come with different kinds of criteria, which in the simplest case may be based simply on source and type, but may also include all kinds of criteria from the metadata that describes the source more closely, rather than just the URI that it is. That's kind of how we set up the subscriptions API: we have a reliance, more or less, on discovery to do the work of providing that level of lookup capability, so that it ultimately resolves to a network URL. — Yeah. Even in the way I currently have it specified, if you had that level of granularity — I've got the source and the type — it would give you the information you need to create a subscription. But that doesn't mean — I think what we're hearing now is that we need to think about how to model this in a slightly different way. But I hear that use case as well.
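The resolution step described here — criteria in, subscription-manager endpoints out — amounts to a lookup like the following sketch. The registry contents and field names are invented examples, not the draft's data model:

```python
# Sketch of the lookup being described: given a source and an event
# type, resolve the endpoint(s) of subscription managers that can take
# a subscription request. The registry below is invented example data.

REGISTRY = [
    {"source": "https://github.com/example/repo",
     "type": "com.github.pull_request.opened",
     "subscription_manager": "https://events.example.com/subscriptions"},
]

def resolve(source, event_type):
    """Return subscription-manager URLs matching the given criteria."""
    return [e["subscription_manager"] for e in REGISTRY
            if e["source"] == source and e["type"] == event_type]
```

Richer criteria (metadata about the source, not just its URI) would turn this flat match into the graph-shaped query mentioned in the discussion.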
So it seems to me that what we really need is for people to comment on the PR about the exact set of flows, or data paths, that they expect to follow to get the information they need out of the system. Right? Like, for example, Clemens, you want to start with type, or maybe source or something, and from there you want to get, ultimately, to the URL you're going to submit your subscribe request to. But to get to that URL, we need to know the exact bits of data and the order in which you want to traverse them, right? So it seems like we need people to comment on there, so Mike can use all that as input to figure out whether it's one query versus multiple queries or whatnot, right? — If you want, just ping me on Slack — or have a conversation or video chat one-off. — Yep. So let me ask a slightly different question. Obviously this is a really large issue for us in the spec, and it needs to get resolved at some point, but is resolving this issue a blocker to merging this pull request? In other words, can we merge the pull request — we could do it today — and assume that we're just going to iterate on this through additional PRs? Or do people feel like we need to resolve this first, before we even have the first draft as a markdown file in the repo? I'm inclined to say we merge, but go ahead, Jim. — Yeah, I was about to say — I haven't looked at this one in its current form in the repo. Is it so covered in comments that you can't really follow it? I think that's where I tend to get lost with big PRs. — Yeah. Mike, correct me if I'm wrong, but I think most of the comments — not all — are relatively minor in scope, more like syntactical kinds of things.
I mean, if people want, what we could do — I don't think there's any real rush — is wait till next week, since we didn't warn people we were going to possibly merge it this week. We can give people another week, and then do an official yea or nay next week on the call. — I think, you know, it's fine to wait until next week. It's reasonable for me to have something slightly different to look at next week; I don't know if it'll be GraphQL or not — I have to go learn some things. — Okay. Whether it's GraphQL or not — I don't remember who it was, but somebody was suggesting a more resource-centric model — whether it ends up being GraphQL or just a more structured resource-style model, I think you still need to have a view on what those resources or entities are and how they map to one another. — Yeah. Yeah, I mean, basically I've done it very flat right now. — Yeah. Just taking some notes here. Okay, so it sounds to me like we may want to wait one more week — Mike, you have some things in your head that you want to do — but then maybe we can look at possibly merging it next week, just as a baseline to keep moving forward. So everybody think about that. We obviously can still choose to wait and not merge it next week, but I think it'd be really nice to get a rough draft like this in there, because I think it's starting to shape up really nicely. And Ryan, your hand's up. — Oh, sorry, I meant to lower it. But yeah, I'll just plus-one that — I'm sure a lot of other folks are in the same boat — I haven't had as much time as I would like to spend looking at these things, so another week would be appreciated from my side. — Okay, that's good to know, thank you.
In case it's not clear — because I know there are times when, for new people on the call, depending on how things work out, it may feel like we're trying to rush some things — even though we only meet once a week and we have these deadlines of, you know, getting changes in by Tuesday and stuff like that, it still could sometimes feel like we're rushing things. So if at any point in time anybody wants more time, please do not hesitate to raise your hand and ask for more time on stuff. Scott. — Yeah, we could always merge it, and then, because we're not cutting a release, we could create issues to raise questions, and make PRs against this thing — because I think it's a little hard to help edit this text when it's in PR form. — That's one of the reasons why I was thinking maybe we'd merge it this week, but I think we've heard from enough people that either Mike wants to make some changes or people want a little more time to review it, and we didn't really warn people that we might merge it this week. So I think, out of fairness, we'll give it one more week, if that's okay with everybody, but then we'll try to push for merging next week. That way we have that baseline. — That's fair. Good by me. — I'm not hearing any objection. Okay, thank you, Mike, for all the work you put into this — I appreciate it. Now let's switch over to Clemens. I assume you want to start with the doc itself and not the API. Clemens, do you want to bring people up to speed, since I think you just put this in yesterday — not too many people have probably had a chance. — Well, I put this in on Monday, I think, and then I added the OpenAPI document yesterday. — Okay. — Yeah, so this is effectively just the markdown version, more or less, of the cleaned-up version of the document that I had shared. You can read this yourself, and you should. Basically — if you scroll — I think I talked through this already, but just to remind people.
So we have a bit of notation that we define, we define what an event subscription is, and I cleaned up some of the terminology. In particular — I think I mentioned this on the last call — I clarified what we mean by pull-style and push-style delivery; that's at the bottom of the screen right now. There we say, in a more wordy way, what we mean by pull and push: pull means the consumer solicits the delivery, versus, on the other side, the subscription manager initiates the delivery — that's push-style. And they're typically a little different in terms of how they're set up, because for allowing the consumer to solicit delivery, you need a particular protocol model and you need gestures for that that are anchored in the protocol, and so, subsequently, underneath this I'm explaining those mechanisms. The reason why we're making the distinction — and this whole discussion is ultimately about conformance — is that the goal here is to allow someone who builds a CloudEvents solution that is just using an AMQP broker, or just using an MQTT broker, or just using Kafka, to be compliant, because those are pub/sub infrastructures to some degree. It should be possible to build a pub/sub solution just with AMQP, which means we need to have a house that's big enough for AMQP to fit in, but then also for a subscriptions API to fit in. What we are defining here is this push-style delivery, where we effectively have some software entity that we configure, and it then calls out into webhooks and delivers events — and that's obviously a little different from an AMQP or MQTT broker. So I'm trying to build a house that's big enough for all of those things to coexist and to be able to claim conformance with the CloudEvents subscriptions API or subscription mechanism, because conformance is, in many places and many industries,
something that customers look at — and ultimately this is about interop. So that's why I'm making the house a little bit bigger and having, effectively, two — if you will — competing definitions of what a subscription is. I'm enumerating the ones for MQTT and AMQP, NATS and Kafka, which have those mechanisms built in. Scrolling down: I reworded and cleaned up some of the wording that we had in the initial working drafts, and also mentioned that HTTP doesn't have that, and therefore we need an API, and here I'm now describing what that API is — first in the abstract. It defines what the subscription object looks like: it has an ID, it defines what the delivery protocol is, and then refers to protocol settings, which are further down. The sink is the property that holds the network address, effectively, and then we have filters. In the following sections we have the protocol settings — if you scroll a little bit further. So these are the settings that exist for HTTP. I'm probably going to kill the proxy settings — I'm not completely done with this yet. The proxy URL and proxy credentials, even though we discussed those, I'm thinking we're going to toss, because those are not per-endpoint considerations, I believe; they are rather considerations of the subscription manager per se, which means they're not in the right place here. And if we kept them, I would have to have them for MQTT and AMQP, and also for the WebSocket binding, separately — so I rethought that and will probably drop those. Then for MQTT we have the necessary settings, likewise for AMQP, and then for Kafka and for NATS. Then we have the filter dialects. We want to allow multiple kinds of filter dialects — potentially even a SQL-style dialect, etc. — but the one that we're going to define as required is the simple dialect here, which allows you to do exact matches, prefix matches and suffix matches. And then, at the bottom, we define
what that filter looks like, and then we have a few examples of these filters in here. Then follow, effectively, all the API operations that we have planned, describing create, retrieve, query, update and delete in the abstract — what the create operation should do, and so on. The goal of doing this in the abstract, and not just in the OpenAPI document, is that we also want to have a mapping — certainly for AMQP, which allows you to create these sorts of APIs — so that you can effectively manage relationships of that sort through an AMQP interface as well. So the create operation obviously creates a subscription, and retrieve is a simple get. Then, further down, query is obviously a get across multiple, and then we have update and also delete, all similarly in the abstract. Then there is an HTTP binding, and the HTTP binding is listed here as TBD, but there is an OpenAPI document — that is in the other document — which effectively describes the HTTP mapping of that API as I've defined it. It already maps most of the error conditions and has the subscription object mapped in the schema section, so effectively everything that I've specified in the narrative in the spec is already in this OpenAPI definition. And — as I'm just seeing — with my trademark typo that I always make, at the bottom. Yeah, I do this; if the document is not long enough, I do this very often. So that's effectively what I have, and I would ask you guys to start looking at it. Ultimately, I think the OpenAPI definition is going to be the most contentious part, because I think that some of you already have some sort of subscription API where you can go and configure a push mechanism. And I would love to —
So, look at this as an initial proposal, and this is something where I even ignored any prior art we have at Microsoft — literally, we just discussed this, we came up with some object model, and I just put that object model into an OpenAPI definition without looking at the prior art. And if you already have something like this — if you have a subscription API — I would like for you to take this as a proposal and effectively compare it against what you have, and then make suggestions on how we can go and change it. That said, I think it's most productive — and that also goes for the other document — that, since these are working drafts, we just go and commit those as they are into the repo and then handle them with comments, with issues. That might actually be a more productive way of dealing with this than having a single PR hanging around for the longest time and having ten discussions break out on different parts of the document. — All right, thank you, Clemens. So I think the proposal here is to see if we can merge this one next week too — you know, let you guys have another week or so to look at it. But in the meantime, are there any questions or comments for Clemens? Nothing? That's quite unbelievable. I suspect most people haven't had a chance to look at it yet — I only got a chance last night, so I apologize for not doing it sooner. Okay, well, I'm not going to force you guys to look at it now or anything, but obviously take your time to look at it. The goal here is to try to merge both PRs next week, so unless you guys find something pretty significant, we should probably assume that we're going to try to do that and then work them through issues and pull requests and stuff like normal. Okay. Any last-minute questions or comments on either document before we move back to the rest of the agenda? Okay. Cool. Well, thank you guys very much for putting together the drafts.
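Pulling the walkthrough above together as an editorial aside — the subscription object (id, protocol, sink, filters), the required "simple" filter dialect with exact/prefix/suffix matches, and the abstract create/retrieve/query/update/delete operations — a minimal in-memory sketch might look like this. The field names follow the narrative above, but the details are assumptions, not the draft's schema:

```python
import uuid

# Minimal in-memory sketch of the abstract subscription operations and
# the "simple" filter dialect (exact / prefix / suffix matches). Field
# names (id, protocol, sink, filter) follow the draft's narrative as
# described on the call; everything else is an illustrative assumption.

def matches(attrs, filters):
    """True if the event attributes satisfy every filter (ANDed)."""
    for f in filters:
        value, want = attrs.get(f["attribute"], ""), f["value"]
        if f["kind"] == "exact" and value != want:
            return False
        if f["kind"] == "prefix" and not value.startswith(want):
            return False
        if f["kind"] == "suffix" and not value.endswith(want):
            return False
    return True

class SubscriptionManager:
    def __init__(self):
        self._subs = {}

    def create(self, protocol, sink, filters=None):
        sub = {"id": str(uuid.uuid4()), "protocol": protocol,
               "sink": sink, "filter": filters or []}
        self._subs[sub["id"]] = sub
        return sub["id"]

    def retrieve(self, sub_id):          # simple get of one object
        return self._subs[sub_id]

    def query(self):                     # get across multiple
        return list(self._subs.values())

    def update(self, sub_id, **changes):
        self._subs[sub_id].update(changes)

    def delete(self, sub_id):
        del self._subs[sub_id]

    def sinks_for(self, attrs):
        """Where an event with these attributes would be delivered."""
        return [s["sink"] for s in self._subs.values()
                if matches(attrs, s["filter"])]
```

The HTTP binding in the OpenAPI document would map these operations onto POST/GET/PUT/DELETE; the AMQP mapping mentioned above would carry the same operations over a different protocol.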
Hold on a minute. All right, cool. All right, so now let's get to a really easy PR first. So, Nacho, I believe you proposed adding a link to the Google Cloud Pub/Sub binding for CloudEvents. — Yes, that's right. Very easy little PR. — Okay, any objections to doing this? I believe it meets the criteria that we have. Any objection to approving? — I'm sorry, what was that? — Let's add it. — Okay, just making sure. All right, thank you guys for that work, appreciate it. Approved. — Thank you. — All right, this next PR, I believe, is actually close — we talked about this PR last week. This is the one about how to determine whether someone should even try to parse a binary message as a CloudEvent or not. There's an HTTP section here — let me hide these. I talked about this text last week, and there were some minor wording tweaks; I think we actually reviewed those minor tweaks last week, and everybody seemed to be okay with the general direction. So I made basically the exact same textual changes in AMQP, Kafka and MQTT. I'm not going to push to merge this today, because I just made the changes about an hour or two ago, so there's not enough time. It's basically the exact same text — the only changes are things like "properties" versus "headers," depending on the protocol — but please take a look when you get a chance. I would like to merge that next week. Hopefully it should go straight through, but please look at it when you get a chance later in the week. Hopefully that'll be easy. And I think that's it in terms of open PRs. So let me open it up: are there any other topics that people would like to bring up on the call for discussion, on anything? Okay, in that case we can adjourn, and we'll jump over to the SDK call in about a minute or two. All right. Everybody, we'll talk again next week, and please do review the open PRs for the draft specs. Thank you, everybody. — Thank you. — All right.
Let's jump over — Mark, you're still there, right? — Yes, I am. — We're waiting — there's a PR I wanted to get your take on. If you get a chance, can you take a look at this one? Luke wants to have the makefile pull down a binary from some website, and I'm sure it's probably perfectly safe — I just personally get really, really nervous about pulling down an executable during a make step. I mean, I realize it happens, but if something obnoxious gets into that binary, it could do some nasty things to people. — Yeah. — I could just be paranoid; that's why I wanted to get your take on it when you get a chance. — All the random binaries on the internet are safe, right? The correct way to do this is to check for the existence of that binary and run it if it's available, and leave it up to individuals to install either the binary or from source as they like. But it should issue a warning saying the binary is not available, can't do link checking. — Do you want me to comment back on that? — If you could, yeah — just to get someone else's take on it, and I like that idea a lot. Thank you. Okay. So now the SDK call. All right, what topics do you want to discuss? — Code signing. — Code signing. — So we have a real problem that needs to be discussed here, because I think we need to have a solution, and I don't know what that solution looks like. .NET requires code to be signed, and there are two levels of code signing. There is strong naming, which is effectively binding the assembly to the owner through a private key — which is sort of a code-signing thing, but it's just a public/private key pair mechanism. And then there is code signing outright — that's Authenticode. Both of those mechanisms are fairly popular in the .NET space, and the strong-name mechanism is something that is actually required by many runtime environments: if you don't have a strong name, the code is not going to be executed.
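Going back to the link-checker PR for a moment: the behavior Mark recommends — run the tool only if it is already installed, otherwise warn without failing — is easy to sketch. The tool name below is a placeholder, not the actual binary from the PR:

```python
import shutil

# Sketch of the suggested makefile behavior: never download a binary
# during the build; check whether it's already on PATH and, if not,
# warn and skip instead of failing. "linkchecker" is a placeholder
# name, not the actual tool in the PR under discussion.

def maybe_run_link_check(tool="linkchecker"):
    path = shutil.which(tool)
    if path is None:
        print(f"warning: {tool} not available; skipping link checking")
        return False
    # here the build would invoke the tool against the docs
    return True
```

The same logic fits naturally in a makefile as a `command -v` check guarding the link-check target.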
So the runtime is enormously picky about that from a security perspective. The problem is, I don't know that I can just go and make up a key pair, because the code is not ours, and it's not clear that Microsoft would be authorized to even be in possession of that key. And that's true for a certificate — for a proper code-signing certificate — as well as for those private keys. It would have to be neutral infrastructure, CNCF-owned and CNCF-run, that allows us to do the code signing. We have brought this up through our channel with CNCF folks, but I'm not sure we have a proper solution for that yet.

So ultimately the goal is — we're currently building the assemblies out of this repo through the pipeline I set up, and ultimately the key being used to sign would have to be managed by the CNCF; theoretically I can't manage it. So we're talking just about the case where we want to make the binaries downloadable someplace, as opposed to asking people to build them themselves, right? Yeah — for the bits to be usable, for Java they need to be in Maven, and for .NET they need to be in NuGet. They need to be available in the package managers; otherwise that's just not working with most people's workflows. Would we need to make sure that the build is done by the CNCF as well, or could one of us actually do a build? What I'm worried about is really the protection of the private key. I get that — I think I understand the code-signing problem you're describing — but I'm also wondering: to say it's signed is one thing.
But don't we need to verify that the person doing the build actually built what they said they were supposed to build, as opposed to some virus? Yeah, exactly — that is the interesting problem, and I have no insight into how the rest of the world and the CNCF are doing this with SDKs. We can punt and say we're not going to have binary distributions — you guys go and sign your own thing — but then we obviously have the problem that everybody's going to build their own binaries and ultimately someone's going to sign them. Which means that for us to be able to put something into NuGet, and to be able to use it in production at Microsoft, it simply needs to be signed — and then it would have to be a Microsoft.* namespace, and then it's no longer CNCF. So the concern isn't technical; it's really one of ownership. Since it's owned by the CNCF, there would have to be some infrastructure, owned and managed by the CNCF, that protects the private key and probably also executes the build.

What you're discussing is likely being solved for Kubernetes and other distributions, so we should figure out what their best practices are around that. Isn't Apache's code signed, too, so you can have a local mirror and sign that copy? Well, ultimately we need to have one assembly, one jar file, in Maven and in NuGet and the package managers — and if we have a JavaScript API it needs to be in npm, and it needs to have a signature. So that needs to come from one source; the question is, where is that one source?
So the CNCF has a CI/CD environment — whatever they call it. Have you reached out to them to find out whether they're able to use that for the build? I'm sure they must be able to store private keys for this type of thing in their system. I'm bringing this up because it's an active discussion, and we're happy if any of you have ideas about this. Well, other than trying to leverage their CI/CD system, I don't have anything. Yes — that system is probably what we need. We need to learn what it is, how it works, and how we can make it pop out and publish packages. Ultimately, I would want to go and tag a release, and then that CI/CD pipeline needs to run, create a signed package, and upload the package to the package manager — but I want nothing to do with the secrets and credentials being used to do that upload.

Scott, are you going to have a similar problem with the Go SDK, or is everything you're producing simply vendored in? Yeah, it's all vendored; there's no binary. When you do a go get, it builds locally. I would imagine Java might have the same problem, though, because they're going to produce jars. So can we assume you're taking the lead on figuring out the solution for .NET, and then we can just replicate it for Java? Well, yeah. This might be the first instance of a CNCF project trying to produce code that's not directly owned by some sort of company or group that's trying to produce a product. I'm trying to think — can you download...? Well, you can download things like kubectl, but I think Google just takes responsibility for building that, right? No — it's moved off into the Kubernetes group; they have their own CI/CD pipelines. Interesting.
But yeah, it should also be on neutral ground — it should be the same thing. It is; essentially the Kubernetes group has formed into like a meta-company. But that's still under Google though, isn't it? No, Google gave up the whole pipeline. They donated, I think, like $5 million to run it, and they're just burning through that cash until they're done. Interesting. Okay.

I see your hand up. Yeah, I just want to say that — not sure about .NET, but for Java at least — it's a common thing to use Bintray and sign artifacts with Bintray's key, which is a third-party key, but at least it's a very well-known repository of Java artifacts. And then you can sync to Maven Central through Bintray, which could be another option. Okay. Yeah — for me the question is really: how do we make sure that customers understand that these are the genuine CNCF SDK bits? Having anything signed by someone that is not really the CNCF would, I think, be a little odd. Maybe I'm taking a very unique perspective, but that's how it looks in our world: the company that owns the namespace is ultimately the one giving you the genuine binary. I have this on my own work list, and I'll work with our folks to figure out how to get there for .NET and also for Java. But for us it's a real blocker right now. Thank you, Clemens.

Right — next topic. So I was asked to see if we could create a mailing list for the SDKs, and I did. Here's the URL to it; you can subscribe if you'd like.
I think I still need to mention this in the individual READMEs for each of the SDKs. I mentioned it in the main spec repo, but not in each individual SDK, so that's still an action item I need to follow up on. But I want to make sure you guys are aware of it: we do have an SDK mailing list if you want to subscribe. All right.

Unless you added this — TCK is the next item on the list. I added it, and if I may, I can quickly explain what I meant by TCK. When I was working with the CloudEvents Java SDK, I noticed there were some things I implemented incorrectly: the Kafka extension headers were missing the ce_ prefix. It was there, it was released, but nobody had noticed. The SDKs are writing the tests themselves in their own code bases, which means they may get things wrong as well. So I was thinking: what if the CloudEvents specification provided some sort of TCK — a technology compatibility kit — to verify that SDKs are working correctly? I haven't thought through yet how it could be designed, but at least for popular transports like Kafka, for example, we could have a pre-populated Docker image with some CloudEvents in a Kafka topic, and then we read them, assert, and compare against golden values.

Scott, did you want to talk about your conformance tools? I think that falls into the same general space, right? Yeah. So this is a conformance test suite I've been trying to write that's based on reading events composed in YAML and then sending them out to some transport — right now it's just HTTP. The thinking was that you would bridge to your particular protocol, consume those events on the other side, and then turn them back into HTTP and send them. So there's not really a full "this is how to make a test" yet, but the bones are here. I'd love help. It's written in Go; it doesn't use any SDK.
I didn't know about it; I'll definitely take a look. The send functionality is fairly useful — I use it a lot — and the listen is fairly useful too. Often people are trying to set up these funky curls, and it's a little cumbersome to make a curl call that's formatted correctly as a CloudEvent. `cloudevents send` does that for you. That's nice. Yeah, I'll definitely check it out and maybe try to integrate. Not a lot of time has been spent on this repo, but if we can make it better, that'd be great.

Another possibility I considered was to use Oracle's GraalVM, which is basically a virtual machine for different languages, so that we could reuse the tests in Java, in Golang, in Python, in JavaScript, and a few other languages. We'd write them once with GraalVM, and then test SDKs with language-specific constructions, but the tests remain the same. It's like an integration test. Yes — there's a CLI, or you could interact with it directly in Go, or you could get the binary built and use it there. Take a look and let me know. Thank you.

And Mike, you added something down there. Yeah — sorry, making sure I was off mute. I created a CloudEvents SDK in Elixir. It has both send and receive functionality working for HTTP. I have a couple of things to do for it to be a compliant SDK, in terms of getting better examples and making sure we have full test coverage, but I wanted to see if there's any initial feedback or advice before I go through that last 80% of the work. Just a silly question: what is Elixir? I had no idea. Elixir is a functional programming language that runs on the Erlang VM; it's kind of a cross between Ruby and Erlang. Got it, thank you. Knowing that Scott has that conformance repo might help me a lot — I've actually been using the Go SDK to test my Elixir SDK. Yeah, that's valid too, but the conformance tool is neutral.
It's written in Go, but it's really just very dumb: it doesn't use any other code. That doesn't mean it's dumb — I mean, you could inject other headers into the outbound requests. Yeah, the whole idea is that you string the tests in and out using YAML, if you like YAML.

So what's the process of getting a new SDK merged into the CloudEvents org? I guess that's the larger question I have. We have a very, very high bar: you just have to ask. Okay. Unless someone objects — as long as there's a good sense that the person's going to be there to support it, not just dump and run, pretty much anybody can get in. Okay. I'll go ahead and clean this up, make sure it's conformant, and then come back next week to the larger group to ask that question. Sure, sounds good to me. Anybody have any questions? If you haven't tried Elixir, it's really fun.

All right, anything else you guys want to talk about? Going once... I have a quick question: who is currently working on the Java SDK? I believe that would be Fabio, but let me double-check. At least according to GitHub it's definitely Fabio, but I wanted to check whether he's just maintaining it or actively working on it. I believe he's actively working on it, but he does get distracted often; he tends to come and go. I definitely think he is the person to talk to if you need to talk to somebody. Okay, got it.

All right, anything else? We're at the top of the hour — perfect timing. Thank you, everybody. We'll talk again next time. Bye. Be safe. Thank you. Bye-bye. Everybody be safe. Yes. Bye.