Hey Christian. Morning Doug. How's it going? Pretty good. A bit of a headache today, but yeah, I'm doing all right. Hopefully just a headache and not like an onset of the coronavirus. Fly on the wall today. Okay, sounds good. I'll be on mute until it starts up. Okay, glad. Glad you're there. Oh, hey. Hey, hello. Hmm. Okay. Thanks for the warning. Hi, Eric. Good morning. How are you doing? I'm good, and yourself? Pretty good. And yeah, and Clemens. Hello. Hello. You sound cheery today. I am, because now everybody at Microsoft is also working remotely. Yeah, I've noticed that. Now they'll all feel my pain. Oh, that's a wonderful thing. Yes, I've had many conversations with companies who tend to run, or own, or be in control of, whatever you want to call it, particular open source projects. And they're all located in one particular area. And I keep telling them, have everybody work from home for a while and experience what it's like to not be in the inner circle, or be hanging around the water cooler to be able to have those chats and hear what's going on. It's not fun. So I understand your pain. So as grave as the situation certainly is up there in the Pacific Northwest, I am happy about the consequences of now effectively being in quarantine. Yeah. Yep. Hey, Tommy. Hey, Christian. It's Christoph. Sorry. All right. Yeah, but your prediction was right, though. I think you kind of predicted that they're going to cancel. So there you go. Postponed, but you know, KubeCon EU in July, August, that's tough. Yeah. Yeah. Because by now, everybody will have had to book vacation, and if people go on vacation, then those two things will always be in conflict. Yep. Yep. It's going to be interesting to see how that plays out. Hey, Mike. Good morning. Morning. And Heinz. Yes, I'm here. Thank you. Hello. And Klaus. Yes. Hello. Hello. Did I get everybody? Ginger, morning. 
Good morning, Doug. How you doing? Good. How are you? I'm good. Thanks. All right. Everybody's nice and early today. It actually might be a short call because of a short agenda, just a couple of PRs for you. Actually, while you guys are waiting, since we usually don't start till three after, if you guys want to take a look at, in particular, the first PR in the list, on the string encoding of Kafka headers, to see what you guys think, because there's a MUST that came out of that. And I don't think it's a big deal, because I think that was the intent all along. But I wouldn't mind getting some feedback from people who have touched the Kafka stuff in the past. Morning, Doug. This is Colin. Oh, hey, Colin. Welcome. All right. Prené, are you there? Yeah. Hello. Is this your first time on the call? I apologize if it's not. Can you do me a favor and paste a link in the chat? Here's a link to the meeting notes. Either put your company name next to your name in the meeting notes, or just type it into the chat and I'll add it for you. Just want to make sure you're associated with the right company. Assuming you want to be associated with the company. And then, Prené, are you there? Yes, I'm here. I apologize to you too. Is this your first time on? No, this is my second time. Okay, then I'm sure I already have you. Never mind then. Good. No worries, but thank you. You almost got my spelling right. It's actually B-A-N. Oh, close. No, no, B-E-N-K. I'm sorry. I can fix it. No worries. It's okay. I can do that. I got it. Thank you. I have a whole minute before the meeting starts, so I have time. All right. Is Mister Jeffery there? Jeff? Colin? Oh, good. Thanks. Let's see. Did I miss anybody? Who is N-G-I-R-A-L-B-O? I don't know if that's the full name or if it's a combination of two names squished into one. Oh, yes. That's Nick Lopez, Gerardo. I'm a new member. Cool. Can you do me a favor? 
And in the WebEx, I'm sorry, in the Zoom chat, can you just put the full spelling of your last name and the company you're with, just so I can get it right for the attendance? Yes. Cool. Thank you. Appreciate that. All right. And it's three after, so why don't we go ahead and get started? Got a question for you guys. So a long time ago, when we were talking about a new icon for CloudEvents, I believe Austin took an action item to change the tagline on the sticker and on the icon. I honestly can't remember why people didn't like it, but I figure it's been long enough. No one's complained about it. Does anybody have any problem with just closing out this AI? I can't imagine it really matters that much at this point. Any objection? Okay. Not hearing any, I'll just close that one out. Just trying to clean up the backlog. All right. Community time. Any questions from the community about topics that are not on the agenda? All right. Moving on then. So KubeCon, in case you have not heard, has been postponed until June and July, so we don't need to worry about the planning at this point in time. So it gives us a break for presentation creation and stuff like that. And that does include the day-zero events like the Serverless Practitioner Summit as well. If anything does pop up, I'll try to remember to mention it to you guys, but obviously you'll probably get the emails just the same as me. All right, SDK. I do have one update there. If anybody, like me, unregistered because your company prohibited international travel, they cannot re-register for you. You will have to go in and register again. Thank you for that information. I did wonder about that. That's interesting. Okay. Thank you, Mike. I guess I should ask, does anybody else have any other information about KubeCon that people might want to know about? All right. Cool. SDK call. I can't recall. I think it may have had a call last week, but it was relatively short. 
We do have another call planned for today, I believe, because Scott had something he wanted to bring up today. And also, there is another PR that I just opened today that I wanted to... Whoops. I did that again. There's another PR just for the SDK folks to consider. It's down here on the list. So I do want to bring that up today. So anyway, short version is we will have an SDK call right after this one. It will probably start early, just to warn people. A quick update for that: the Go SDK, we've been working towards version two. Version two drops support for CloudEvents spec v0.1 and v0.2. Okay. Cool. Thank you, Scott. Any questions for Scott? Cool. All right. Thank you again, Scott. Workflow. I don't think they're on the call. Is there anybody else from the workflow subgroup that wants to mention anything? Okay. One thing I will mention is I know that they are in the process of putting together their proposal to go to the CNCF TOC to be proposed as a sandbox project. I believe there's a pull request in the TOC's GitHub repo. If you want to take a look at that, I don't think it'd be surprising to anybody in this group who's been kind of following what they're doing. But if you do have any comments, obviously, please go over there and comment on it, or comment on it in the workflow Slack channel. They watch that as well. Okay. All right. Let's get to the fun stuff. Discovery APIs. So, Mike, you're up first. Do you want to look at the Google doc or do you want to look at your PR? I think that the PR is more up to date at this point. Okay. There you go. Anything in particular you want me to scroll to, or do you just want to talk to it? So, yes, I've been trying to actually put this together and rationalize about it. There was one thing in particular that stuck out, which is the grouping and the fan-out. I don't know if "problem" is the right word. But if you look at the way source is defined in the CloudEvents spec, source can be a pretty highly variable thing depending on the event's provider. 
So, it seems like it would be fairly common for, say, a blob storage provider to have each directory or bucket available as a separate source. So, I bumped that up to an array in the return. You can probably scroll down a bit and see. Yeah, let's look at the structure of, yeah. So, I added a producer as a first-order thing, which would be a human-readable string. The idea here is thinking about UX around discovery. You can imagine somebody building a CLI or a UI where, if you look at some of the notes from the Google doc from several weeks ago, I talked about this idea of a discovery funnel. So, how do users actually come in and discover events? And I think there are sort of two avenues we think about. One is, I know what type of event I want to discover, maybe because somebody told me directly. Or, I know the service from which I want to discover events, and being able to first narrow in by the producer. And then for each type that they produce, collapsing down to an array of sources, this allows a provider to sort of pre-populate this. I don't know if people are familiar with the Google Cloud Functions UI, but it has a really nice flow here, where it's, you know, I pick what service I want to be triggered by, Google Cloud Pub/Sub, and then I get a dropdown for all of the topics that I could be triggered by. And that allows that to be dependent on who's logged in. What resources am I allowed to see? What could I subscribe to? So that's sort of the biggest change, I think, in here. And perhaps something that would be controversial, perhaps not. The new OpenShift UI is super slick in this regard. It does the same thing, but kind of in a graph format. And it's awesome. Okay. Is there anything else you want me to scroll to, or anything you want to highlight, Mike? Oh, Ryan's hand is up, sorry. One other kind of, oh, let's go ahead, Ryan. Yeah, sorry. Remind me, what was this before? Was this just a string? The source? Yes. Yes, it was just a string. Gotcha. Okay. 
Yeah, that makes sense to me. Another thing that came up when we did an internal, you know, Google review on this yesterday is the concept of also having source as one of the query parameters, the usefulness of that, because of the way it's used in the return. Any other questions, comments? Because, like most PRs, I'm assuming you'll need time to look it over. Yeah. One thing that I think would be useful is if you showed a sample output from the queries. Yeah, that's on my list, I think I just have a to-do at the bottom. Or no, I took that out because Travis CI failed, because I had an unfinished example. Yeah, I'll get that in. I kind of wanted to see if anybody had like violent reactions to this before I started constructing examples. Clemens, did you want to say something? Your camera came on there. Or are you just preparing yourself? Sorry. Any other questions for Mike? Okay. Seems to me it's heading in the right direction. So that's all goodness. But to be honest, I only had a chance to look at it this morning. For not having much time to look at it, you had a lot of comments. I apologize. Most of them I think are relatively minor. But that last one, I'm just having to wrap my head around it. But like I said, it could all be me. Did what I say about source make sense there? Kind of. I think I'm still stuck in this mental model of, I feel like there's a static query, and then there's a more dynamic query. And I have this mental model in my head, and I can't figure out whether I'm right and I need to convince you of it, or I'm just flat out wrong and you need to convince me of it. I don't know. So yeah, source, I think, is the more dynamic thing. So like the provider and event types are more static. The one thing I kind of went back and forth on is, should we have two query options? One, which is basically give me the static thing. 
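To make the shape being discussed concrete, here is a purely illustrative sketch of a discovery response with a human-readable producer and per-type source arrays. Every field name and value here is an assumption for illustration; the PR under review, not this sketch, defines the actual format.

```python
# Hypothetical discovery response: a human-readable "producer", and each
# event type carrying an array of sources rather than a single string.
# All names here are illustrative, not taken from the spec.
discovery_response = {
    "producer": "Example Blob Storage",
    "types": [
        {
            "type": "com.example.storage.object.created",
            "sources": [
                "/storage/buckets/images",
                "/storage/buckets/logs",
            ],
        },
    ],
}

def sources_for_type(doc, event_type):
    """Return the expanded source array for one event type, or []."""
    for t in doc["types"]:
        if t["type"] == event_type:
            return t["sources"]
    return []
```

A UI could populate a dropdown from `sources_for_type(...)`, mirroring the Cloud Functions flow described above, with the array contents depending on who is logged in.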
Don't expand the sources. Versus give me the dynamic thing: do expand the sources. What about subject? Because it seems like in this example, TN1222 slash alerts, that should be the subject, and sensors is the source. I don't disagree with you, Scott, but that's not how source is defined in the spec currently. And this is in the CE spec. So I was trying to lean into that. I think there are examples of both. Hmm. Yeah, I think we need to have flexibility to allow both. Interesting. The subject is optional, but the intention of it, when it was introduced, was to do things like you just showed: to be able to fuzzy match on subject where source is static. It's interesting that you don't have subject in your list here. Did you mean to omit it? We did that in the doc. I had proposed striking that. So Scott, are you suggesting that subject should be part of the list of attributes? I think so, because there are sources where, if you wrote a GitHub one, for example, you could have the org and repo be the source and the subject be a specific pull request. Right. So in that case, this is similar. So if you look at one of the other things I added, this source, what did I call it? If you scroll down a little bit, Doug, I think it's the next attribute. Yeah. This source structure. Would something like a subject structure help there, to help you figure out how to interpret what's in the subject field? This is an old argument, but I feel like we need to go back to: there needs to be some sort of way to assemble the source and the subject into a single identifier, like you're showing here, where you might have github.com organization repo as the source, and pull slash ID as the subject. If you want to talk about a canonical link to it: what are the pieces in each, and how do you combine them to make a single discoverable resource? 
So when you talk about specific pull IDs, that wouldn't be part of discovery, because those aren't known in advance. I know, but for fuzzy matches, the shape of it is known. So, the scenario that I can imagine where subject matters: first of all, I think what we defined as a source is the origin of the event. And then the subject really describes something that is inside of that origin of the event. But if you are describing a discovery API, then you're mostly only describing the origin, and you're really describing the associated subscription manager effectively, where you can go and get that event from. Now, subject may still matter if you have some kind of a partitioning scheme, where some events are coming from this subscription manager, and some events are coming from the other subscription manager, but they all are describing effectively the same overarching source. Exactly, because the root source would continue to be github.com organization repo. But then the list of subjects could be pull ID, issue ID, comment ID. Yeah, exactly. But those would be, I think, different entries, like different discovery docs, each different entities, because we're pivoting on the type for those. Well, that's true. So yes, there would be different entries. And if we're referring them to subscription manager endpoints, those would also likely point to different subscription manager endpoints. But if you query for them, they will all be from one source. That is the organization raising events. But they might be, and we're obviously just, you know, speculating here, and GitHub may not be the best example for it, but they are from a greater source, a greater context. And then you would still query for the subject as a further discriminator. So you can. 
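The "canonical link" idea raised here could be as simple as one agreed convention for joining source and subject. This is only a sketch of one possible convention, not anything the spec defines, and the GitHub-style URL is a made-up example.

```python
def resource_url(source, subject):
    """Join a CloudEvents source and subject into one resource URL,
    treating the subject as a path relative to the source.
    One possible convention, not a spec-defined rule."""
    return source.rstrip("/") + "/" + subject.lstrip("/")

# Hypothetical GitHub-style example: source is the org/repo,
# subject is the pull request path.
url = resource_url("https://github.com/example-org/example-repo", "pull/1234")
# -> "https://github.com/example-org/example-repo/pull/1234"
```

The open question in the discussion is exactly whether such an assembly rule should be standardized or left provider-dependent.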
This is the thing I'm struggling with: would you actually expand and materialize the various subjects in discovery? Because I don't think they exist up front. I don't think you want to. No, so that's why I asked the question about it. Does something like defining the subject structure, the way the source structure I have here, make more sense than having the raw subjects? If at all. I would generally not make things so complicated, and just go work with suffix and prefix matching first. So Christoph's hand is up. Christoph, you want to ask a question? Yeah. So maybe I didn't get it right. But from my understanding, or how we discussed it: so I think when we have this GitHub thing, we have the organization, we have the repo, we have the pull request. And my understanding was that we put basically everything up to and including the pull request itself into the source, and then only the ID of the particular pull request into the subject. Or am I wrong? I think you have that right. Because then I can do the discovery on that part, because otherwise it's completely unclear where the boundary is. Because if you have like a tree structure, it's a bit random at what point in the tree you set the boundary between subject and source. Just from a platform perspective, I can see there being a need for corner cases, like partitioning, to do some pre-filtering, also to do some pre-filtering on the subject. But I think that really is a corner case. The 95% case should be, you only look for source and you ignore the subject on the discovery and registration side. Does that help any, Mike? I think we need to distinguish between the discovery and the registration. I totally see filtering on subject being part of the subscription side. 
I want to make sure that we have the right information in discovery so that I know how to make that subscription. I think GitHub is probably a poor example for that, because here we talk about subject as being IDs of things, which again are not known in advance. But think about a blob storage kind of thing: a star-dot-JPEG subject filter makes a lot of sense. Yeah, I was going to bring that up. Like, we understand what the bucket name is, but we don't know what's in that bucket. So it seems like we're kind of falling into a pattern where subject tends to be undiscoverable at install time, and only known at runtime. Ryan, your hand's up. Yeah, I just wonder, is this just a question of how ephemeral these objects that we're hanging off of are? So there are lots of examples within Twilio that can fall in either bucket, right? So for example, an SMS message that's sent through Twilio or any other provider is ephemeral in terms of the lifecycle of when it's active, right? And so if you wanted to subscribe to the events from a particular SMS message, it's going to be really hard to discover it because it happened so quickly. But an account-level event, such as a user logging in or the account metadata changing, whatever it is, that's more persistent and long-lived. So I just wonder if we're trying to define this in terms of how ephemeral it is. I think it's a really interesting point. The question I would ask is, is there a user scenario for subscribing to events for a single SMS message, or would I be subscribing to all SMS in this region, or for this phone number in that case? Yeah, that's a good question. It is something that we do support today, because the way that we do this today is through webhooks: when you make a REST API call through the API, you can specify a status callback, which is basically analogous to what we're talking about with CloudEvents, that is completely informational. 
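Suffix and prefix matching on subject, as mentioned above for the blob-storage case, can be sketched with a tiny helper. This is illustrative only; the actual filter dialect is still being worked out in the subscription document.

```python
def subject_matches(subject, prefix=None, suffix=None):
    """Prefix/suffix filter on a CloudEvents subject attribute.
    A None constraint always passes. Illustrative sketch only."""
    if prefix is not None and not subject.startswith(prefix):
        return False
    if suffix is not None and not subject.endswith(suffix):
        return False
    return True

# The "*.jpeg" case: match any object name ending in .jpeg
subject_matches("photos/cat.jpeg", suffix=".jpeg")   # True
subject_matches("notes/readme.txt", suffix=".jpeg")  # False
```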
And you can do that on a per-message basis. So it's definitely something that is done today; whether that is a minority use case or a majority use case, I can't speak to. Okay, Klaus, your hand's up. Yes, so maybe another example. So in our products, usually a subject is what we call a business object. So a sales order, or maybe a customer, a billing document, you name it. And we have different ways to identify those objects, and the same object might also exist in multiple services or systems. And so for subjects, something like a description of which kind of identifier is used as a subject for that event would be helpful, for example, not the specific value but just what kind of identifier scheme is used. Any other questions or comments? Okay, Mike, anything else you wanted to bring up? Obviously the next step here is to keep looking at the PR and commenting like crazy. Yeah, thanks everyone for those examples and input. A couple of cases I hadn't considered. It sounds like there's at least tentative consensus about not trying to expand subjects in discovery. Yeah, okay. Would it make sense to add, like, here's how you take an event payload and convert it into a resource URL using the source and subject combo? That was one of the things I've struggled with. What do you do with those things to get to the actual resource? Isn't that going to be provider dependent? Like, how prescriptive do we want to be about that? Well, this is a discovery API, right? Oh, for discovery, I see. Yeah. How do you discover how to go and access the thing you just got? Well, yeah, but I don't want to make assumptions that it's always an assembly of source and subject to get to the thing you just got, because we do have dataref, we do have the ability to pass data and attributes in the event itself. It's not always a go-back-to-the-producer-to-ask-for-a-specific-resource kind of scenario. Yeah, yeah. The CloudEvents spec doesn't help in this part. 
Like, once you do get the pull request event, if it's split up into github.com, org, repo, pull, and then there's an ID, you can't really show a link to that thing deterministically. You can't, but that could be part of the data payload of the event itself, if the provider really cared to send that information across. That's true. Okay, anything else? All right, cool. Thank you, Mike. And let's go back to this doc. I assume, Clemens, it's your turn. Let me scroll down to your section. Yeah, we have started to do the, or I have started. We had a call, and mostly what we decided was to also start doing that document. I've done a bunch of work on the first three pages of that, restructuring some of the terminology. So this is now outdated effectively, but we haven't shared the document yet because we wanted to review that in the group first and then share it. So we're going to do that next week. But I found, hopefully for Heinz, a palatable replacement for push and pull. We're still going to use the terminology pull style and push style, but we're going to augment that with some extra explanation. And that is: who's initiating the delivery. In one case it's the consumer initiating it, and in the other it's the subscription manager initiating it, and so that's hopefully going to help to clarify this. But the subscription spec, the part of the subscription document, ended up being much longer than I thought. So this is about me not being able to get the homework done between Tuesday and today. And I'm sorry about that. Okay. I think this filter dialect section might be relatively new, isn't it? Yeah. The filter dialect section is new. That's right. Anything you want to comment on that one or just point to it? We'll discuss it when we have the document brought over. Okay. Any other comments or questions for Clemens already? Okay. In that case, thanks to Clemens and everybody else in the group. I'm looking forward to the PR. 
I got one question for people. Let me just double check here. So, Mike, you created it as a top-level document, which obviously makes sense as of right now. At some point, as long as we keep these specs in the same repo, we are probably going to want to think about restructuring the repo itself to have directories and stuff. Just wanted to check with people: does everybody still think that, as of right now, we should keep it in the same repo, or should we start thinking about a separate repo for these other specs? If we're going to put them out to a different repo, I wouldn't want to restructure and then restructure again. Anybody have any thoughts on that? One just popped up in the chat: I'd like it all in one place. I tend to as well, but I want to make sure no one had any second thoughts. Yeah, I think all in one place. I figured if you wanted me to put this in a directory, that was an easy change later. Yeah, yeah. No, no. I'll go off and think about a possible restructure of everything, and then we can figure out where your stuff goes later. Okay. In that case, last chance. Any questions, comments about the overall doc, whether it's Clemens' section or just everything in general? All right. Cool. Thank you. In that case, let's go to PRs. Okay. So there was a question about... Let's see if I can even remember this thing. Hold on a second. Here's the original issue. It was a question about binary mode and the attributes in Kafka, and whether they're strings or not. I'll let you guys read that for a second, or take a second to read that. And here's the PR for it. Give you guys a second to look at that. The one part that worries me is this, as well as the other one down here. So I know I'm starting to come up to speed, but I feel like I still know next to nothing about Kafka. So I need to count on you guys. But this seemed okay to me. But for anybody who actually played with Kafka a lot, do you guys have any opinion on this? 
Is this something that we need to actually specify, or was it okay before? Clemens, you're coming off mute. I believe that for most usage, that's a no-op, because you're typically going to be using some SDK and the SDK does that already. So the Kafka SDK. But I think that is correct. I'll be happy to go and take a look at that as the reviewer and cross-check that against the protocol aspect. But I think that's right. Okay, I appreciate that. Like I said, I think the MUST is consistent with what we intended. And that's why it's not really a normative change, even though technically it is, because adding a MUST is a breaking change. I think that's mostly clarification, because otherwise things will not work, because the wire protocol requires it to be UTF-8. I'll go and verify that and make a note in the comments. Okay, I appreciate that. Thank you. Go ahead, if you can add something. Looking at the Kafka message specification itself, the headers are a map of byte-array key and byte-array value. So technically, in the protocol itself, it's a byte array. That's why I think we need a clarification. Yeah. And my concern was just whether that was the intention the entire time and we just forgot to add that text. So I think the question is, if you are putting a header in using the regular Kafka JVM client, does that store it as UTF-8? I think that it does. If I recall correctly, when you set a header, the headers inside the JVM client are represented as a list, not as a map, and the list is of RecordHeader, I think that's the name of the object. And it's a key byte array and value byte array. So it's just byte arrays. Yeah, maybe I should check, but the protocol currently, that's how it works. And, for example, this basically came up when I was implementing the Kafka binding for the CloudEvents Go SDK. 
So, because Sarama, which is the Kafka client for Go, gives you an array with a byte-array key and a byte-array value. But you did the implementation for Go already? Yes. So did you do the UTF-8 mapping there? Yes. Okay. Yeah, so again, I'm leaning towards that being right, because that's the best encoding, you know, mapping from strings to a byte array. So I have no objection to having that rule. And I don't think that if we add this it's going to break anything. Yeah. It also follows what's already done in, what's the name of it, the CloudEvents Java client. Okay. Well, then that's probably what we should do. Yeah. So I have no objection. Sorry guys. Let's give everybody a week to think about it, because I don't want to rush it. And the MUST is the thing that worries me. So I want everybody to take a look at it. Or if you know about this stuff, take a look at it. And if you have any concerns, comment before next week's call. Sound fair? All right. Thank you, everybody. All right. This PR is mine. I can't remember exactly where it was, but somewhere somebody had a question about how to discover whether an incoming message is actually a CloudEvent. And obviously when it is a structured CloudEvent, it's easy, because the MIME type, or the content type header, tells you that. However, it wasn't clear what to do if it's a binary message. How do you know whether it's a CloudEvent then? And the spec never actually says. So what I thought about doing was adding some text along these lines here, which basically says: if it's binary, and the four required headers are present, then it's a CloudEvent. Now, you can't be 100% guaranteed of that, because we can't stop somebody from just randomly using our headers while not claiming that they're compliant with the CloudEvents spec; for some reason they could just be borrowing ours. 
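The rule under discussion, attribute values carried as UTF-8 encoded byte arrays in Kafka headers, can be sketched like this. The `ce_` name prefix is the one the Kafka protocol binding uses for binary mode; everything else here is an illustrative assumption, not code from any of the SDKs mentioned.

```python
def encode_ce_headers(attributes):
    """Map CloudEvents context attributes to Kafka-style headers:
    (name, byte-array value) pairs, with values UTF-8 encoded.
    Illustrative sketch of the rule discussed, not SDK code."""
    return [("ce_" + name, str(value).encode("utf-8"))
            for name, value in attributes.items()]

headers = encode_ce_headers({"id": "A234-1234", "source": "/mycontext"})
```

On the wire, the Kafka protocol only sees byte arrays for both key and value, which is exactly why the binding has to pin down the string encoding.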
So it's outside of our domain to actually mandate something like, don't use our headers unless you're a compliant CloudEvent; we can't say that. So that's why I say things like, it would be reasonable for a receiver to assume that. I think that's the best kind of guidance we can provide. So if you guys get a chance, please take a look at this, see if the text sounds right. If you don't like the entire idea at all, obviously you can say that as well. But if you're okay with this general direction for the text, then I'll make similar changes to the other protocols, but I wanted to start with the HTTP one to see how people thought about it in general. And Francesco, to your question: maybe I need to clarify this. When I say mandatory fields, I'm talking about the mandatory CloudEvents attributes. And there's four of them: spec version, what is it, source, type, and one of them I can't remember what it was. Those are the ones I mean by required. Any questions on this one? Yes, that's what I mean. Yeah, I'm sorry, I see the problem now. Yeah. I mean, in general, does this direction kind of sound right to people? That's what the Go SDK does. It says: check the MIME type, or check the media type. If that's not the structured one, then check if there's a version. And if there's a version, then try to start parsing it. Okay, that's good. It's funny, originally the text in here only talked about the spec version field. And then I couldn't sleep last night, so I started thinking about this for some odd reason. And I realized that technically it shouldn't just be that one field; it actually should be all four, since they're all required fields. And that way, you can say, well, if you don't have all four and you only have three of them, well, then you're not compliant with the spec. Therefore, you're not a CloudEvent. And that was my original thinking. 
But if you guys, for some reason, think spec version alone should be sufficient, speak up at some point. The problem with requiring all four mandatory fields, all mandatory attributes, is: how do you distinguish between a malformed CloudEvent and something that's not a CloudEvent at all? What I think the Go SDK does is, if it finds a version, it would give you an event, but if you tried to validate that event, it would fail. Yes, that's how it works in SDK Go. But in SDK Go, the distinction is done just on the spec version field, on the spec version attribute, not on the others. Right, but I think from a spec perspective, if you don't have all four, it's not a valid CloudEvent. Sorry, to clarify: the parsing requires just the version. The validity of the event requires the mandatory fields. Right, so Scott, do you think the text in here is correct from a spec perspective, to say you have to have all four for it to even be thought of as a CloudEvent? I think to do just parsing, you need to inspect content type and look for version. Version tells you how to parse the rest. I think parsing is different than, is it a valid event? Right, that's what I was trying to get to. Okay, well, obviously I'm always open to wording changes, so if you think the wording here needs to change, let me know. But it doesn't sound like a big disagreement with the general direction; it's more a question of four versus one attribute and maybe some wording. But you don't have to figure it out right now on the call. Okay. Okay, but if you do think you're going to have some comments on there, I would appreciate comments sooner rather than later, because, as I said, I would like to make similar changes to the other transport specs. 
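The parse-versus-validate distinction drawn here can be sketched for HTTP binary mode. The `ce-` header prefix is the HTTP binding's; treating `id` as the fourth required attribute alongside spec version, source, and type is my assumption (the transcript leaves one unnamed), and the lowercase-keyed dict is an illustrative stand-in for real header access.

```python
# Required context attributes, as discussed; "ce-id" is filled in here
# as an assumption since one of the four goes unnamed in the call.
REQUIRED_HEADERS = ("ce-specversion", "ce-id", "ce-source", "ce-type")

def looks_like_binary_cloudevent(headers):
    """Parsing heuristic: the version header alone says 'try to parse me'."""
    return "ce-specversion" in headers

def is_valid_binary_cloudevent(headers):
    """Validity: all four required attribute headers must be present."""
    return all(h in headers for h in REQUIRED_HEADERS)

good = {"ce-specversion": "1.0", "ce-id": "1",
        "ce-source": "/example", "ce-type": "com.example.test"}
partial = {"ce-specversion": "1.0"}
```

Here `partial` would be picked up for parsing but rejected on validation, which is exactly the malformed-versus-not-a-CloudEvent distinction raised above.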
I thought about possibly putting this into the main spec itself, but I thought the text might be different or might be transport-specific; that's why I stuck it in here. But if you guys think it might be better suited in the main spec itself, I can try to fit it in there somehow, I just couldn't think of the right wording. So anyway, think about it. Okay, go ahead, Scott. This is exactly why the Golang SDK is dropping support for dot one and dot two: because we kind of flip-flopped on the name of the version field, so parsing the incoming request was difficult. Good point. Thank you. Okay, moving forward then, this one — I'm trying to remember why I stuck this in the agenda. I think I stuck this one here because I wasn't sure what's happening with it. Does anybody remember? Klaus and Scott, you guys were talking on this one. Do you remember what the resolution of this one was? My memory of this one is a bit fuzzy. I thought we decided that the example is an invalid cloud event. The example that he provided here, you mean? What do you mean, the example in the spec? Oh, okay, that is from the spec. Yes, I was wrong. I guess what I'm trying to get to is: do we need to change the spec at all, or do you think that the issue itself is invalid? My vague recollection is that we were saying his interpretation of the spec was wrong, but I can't remember for sure. I mean, I'm sorry, I need to look at the issue. Okay, that's fine. Klaus, anything you want to say? I think it was related to the changes we did only shortly before 1.0, regarding the content type — the data content type — so that if you're using a structured encoding, then you have the default here of application/json. So if you use a JSON encoding, you don't have to specify it, and it's assumed that data is a JSON value. So I think it mainly was a misunderstanding around this. Yeah, I think that's right.
The end result is that these are both valid examples, but they're not equivalent, because in the middle example there, the data is a serialized JSON object as a JSON string. In the third example, it's the foo/bar object as a JSON object. So they have the same data, but they're not equivalent. Okay, so it sounds like we're saying this issue should probably be closed. I'll figure out some way to let them down easy. Okay, thank you guys. Last item on the agenda. Francesco, I think you just added this one. Did you want to talk to this one? Yeah, I think it's just maybe some bad wording, which I found while I was implementing the Kafka specification. Inside the Kafka binding specification, it states that there should be a "key" attribute, while when I opened the link to the partitioning extension specification, the name of the extension is "partitionkey." So that's not clear. Just out of curiosity, assuming it is just a simple typo kind of thing, which one should it be? Just "key" or "partitionkey"? Anybody disagree with that? Hi, sorry, it's Jen. What are we agreeing or disagreeing on? We're saying — Francesco was saying that in the Kafka spec, we talk about an extension called "key," pointing to the partitioning extension spec. But in the partitioning extension spec, it's actually called "partitionkey," not just "key." So there's an inconsistency in the wording for the name of the attribute. Oh, I see. So probably the Kafka spec should just refer to the partitioning extension and not make any comment about the key. That would work too. Yes? Well, the words — if you can open the spec. Okay. Yeah, this paragraph, the key attribute — it was useful when I was implementing the spec, so I like this paragraph, but yeah, the name should be consistent. And it's actually this spot right here, right? Yeah, even in the name of the paragraph. Yeah, typo, I'd say. Yeah, you want to submit a pull request to fix that? That'd be cool. Thank you.
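To make the string-versus-object point concrete, here's a small illustration with generic JSON (not the spec's exact examples): the same foo/bar payload once as a JSON string and once as a JSON object, which decode to different things.

```python
import json

# Two structured-mode payloads carrying "the same data" two different ways.
# In the first, data is a JSON *string* containing serialized JSON;
# in the second, data is a JSON *object*.
event_with_string = json.loads('{"data": "{\\"foo\\": \\"bar\\"}"}')
event_with_object = json.loads('{"data": {"foo": "bar"}}')

print(type(event_with_string["data"]).__name__)  # str
print(type(event_with_object["data"]).__name__)  # dict

# Not equivalent as-is; a second decode of the string recovers the object.
print(event_with_string["data"] == event_with_object["data"])          # False
print(json.loads(event_with_string["data"]) == event_with_object["data"])  # True
```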
Okay, so maybe write on the issue that we agreed on "partitionkey." Yeah, what is today? Oops, actually it's not the 4th, it's the 23rd, isn't it? January, February, March, yeah. Oh, gosh, that's fine. Yeah, okay, here we go. Thank you, sir. Any other questions or comments on this? All right, last one. I was going to bring this up in the SDK call, but if anybody has any questions or comments on this, I just want to bring it up quickly. There was a question about the SDK.md file that talks about what we expect of SDKs, and it basically says, you know, support ongoing changes to the cloud events spec. Obviously, you know, try to keep up with it. But somebody asked a question about what that actually means, right? Does it mean every single version, every single old version, release candidates, point releases, major releases, whatever? So I tried to make it clear that it would be nice if everybody supported at least the latest and n-minus-one major releases, and that for each major release, we're only going to ask them to at least support the very latest point release, and that release candidates are not required, but strongly encouraged. We don't want to talk about that now — I was going to save that for the SDK call — but if anybody on the main call has any questions or comments about that, I wanted to bring it to your attention to look over. Okay, all right, in that case, that's the end of the agenda. Any other topics people want to bring up? All right, last call for attendance. Doug, are you there? Oh, there you are, Doug. Did I miss anybody else for attendance? All right, in that case, we are done. Thank you guys. We'll talk again next week, and we'll start the SDK call in about two minutes. Thanks, everybody. Thank you. Goodbye. Bye. Thank you. Bye-bye. SDK call — can we stay here? Of course. Oh, you are here. Yeah, same Slack, same Zoom channel. So you don't have to move. Everybody else moves except us.
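Assuming the attribute name does land on "partitionkey," the binding's intent might be sketched like this (a hypothetical helper, not the spec's normative text or any SDK's real API):

```python
from typing import Optional

# Hypothetical sketch: if the event carries the "partitionkey" extension
# attribute (from the partitioning extension spec), use its value as the
# Kafka message key; otherwise send with no key.
def kafka_key_for(attributes: dict) -> Optional[bytes]:
    key = attributes.get("partitionkey")
    return key.encode("utf-8") if key is not None else None

attrs = {"specversion": "1.0", "id": "1", "source": "/s",
         "type": "com.example.event", "partitionkey": "customer-123"}
print(kafka_key_for(attrs))                    # b'customer-123'
print(kafka_key_for({"specversion": "1.0"}))   # None
```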
All right, why don't we go ahead and get started? Scott, you're first on the list. I'm on the list? Well, you're on the agenda. Whoa, what'd I do? I have you on the agenda — or did you already say everything you wanted to say about it? No, no. So, Sir Slinky and I are hacking away. I like "Sir Slinky," that's good. We're doing it. That's awesome. So, basically, the work on bindings exposed a lot of performance improvements that can be done if you don't delegate all the way down to the canonical event, which is something we want to promote in other projects. So we're going to rework the SDK to make it easy to be an integrator at that level. So if you would like to be a function and consume events, it works for that. If you would like to be kind of middleware and receive half-parsed message objects — that aren't full events internally, they're not all the way exploded out — that's possible. And then you can forward it on to the next transport hop. So, doing middleware fan-out, or filtering in fan-out, so you have access to the attributes. And then we're going to make it possible for somebody that would like to interact with their transport or their protocol directly, like the AWS use case, and they just want access to be able to marshal the event in and out of some JSON form — that'll be possible. So we're trying to serve these three main kinds of users of this SDK. So it's: pick the format in which you want to see the data. It's basically a sliding scale where you're giving up or taking over control of the protocol. Interesting. So at the function-consumer level, you don't really care how it got to you. You don't even care what protocol it came on. You just care that it's an event. At the middleware level, you really care about optimizing the usage of the protocol so that you can do things like understand how your partition keys in Kafka are working.
And we don't have to plumb all that data through the SDK, because it's too cumbersome. So the work here that Sir Slinky and I are doing is trying to make this easier, so that there can be an optimized usage of the SDK, or it gets out of your way and you can do all the things you do today — it still gives you the ability to take your event and push it onto the wire, but doesn't drop the use case of making it super stupid easy, where it's like ten lines of code to send an event. Sounds good to me. I always like the option of keeping it simple at first and having back doors to get to the more complicated stuff if you really need that level of control. Sounds cool. Yeah. So, that's what's coming. There's going to be a lot of breaking API changes, and we don't care, as we've got a 1.1. I like this move. Yeah. I mean, 1.1 — I made a branch. We can continue that fork of the code, but in two, it's going to be a little more trimmed up. It's going to have a slightly different usage case. The API is going to be a little less cringey, because it got a little feature-creeped. So, yeah, that's what's going on. Cool. And this is just an FYI thing — you're not actually asking for input from the group or anything. This is just to let everybody know, right? Yeah, yeah. This is what's happening. If you would like to come hack and help, we're working on that full-time right now. Cool. Any questions for Scott or Sir Slinky? All right. Cool. Thank you. So, I went back and looked at some of the more recent meeting notes to make sure I pulled out some of the action items, because I kept feeling like we were forgetting to do some. And I came across one that said, make sure we document the leads for each SDK, which conceptually sounds good, so people know who to poke if they have specific questions.
However, before I was going to go off and do that, I wanted to double-check and make sure that we really did want to do that, because do people really want to get pinged directly about stuff? Or do we want to just add text to each readme that says, if you have a question about this, don't hesitate to open up an issue? That way, it's not pointed at one particular person; it's to the group itself. I have a terrible idea. What's that? What if we make up a list, like a working group list? So if you would like to directly contact the owners of this, you send it to the CNCF working group list for SDK Go or C# or Python, and on that list are the current owners, and we can manage that list. So how is that different from opening up an issue? Because you get an email anyway. Have you seen my GitHub notifications? You get about 4,000 GitHub emails. Yeah, I know, we all do. But yeah, that's the biggest issue: you really can't see stuff. I do see mail from lists, because it tends to be written by humans, but the automation from GitHub — I'm sorry, but that is a broken tool. Okay, so I have a feeling I'm going to get some resistance about creating a mailing list per SDK. One for all of the SDKs I could probably convince them of, but I don't know about one per SDK. I think one for SDKs is fine. Okay, if you want. Okay. Before I head down that path, what do other people think? Do you want that? Do you want me to just put your names in there with emails? Or do you want me to look into the one SDK mailing list? Personally, I think that each SDK probably has its own community, so everybody can handle it on their own. I mean, for example, for the Rust SDK, I could do a Gitter channel, because maybe it's better. So we're trying to make a consistent method for all of the SDKs, for people that are coming to cloud events. Well, as long as the readme of the repo clearly states where to get support and which people to talk to, I don't see the reason for having a mailing list for each SDK.
Maybe SDK Go, which has a pretty good user base, should have one, and maybe other SDKs shouldn't — I don't know. Yeah, exactly — this action item started because we didn't know which SDKs supported 1.0, and Doug had to contact everybody, and he had no idea who to contact. Yeah, I did figure that out. The Go SDK and the C# SDK were very well handled and documented. The JavaScript SDK too — I think Fabio was working on that and was pretty active in it — but the rest of them were a bit... Unmaintained, or less maintained, to put it nicely? Well, the Python SDK is unmaintained too. Yeah, we need to get another volunteer for Python, because I think it's a fairly desired language for cloud events and it's fallen by the wayside. The Java SDK — is it actively maintained or not? I think it's fairly active. Because I can volunteer to work on it. Go right ahead. No one's going to stop you. Okay, can I break everything? Yes. Yeah, I know it. Well, I think you have to check with Fabio, right, because Fabio owns that one? Yeah. Okay, so I'm not quite sure where you guys want to go then. Do you want me to ask for a mailing list per SDK, one global mailing list, or no mailing list and just put names in there with emails? What do you guys want? I don't care; this is your guys' call. I have no strong preference either way. I mean, I already voted. I actually thought... So we were talking to a client, and this was for Knative. The client — they're subject to some government regulation — can't talk to us using Slack, because it's too free-form a medium and it's not audited. So the only way they can interact with us is through email. That's hilarious. So I thought that was interesting. And so if there's another entity that needs to interact with the SDK authors in that way, a mailing list that's directly for them probably makes sense.
So it seems to me, though, that the amount of traffic we have right now is relatively small, even for something as popular as the Go SDK. So that's another reason why I'm a little nervous about one mailing list per SDK. What if we start with one mailing list for all SDKs, and then if one particular one gets busy, we can look at forking it off to its own? Yeah, that's what I said. Yeah, that sounds fine. Anybody object to that? Fine. Okay. I can look into that. Okay, thank you guys. Thank you, Vlad. All right. So, last one. Oh, I already have it here. I forgot. So what do you guys think about this? It's just high-level information for people looking to write an SDK. I don't know what a release candidate means in this context; you might have to elaborate a little bit. I struggled with that. Okay, I'll work on the wording of that. But does everything else seem okay? I mean, it's not normative or anything; it's just sort of high-level guidance. So does that mean that with a new major release — if we only have one major release — we can go and drop all the support for all the previous versions, right? I would assume so, but that's up to you guys. I mean, it looks like if the Go SDK drops it, then I will probably go and drop all those versions as well, because that's going to make things much quicker. So I like this; it gives clarity. Scott, did you drop all previous versions, or just one or two? No, we're keeping .3. Okay. So, .3 — and the reason is .3 is very similar, not exactly the same, but similar to 1.0. Yeah. 0.1 is very divergent, so removing that removed a bunch of special-casing. Yeah. Yeah, we have one for C#, since that's our home language at Microsoft, so to speak. Lots of our customers are using it, and we have had customers who have been using cloud events with Event Grid, where we supported 0.1 in the product and we didn't support anything in between, but then we support 1.0.
I have to think about what the repercussions of cutting it are, but yeah, customers shouldn't... go on. Okay. So it sounds like you guys are okay with this in general; I'll just work on the wording of the release candidate part. Because this is effectively what we commit to, not necessarily what the code does. For a new SDK — like, for example, Rust is coming online — should we require that they also support 0.3? Scott, on this, I can say that I want to support .3 for the simple reason that we need to understand how to abstract over the various specification versions. So we will have support for .3 only for this reason. I agree with that. I think things like the JavaScript SDK don't do quite the same level of introspection of the event; I think you have to say, "I think this is a version-two event. Go." And it tries. Scott, are you suggesting that we actually special-case 0.3 in this list, or is it a nice-to-have that we shouldn't require? No, no. Oh, sorry. It is good hygiene to be able to write your code in a way that handles multiple versions. Right, but I was more interested in whether it's okay that the Rust SDK does not support 0.3. Ideally, the muscle that you develop by supporting n-minus-one is very valuable for the n-plus-one. It's more work. Yes, but you're speaking abstractly, and I agree with you. But concretely, do we want to say 0.3 is a special case? And because n-minus-one doesn't exist, we're going to say you should support 0.3 as long as n equals 1.0? Yeah — maybe I think you should remove that word "major" and just say "releases." Well, the problem with that is, what is a release, right? Is it 1.1 versus 1.0? I don't think so, right? That's why I phrased it that way. And I have no problem adding a bullet here that says, oh, by the way, for the case where n is 1.0, n-minus-one means 0.3. I'm okay with that, if that's what you guys want. What if we call the zero releases major? I don't know.
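The "support n plus n-minus-one" policy being debated here could be sketched as a simple dispatch on specversion (an illustrative sketch, not any SDK's real API; the helper names are made up):

```python
# Illustrative sketch: an SDK committing to the latest release (1.0) plus
# 0.3 as the special n-minus-one case dispatches on specversion before
# parsing, and rejects anything else up front.
SUPPORTED = {"1.0", "0.3"}

def parser_for(specversion):
    if specversion not in SUPPORTED:
        raise ValueError("unsupported specversion: " + specversion)
    # A real SDK would return version-specific parse logic here.
    return lambda raw: dict(raw, specversion=specversion)

event = parser_for("1.0")({"id": "1", "source": "/s", "type": "t"})
print(event["specversion"])  # 1.0
```

Dropping a version then means nothing more than shrinking the supported set, which is the "removed a bunch of special-casing" effect Scott describes for 0.1.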
I'm a little nervous about this, just because technically anything before 1.0 should be completely optional — toss-away, whatever you want to call it. It's not until you get to 1.0 that things actually matter. Yeah, possibly, okay. How about this: you keep these words, and you say it's recommended that you also support at least two versions. Okay, I think I can work with that. Today that implies that you have 0.3 support, and in the future it might not. What do people think of that? Well, see, what's interesting is you phrased it nicely abstractly — that's good. But realistically, we already say that here, right? And the only reason that you actually need that other sentence you just said is because of the 1.0 situation. So it may be better, from an understanding perspective, to not be coy about it and just say: look, 1.0 is a special case; it is highly recommended that you also support 0.3. Yeah, sure. As a footnote or something? Yeah. Yeah, that's fine. Okay, anybody else have a problem with that? Okay, hold on. Maybe like an asterisk on "major releases," and then: by the way, 1.0 doesn't have an n-minus-one release, so we're going to consider 0.3 a major. Yep. Yeah, sounds good. Okay, I'll work on that. I'll let you guys review it. All right, thank you. Let's go right here. Whoops. Okay, anything else you guys want to talk about? Have you seen the conformance tool's status? No — actually, I did add that here. Let me go. Can you stop sharing? Yeah. Oh, even better. Where is it? Stop sharing. Go for it. This one, maybe? Oh boy, here we go. Hold on. Ah, I think that's the right one. Okay. So I made a thing. It's called cloudevents. You can install it; there are instructions. The idea is that you could have this canned set of YAML that's encoded as the event and then send it out to some target, which is pretty interesting. But it also has a mode where you can say "cloudevents listen," which — oh, here, hold on.
So I can say "cloudevents listen," and then I can say "cloudevents send," and we'll get the help for that. So we need a couple of things here. Some nice person gave you an example. Connect — oh, because some idiot made a typo. So now I can send with cloudevents, and then over here, the cloudevents conformance tool is also listening. And it doesn't do any major data processing; it just dumps stuff out. And as you can see, it doesn't do it exactly right, and maybe there are some bugs in here, because it's very, very little code. The thing that's interesting is that this is a stream of YAML written to standard out; this is written to standard error. So you can pipe the output of "listen" to a file and then have a stream of events that it'll try to send for you in the next round, if you so choose. So you can do "cloudevents invoke," and invoke takes a target and a file. And the file can be a directory, and it can recurse that directory structure, in case you want to do that. So that's the current state of this little cloudevents tool. I think it's pretty helpful; I find myself using it a lot when I want to validate that what I'm doing is kosher. That looks really cool. I'm just trying to piece it together. Oh, sorry, I'll go back. Oh — so where does the conformance side of this come into play? How do you see this being part of the conformance work that we talked about? The conformance thought was that you produce this file that you listen on, and then you compare that to what you sent. And if those two things are the same — like by a diff — then you are sending what you are receiving, or you're receiving what you're sending. At least you think you are. Okay, that helps. Thank you. All right. So the thought was that this is a double-ended thing, and it would go, you know, invoke — and then some black box you're testing.
And then the black box is supposed to send another HTTP request to this "listen." And so "invoke" is taking in YAML — some set of YAML — and then "listen" produces some YAML. And at the end of the test, you can compare what the invoke YAML looked like and what the listen YAML got. And so I need to write a tool that helps you do that diff and see if it passed, because there's some extra garbage that sneaks in here for you. Actually, those are transport extensions, so you can ignore those if you need to, because they're not part of the cloud event. Okay, sounds cool. Any questions? Any questions or comments for Scott? Scott, can we create a full-fledged TCK from this? Because that's something that I would love to have for the Rust SDK, because we're just now starting. So, together. I mean, it's a command-line tool, so you can interact with it. Yeah, what I mean is: do you feel at some point we could provide scripts or ready-made output YAMLs? So... right, so let's see. I was attempting to do this in the conformance tool. I have a canned YAML directory, and so if you go into v1.0, you — I mean, so I kind of cheaped out, but you see the minimum version of what I think a valid cloud event would look like, at the minimum level. This is like testing that emojis work and all this stuff. And so you can actually — actually, we'll try it now: cloudevents invoke with the HTTP target and then -f this file. And so it goes and reads that file, interprets it back into an object — it's very simple code; there's not a lot of logic here — and then the listen tool dumps out what it got. And so, you know, there are a couple of things... Looks like it's not actually doing... this should be a... actually, no, it's fine. It's fine. The conformance tool is not conformant. No, it's fine — it tries its best. It probably needs some love; I wrote it in like a weekend. Yeah, I got it, I got it. But the idea was that you have this set of canonical YAML that you write, but it also helps you produce it.
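The diff step Scott describes — compare what "invoke" sent with what "listen" recorded, ignoring what the transport added — could be as simple as this hypothetical sketch (the extension names and helper are made up for illustration):

```python
# Hypothetical sketch of the conformance diff: strip transport-added
# attributes from both sides, then compare the remaining attribute maps.
# The extension names here are invented for the example.
TRANSPORT_ADDED = {"traceparent", "tracestate"}

def matches(sent: dict, received: dict) -> bool:
    strip = lambda event: {k: v for k, v in event.items()
                           if k not in TRANSPORT_ADDED}
    return strip(sent) == strip(received)

sent = {"specversion": "1.0", "id": "1", "source": "/s", "type": "t"}
received = dict(sent, traceparent="00-abc-def-01")  # added in transit
print(matches(sent, received))  # True: the extra attribute is ignored
```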
And then it helps you send it as a blob, but you can also poke at things that are running. And so, as I'm testing with... it's actually really useful for the changes to the Go SDK, because I know that this is an alternate implementation that mostly works. I'm wondering if the receiver is able to validate the events and maybe echo them back... Well, that assumes that the black box you're testing has the ability to respond, and I didn't want to make that assumption, because a real test would be: this "invoke" goes to HTTP — so here is HTTP — and then this is some receive adapter that goes to AMQP, that goes to some subscriber, that goes and sends to this "listen" thing. And then this results in YAML. All right, so the black box here — this black box would be the harness that you set up... or sorry, this is the black box that this tool helps you test. And then you can stub out whatever you need in between the testing pieces, to be able to compare the input that "invoke" used and what the "listen" command got. But anyway, I'm sure there's more to do here. And it's very simple, simple code. And I think I don't even support... I think I support like 1.0 and 0.3. But you can check it out: the cloudevents/conformance repo is where the tool is. Yep, I was going to ask you about the status of that. Okay, all right, cool. Thank you, Scott. Any other topics for today's agenda? Okay, so I'm assuming we're going to go back and meet in two weeks, right? And not next week. That's right. Not hearing any objection. If something does pop up and you guys really want to have a phone call next week, let me know. But otherwise, I'll assume we'll meet again in two weeks. All right. Thank you guys. We'll talk again next time. Bye, everybody. Bye. Thank you. Bye-bye.