So let's talk about that, Doug, now. It doesn't matter. Sorry. I'll debate you to the grave about who owns the event type name. Okay. Well, I'd actually like to get Mark's opinion. Wait, wait. Cage match. Cage match. So you guys had things specifically to talk about with the SDKs. I wouldn't mind talking about this and getting Mark's opinion on it. Okay. One thing with the SDK is that I've added tracing and analytics to the Go SDK. So out of the box, well, you opt in, you can expose those metrics using an exporter, using OpenCensus. And so now, if you want to expose it to, say, Prometheus, you can get metrics on how many times you get an error response and things like that from all your endpoints. So that's pretty cool. And I'll have a cool Knative demo of that. You can also expose it to the other analytics tracking things that, like, Google Cloud uses. So we can do tracing all the way through your application, and the trace boundary doesn't stop at your code. It will go through the SDK and come back out. That's cool. So it's going to be sick. Is that one ready for review? Yeah, I think so. All right. You know, I need to get back up and running because all my setup is broken, but I'll get there. Okay. So, to catch Mark up on this debate: Doug believes... I like that. Go ahead, Scott. Doug sits on a tower of misconception. There you go. Much better. Okay. So let's say we have a project that is going to expose CloudEvents from an event producer that does not produce CloudEvents. The reverse-DNS name of the entity that emitted that event should be a GitHub... Well, wait, wait, no, you didn't phrase it quite right.
What we're talking about here is not just generically the reverse-DNS name. What we're talking about is what goes in the event type field in the CloudEvent, and that the field name matters. What do you mean? The field name is set by the CloudEvents spec. What I mean is that the fact that we're sticking this value into the type field of the CloudEvent is very important to this discussion. That's my point. Yeah. So there's an adapter in Knative that takes in a generic event and produces a CloudEvent, right, and then sends that new CloudEvent on into the cluster. Knative took the opinion that we created that event, because it's not from the originator, and therefore it is our event, and it's reverse DNS with something like dev.knative.eventing.github right now. My opinion is that the entity that is, in essence, wrapping this event from GitHub to turn it into a CloudEvent is more like a proxy. To me it's a piece of middleware. It's not really pertinent to the flow of this message, other than that it happens to be the one that represents it. And the receiver of this event does not care that the entity that wrapped it in a CloudEvent was a Knative thing. What it wants to know is that this event type is a GitHub event type, and so the fact that Knative appears in the field called type is actually a mistake. Well, but we get to choose how much or how little of the GitHub event we pass along.
I don't necessarily disagree with that statement. However, like I said, from an end-user perspective, I don't care that Knative was the piece of middleware that actually created the CloudEvent wrapper. What I care about is that it is a GitHub event type, and I would ideally like to just see the raw event type from GitHub. But if we do want to adhere to the CloudEvents spec and do a reverse-DNS name on it, and we're going to make one up because GitHub doesn't provide us one, I would rather make it up like com.github.something and not a Knative name. What do you do in the case where two people write a GitHub source? They both do this middleware part, and they both make different opinions about how the data internally gets serialized. Say that one more time. Like, two authors make different middleware pieces for an entity that doesn't actually make CloudEvents. So now they both get to say this was actually from com.github and it's a pull request, but each has made some decisions about what that means, because they weren't CloudEvents and now they are. And now you have two events in the cluster that are both from the source of that particular pull request, and they're both of type com.github.pull_request.
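The two naming positions in this debate can be sketched side by side. This is a minimal stand-in envelope for illustration, not the actual Knative or CloudEvents SDK types, and the concrete type and source values are hypothetical examples of each convention:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Event is a stripped-down stand-in for a CloudEvent envelope, not the
// actual SDK type; only the fields relevant to the naming debate.
type Event struct {
	SpecVersion string `json:"specversion"`
	Type        string `json:"type"`
	Source      string `json:"source"`
	ID          string `json:"id"`
}

// toJSON renders the envelope the way a consumer would see it.
func toJSON(e Event) string {
	b, _ := json.Marshal(e)
	return string(b)
}

func main() {
	// Option A: the adapter claims the envelope, so the type lives in
	// the adapter's namespace (the Knative position on the call).
	adapterOwned := Event{
		SpecVersion: "0.2",
		Type:        "dev.knative.eventing.github", // value cited on the call
		Source:      "https://github.com/someorg/somerepo/pull/42", // hypothetical PR URL
		ID:          "a234-1234",
	}

	// Option B: the adapter acts as a proxy, so the type reflects the
	// original producer's namespace (the com.github.pull_request style).
	proxied := adapterOwned
	proxied.Type = "com.github.pull_request"

	fmt.Println(toJSON(adapterOwned))
	fmt.Println(toJSON(proxied))
}
```

Note that under option B, two independently written adapters could emit the exact same type and source, which is the collision Doug raises next: nothing in the envelope then says which middleware made the serialization decisions.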
Yeah, those work. As a receiver, I will adjust accordingly, because I control what's in my system, basically, and if I need to differentiate between those two, I can look at things like the source. No, the source is the entity that emitted that event; that would be the pull request URI, right? So there's no differentiating factor you can look at in the envelope that tells you that that middleware made decisions for you, other than a custom event type. So, I want to make sure I understand. You're saying that from the exact same GitHub issue, this event goes through two different pieces of middleware? Yeah. Okay, I think that's kind of a weird setup, but if so, then that's the environment the user has chosen to install, and they're going to have to figure out some way to deal with it. Ultimately, like I said, I don't think the end user should know that Knative is in the picture. Well, and maybe you just... no, no, this isn't particularly about Knative. Can I interject something? Which is, I don't think we want to be in the business of usurping other people's namespaces. Exactly. Well, that's, to me, a secondary issue. I was just making something up. Like I said, I would prefer we just use the event type directly from GitHub, even if it wasn't reverse DNS. No, no, I think it's a first-order issue, which is that we should not be naming things com.github. Yes. That's just being a good internet citizen: you don't use someone else's namespace, even though the event is coming from that service. Now, you can say, well, what should it be named? And maybe the person that is creating the event driver would say it's com.github.myorganization.myrepo that contains the event driver. That might be something more palatable. That's the source URI. You can point all the way back to the original actual PR, if that's what's causing the event.
I don't want to get too hung up on the actual value I would choose there. All right, all right, that's the entire point of the spec... No, my point here is, if you have a piece of middleware that is wrapping an event, I don't believe it's necessarily appropriate for that piece of middleware to show up in the event type itself. I don't think it can't ever happen, because I don't want to say never. But in cases like this... It's not middleware. That's what I've been trying to tell you. The Knative source is an application that's doing something, that has logic. It makes choices. Okay, okay. What we have stated about middleware is that it should not touch the thing if it passes it on as-is. But middleware can be both a consumer and a producer. So if it is consuming an event and then producing an event, then when it's producing, it should put different event information into it, because it is a new event. See, I would disagree that in this particular case Knative is the event producer, as we've sort of defined it. I think it's more middleware, because it is not the one that is producing the event. It happens to be the one serializing it, but it is not producing it. It's literally the one producing the event. It is an application hosted by your cluster that is going to take in a webhook request from GitHub, in GitHub's own format, make some choices about how it takes that piece, and then forward on more metadata about that webhook request. Right, and I think that's fundamentally dependent on the version of the library that we're using to consume the GitHub webhook, because they're super complicated and really shady. So I understand what you're saying.
And I think this actually may be the real crux of the difference of opinion. You're viewing Knative as, like you said, the event producer. The fact that the original data came from someone else is actually irrelevant to you; it's almost as if Knative is producing it. Let me just finish. Because to me, I would like to get to the situation where, if I write a piece of code and I'm receiving an event from GitHub directly, and I turn around and drop that code into Knative and get it from the Knative infrastructure, then if both Knative and GitHub can produce CloudEvents, I'd want the same code to work in both places as best as possible. Yeah, that'll work. Well, it won't work, because GitHub doesn't produce CloudEvents. Well, but this goes back to a question I asked you on Slack. If GitHub were to produce a CloudEvent, I asked whether you would have the Knative infrastructure still change the type to remove the com.github stuff and replace it with Knative, and you said yes. No, I didn't say that. I said that would be a different source, what we're calling an ingress source, and it would just pass it through. Well, then no, that would be a choice. Yes. But if we used a GitHub event source, you said you would still want to change the type. If it's our code making a translation, yes. But if the incoming request is a CloudEvent, then it doesn't need to change, and all the keys stay the same. Okay. That's because, if we produce the envelope, we own that envelope. And because of API incompatibilities in the future, because we are locked into a certain version of the GitHub API for the web requests, we need to own the namespace for the application that's doing the receiving and the resending. Yeah, see, I think that right there is basically the difference of opinion.
I view this as middleware, and you view it as Knative now actually owning the event. And I think that's just the difference of opinion. We own the code that did that translation. Oh, I'm not disputing that Knative owns the code that did the wrapping. But I was viewing it more as middleware, where it's acting more as a proxy that added extra things to help things along, but it doesn't add things to make it look like it's the one that actually produced the event. I think that's a fundamental difference of opinion. But the source of the event is GitHub. So we didn't produce the event. We created the wrapper; therefore the wrapper type is labeled Knative. Yeah, but go back to what I said earlier: as an end user of this event, I don't care that Knative is in the picture. I want to know what the GitHub data is, not the Knative data. But I think you're wrong, because I think you do need to know, because there could be multiple versions of GitHub wrappers. There could be many different opinions of how that GitHub data gets wrapped and sent, and in that case you would like to know exactly which application is doing the proxy work for you. Well, let me put it this way, then. If the Knative code is doing more than just, in essence, being a proxy or a simple wrapper, and it's actually injecting logic such that, as a consumer, I fundamentally need to understand what Knative is doing... Maybe you don't understand what Knative is. Well, let's see. But my point here is, when I'm subscribing to GitHub events, I want GitHub events; I don't want Knative events. Then talk to GitHub and make them produce CloudEvents. Well, but see, that's the point, right? They're not producing it now. But someone produced this: hey, there's a wonderful little utility called the GitHub event source in Knative that will do the subscription for me. Cool, I'm going to use that.
Well, now I need to understand Knative's view of all the GitHub events, and I may lose something in the translation. That is exactly my point: you need to know about that translation. And that's something I, as an end user, should not have to think about. Then maybe I shouldn't be using the Knative GitHub event source. That's your choice. You'd have to consume the webhook yourself from GitHub. Or create a different one that doesn't pretend like it owns the data. Yeah, you would not allow that one in the ecosystem because it's a liar. It's a liar. That's good. Anyway, Mark, there's the difference of opinion. No, I like it. I think what would be worthwhile is to document some of these scenarios in a GitHub issue and see if we can get people to comment on them. I totally agree. I think this type of thing is a wonderful topic for the primer, especially for someone who's creating, quote, event producers or middleware or whatever we even want to call it, because they need to understand what we think the expectation should be of them. Right. Yep. But we don't agree. No, put the thing in the primer to tell people what to do. Well, if members can't agree... All right, first one to have the PR for best practices wins. Then I'm going to call my buddy and have him steal Doug Davis's laptop. Oh, my God. I don't know that the person stealing your laptop is your buddy. Well, you don't know that; maybe it's an elaborate ring. Oh, there you go. This is the thing that always kind of worried me about these open-space environments, especially in small startups: everybody seems to leave stuff lying around like it's not a big deal. And I always wondered how often things get stolen, and I never heard of it happening till now. So that's interesting. Yeah, it's very, very rare. People will lost-and-found a twenty-dollar bill. That's honesty. I like that.
Yeah, it's just that occasionally, every once in a while, someone will smash through a window or tailgate through the door. And usually they get caught. Sometimes they just steal a sandwich, and sometimes they apparently snatch laptops. So was your laptop taken during the night or during the day? During the day, like when everyone was coming in in the morning; somebody grabbed it. Man, that takes a lot of guts. Yep. Right. Gotta be confident. I would freak out. Was everything on your laptop backed up? Yeah, everything was; I didn't lose anything. Okay, that's good. Because I think I would probably do something if someone took my laptop. I did lose a bunch of cool stickers. Well, that's all that matters. I'm never going to get that OSB sticker back. I do have some extra OSB stickers if you want, if I remember to bring them. Send him a CloudEvents one as well. Yeah. So Scott, was it a real laptop or was it a Chromebook? No, I have a MacBook Pro. Okay. Interesting. Man, that sucks, though. Yeah. Okay, so anyway, back to work-related stuff. Anything else you guys want to talk about? Well, we are pretty keen on solving this difference of opinion about what goes in the header value for HTTP binary encoding. You mean the quote thing, or the issue we were just talking about? Yeah, the quote thing. Okay. I haven't seen anything; I've been on phone calls the last couple of hours. Did Clemens submit something? He pinged me. I didn't see any update, but I haven't checked since last night. Okay. No, he did something this morning; I just don't know which issues he touched. Hold on, let's see. He pinged me. Okay. Yeah. He said he just commented. I'm sorry, he modified the data content encoding PR, but not the other ones yet. So we don't have an update on that one.
Although we could ask on the call what his current thinking is. Yeah. Because it changes how that particular encoding works in the SDK, if indeed we have to do something silly like quote stuff. Yeah, I'm really hoping we can find a way to get rid of the quotes. That just seems so bizarre to me. Or we only quote things that are unknown to the spec; that's a workaround that seems icky, but okay. I don't know, we'll see. Okay. So the other thing I guess we should talk about at some point: we talked about doing a sort of SDK interop thing, but I'm having a really hard time getting responses from the other SDK authors. Yeah. What are you going to do about it? I know, I mean, I need to do a more direct, formal reach-out. You guys are removing people's permissions if they don't respond? I'm not sure whether or how I should take that. Okay, so people are starting to join, so remember to switch topics. Okay. Taking a look at the repos: sdk-javascript was updated eight days ago, 16 days ago; Python a while back. Yes. Anyway, I mean, there's been fairly recent activity. Yeah. But the problem is we don't know what that activity means unless you actually look at what's in there. Right. For example, I'd be surprised if many of them support multiple versions of CloudEvents at the same time, the way you guys have done in the Go SDK. And if we want to do interop across versions, then we need to get people to sign up for that. So maybe if we can get some of the authors online for the next meeting of this... Yeah, we can talk through the differences. Yeah, I'll try to send a nagging note for people to show up next time. Is it, I can't remember, is it next week or in two weeks? Two weeks. Okay. I'll try to get everybody to join in two weeks. Also, I'll be on for the first part of the CloudEvents call, but I need to bail after the first session.
Okay. Sounds good. You've heard my voice? I have heard your voice, and I've got you down there already. Oh, I should start sharing my screen; I forgot. Bonjour. Hello. Did you really mean to claim, Doug, that you don't have time to go all over the world and knock on everybody's doors and ask them the questions about their SDK? I'd like to; I could use some more frequent flyer miles. This year, so far, I've only done one trip, so I'm not planning on keeping my status, unfortunately. I mean, there are usually bigger problems in life than keeping your frequent flyer status, though. Yeah. All right. I heard Clemens. Sandeep, are you there? Sandeep? Hi, guys. Hello. All right, I've got you, thank you. Eric, are you there? I'm here. Good morning. Good morning. Mathias? I'm here. Hello. Okay, I think that's everybody so far, so now I get to get organized. And no, Scott, I have not eaten lunch yet, so you've been warned. Doug Davis: punchy. Yes, very punchy. But I'd be a lot punchier if I were in your situation, having my laptop stolen. That's just going to disturb me all day now. Well, the silver lining is that the new one is faster. There you go. Way to look on the bright side. Mr. Berger? Hey, I'm here. Hello. Hi, Adam. Good morning. You guys are way early today. I started recording as soon as I showed up. Somebody just joined. So Clemens, you mentioned through Slack that you did not get a chance to do anything except update the data content type PR. Have you given any thought, though, to the issue that Adam opened about the quotes and the headers? I was wondering if you wanted to talk about that one at all. Well, except for the discussion that we already had, I have not given it further thought, because I just didn't have any time this week. I've been here, and my days then end up being randomized and full. Okay, that's fine. I was just wondering whether I should put it on the agenda or not.
So, yeah, I mean, "here" is weird to say on a call, but yeah, it's Seattle. Yeah, I figured that. All right, David, are you there? Sorry, good morning, Doug. How are you? Good morning. Good. And Ginger. Ginger, are you there? I will circle back around to Ginger. I am here, sorry. There you are. It's too hard to find the unmute button this morning. I've been there; I hear you. Hey, was that Colin? Yep. Oh, excellent, got it, thank you. And Christoph, are you there? Yeah. I probably have to leave early today. Oh, then we'll jump on your issues first. Jim, are you there? Yes. Hello. Hello. Mr. Doug, are you there? Yes, I am. You're the next one. Yep, I've got you, thank you. Roberto, are you there? Yes, good morning. Good morning. William, are you there? Yep, I'm here. Hello. And Tam, are you there? Tam? Hi, yes, I'm here. Oh, excellent, thank you. Good morning. Good morning. Hey, Kathy. Kathy, are you there? Yeah, I'm here. Hello. Hey, it's been a while. I think that's a lot of members who just joined. That's right, I joined. Thank you. Excellent, thank you. And Jim is also connecting; we have some network issues. Okay, not a problem, thank you. Let's wait a minute or so, then we'll get started. Who is... there's a really funky ID popping up in Zoom, starts with H-I-A-U-F. Who is that? They want to stay silent. Hey, Jim. Good morning. Good morning. Someone named M. Chuberka just joined. Are you there? Okay, let's come back around later. All right, let's go ahead and get started, three after the hour. So in terms of action items, I think the only one that's really kind of nagging, let me put it that way, is this one for Clemens, which we already kind of poked him on. So Clemens, when you get a chance, it would be good; there are lots of anxious people out there for that one. Yeah, I know. Yep, thank you. It's always me against the whole world. I know, I know. You're such a troublemaker. Let's see, SDK subgroup. We actually had a meeting right before this one.
We actually didn't talk about very much related to SDKs. Other than... actually, Scott, do you want to talk about some of the cool features you just added, just to put pressure on everybody else? Yeah, so I've been working on adding OpenCensus to the Golang SDK. So it doesn't expose metrics by itself, but it collects them, and then it gives you a way to set up an exporter in whatever environment you happen to be running. So I have been adding a little example that exposes metrics using Prometheus, and then it shows traces using a log-to-console exporter. But you choose the right exporters, and then you can start collecting traces and metrics using the SDK. And because of how I use context, you can actually trace all the way through your application and the SDK, and it should show up in the collector as one trace. Wonderful. Jim, did you have your hand up? Yeah. Just a quick question on that. We're slightly off topic from CloudEvents, I guess, but we're going through an internal discussion around OpenCensus versus OpenTracing. OpenTracing is backed by CNCF, isn't it? So I'm curious... and I'm not criticizing, I'm just trying to understand whether, as a CNCF project, we're meant to be more aligned with other CNCF projects. As far as I understand it, OpenCensus allows you to export to OpenTracing. OK. So OpenCensus is more of a generic collect-the-data layer, and then you give it exporters that export. So Prometheus, and you should look at the list; there are like eight or nine that come by default, and then you have the ability to add your own. Yeah, sure. Maybe I'll follow up with you offline just so we're not hogging this conversation. Yeah, sounds great. But just to focus on your question there, Jim, and this is strictly my opinion: my interpretation of the CNCF projects was, yeah, it'd be nice if other CNCF projects used the existing CNCF projects, but it's definitely not a hard requirement or anything like that.
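The collect-then-export split Scott describes can be sketched with a toy exporter interface. To be clear, this is an illustration of the pattern only; the real OpenCensus types live under go.opencensus.io, and every name below is made up:

```go
package main

import "fmt"

// Span is a toy stand-in for a trace span. All names here are
// hypothetical, not the OpenCensus API.
type Span struct {
	Name     string
	Children []*Span
}

// Exporter is the pluggable piece: the SDK collects spans and metrics,
// and each registered exporter decides where they go (Prometheus,
// a cloud tracing backend, the console, ...).
type Exporter interface {
	Export(s *Span)
}

// ConsoleExporter mimics the "log to console" exporter from the demo.
type ConsoleExporter struct{}

func (ConsoleExporter) Export(s *Span) {
	fmt.Println("span:", s.Name, "children:", len(s.Children))
}

var exporters []Exporter

// RegisterExporter is the opt-in step: nothing is exposed until the
// application wires up at least one exporter.
func RegisterExporter(e Exporter) { exporters = append(exporters, e) }

// finish hands a completed span to every registered exporter.
func finish(s *Span) {
	for _, e := range exporters {
		e.Export(s)
	}
}

func main() {
	RegisterExporter(ConsoleExporter{})

	// One logical trace spanning user code and the SDK: the SDK's span
	// is a child of the application's span, so a collector shows them
	// as a single trace instead of stopping at the code boundary.
	app := &Span{Name: "app.handler"}
	sdk := &Span{Name: "sdk.send"}
	app.Children = append(app.Children, sdk)
	finish(app)
}
```

The parent/child link is what "trace all the way through your application and the SDK" buys you: in the real SDK this link is carried through the `context.Context` passed into each call.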
Each project needs to use whatever it thinks is best to get its job done; that's sort of my approach. OK, thanks. Yeah. All right, moving forward then: demo work. Doug and Scott, do you guys want to talk about where we are? I guess maybe more Doug, since you've kind of been taking the lead on the airport scenario. You want to talk about where we are on that one? Doug, are you able to come off mute? There we go. Sorry, can you hear me? Yes, we've got you now. Go ahead. Too many mute buttons. Yeah. Well, we had a lengthy conversation, I guess, last Monday. Yeah, four days ago, Monday. The original airport demo proposal involved issuing tasks to attendees within a few different roles, the role of passenger, a driver, you know, the things that were involved in a supply-chain scenario. And then when Clemens participated in that call, we started to steer the demo more towards notifications rather than tasks, so that the processes would be handled by microservices, and the orchestration of all those events related to ordering and order fulfillment would be more automatic, handled by robots. And so the attendees would be connecting into the various cloud nodes that basically represented the systems involved in that process, and they would be getting notifications based upon the traditional pub/sub model. So the demo deck has been revised to reflect notifications instead of tasks, and Clemens wanted to review that, and we're waiting, I think, for that review. All right. Clemens or Scott, anybody want to add something? He's been on those calls.
Yeah, the discussion we had was that I've been raising that objection a few times in a few demo scenarios, where we have this conflict with what messaging really does. There you would use a queuing system, where you effectively assign tasks and then parties take the tasks off a queue, but there can only be one party that can reasonably execute a given task, and that party then also goes and provides feedback. That's the kind of correlated communication path that we scoped out of CloudEvents, but those always exist in these kinds of scenarios. So I pointed that out, but in that scenario there are plenty of paths where what we've done so far in CloudEvents actually fits the bill well. So that was the main feedback. And so we're kind of restructuring so that we can show appropriate uses of CloudEvents, reporting out facts and then reacting to them and building visibility based on that, and then have the core flow of the workflow really be a more traditional workflow, you know, a messaging substrate under the covers. All right, any questions or comments? Okay, so just to let you guys know, I just sent out a reschedule notice moving this Monday's meeting to the next Monday. I forgot to set it up so we could talk again this Monday. I'm assuming, Clemens, you'll have a chance to look it over and do whatever edits you were thinking about making by then. Yeah, absolutely. Today's my travel day, which means I'm going to be able to get something done today, and tomorrow too. So yeah, we'll be able to do something. Okay, cool. Thank you. All right, Scott, was there anything you wanted to add to that? No, that was pretty much it. Okay, cool. All right, in that case, moving forward: KubeCon EU. I did get confirmation, I think it was either yesterday or the day before, that we got our two 35-minute sessions for CloudEvents, intro and deep dive, and then one large 85-minute session for the serverless working group itself.
I still don't know quite what's going on with the serverless practitioners summit. I need to go back and ping Chris Aniszczyk on that. Is it possible that our serverless session gets moved into that, or stays separate? Don't know. But as far as I know, they are still planning on having this co-located summit; we just don't know what's going on with it yet. So I'll keep you guys abreast of that if I find out anything. Of course, if you guys hear anything, please speak up. I did hear that there's going to be an announcement on Monday. Oh, cool. Okay, excellent. And as far as I know, it's still being planned, and it's being planned for day zero, that Monday. Yep. Okay, excellent. Thank you, Scott. KubeCon China: nothing new there. And I did put in a request for basically the same three sessions that I did for EU. I'll let you know when that happens. All right. So let's jump into PRs. I don't see Rachel on the call; however, I know she did update her PR based upon the vote that we had two or three weeks ago, can't remember how long ago it was. So let's take a look at what she did here. Basically, based upon the vote, she added a section in the primer that talks about proprietary protocols and what we're doing with them in terms of additional specs. And then she created a placeholder document to point to those specifications that live outside of our repo. But the key thing is probably this section right here, the text itself. I'll give you guys a second to read that, since I'm sure some people haven't had a chance to read it yet. Looks good to me. Yeah, that's a good question. Yeah, looks good to me as well. Yeah, obviously it's in the primer, so it's not normative anyway. But I think it pretty much lays out what we agreed to, which is we hold no responsibility, but we're going to point to them for easy reference and easy finding. Anybody have any questions or concerns with this direction?
Does anybody feel like they need more time to think about it? If not, I'll call for a vote. There's a typo: it says "the proprietary specs that should follow the same format at the other specs." What line number? 319. It should say "the format as the other specs." Okay, I will make a note of that. Hold on. Line 319. Okay, thank you. I can make that change or talk to Rachel about it. Okay, anything else? And some of the words are the wrong way around as well. Wait a minute, say that again. "The proprietary specs should follow the same format as the..." Oh, "the other." Got it. Okay. I'll get that fixed. Okay. Anything else on this one? Anybody want more time to review it? Okay. Any objection to adopting it with the wording fixed, with just what was brought up? With the typo fixed, yes. Okay. Thank you, guys. Christoph, which one of these is ready for us to talk about, if either of them? Or which one do you want to talk about? Basically, what we settled on was that we want to have a minimum event size that is supported by all consumers of CloudEvents. And I've prepared two options. The first one has been open a bit longer. It basically just says the CloudEvent in JSON format must not be larger than 64 kilobytes, and then everybody has to accept a CloudEvent that in JSON is 64 kilobytes. The problem with that is, if you have a different format, then you don't really know. If I'm sending it over MQTT or AMQP or Protobuf or whatever, I don't really know what the size of the JSON would be, so I would have to take my CloudEvent out of the one format and then encode it in JSON to know if I'm compliant or not. Personally, I don't think it's always a big issue if you know your event is small enough: if your thing is like 10 or 20 kilobytes, you know in JSON you won't go beyond 64 kilobytes. But I was criticized for that.
So I made the second PR, which is independent of JSON; it tries to describe the size of the event independently of any encoding. So it gets a little bit more complicated, but on the other hand, I would say it's more easily applicable and maybe easier to measure. So as a producer, you can more easily check that you're compliant with it. Basically, there are a lot of rules for the context attributes, but I think most people will automatically follow them, so you can sort of ignore them, and then the only thing you have to look at is the data attribute. So I think in practice it may be easier for some. So these are the two options that I laid out. Clemens and others made the comment that they think it's also okay to just say: here's 64 kilobytes, and whatever format you use, just make sure to stay under 64 kilobytes, and that will leave some gray zone, but we're fine with it. My response to this is that I'm not fine with that. I come from a commerce background, so my customers would use those events to maybe charge a credit card or ship a package or whatever, and I cannot go and tell them there's a gray zone, and if your event happens to be in that gray zone, I won't deliver it to you. That's just something that doesn't fly. So for me, I want to have a real guarantee, where I can say: I'm sending out this event, and I know that every compliant consumer will forward this event, whatever you put in between. So this is what these two pull requests try to achieve, in two different ways. Okay, thank you for the summary. All right, so let's open the floor for discussion. Who wants to voice an opinion? Oh, come on. The observation that I made was that with current messaging infrastructure, even with multi-protocol brokers: if you look at a big example, say ActiveMQ, and I can also cite Service Bus or even Event Hubs...
What you'll find is that, those two being ours, we have a limit, and that limit is total message size, and it applies to whichever protocol you come in with. For Service Bus it's one megabyte, and whether you come in through HTTP or whether you come in through AMQP, that's the frame size we support. And whatever stuff you put in there, including all the properties and all the payload, needs to fit within that. And if you want to combine the routes with an ActiveMQ broker and you put a pump in the middle, well, then you make sure that the messages can't exceed one megabyte. And ultimately I think it's about the party who configures the overall system, who needs to make sure that all fits together. But I'm not sure we can really go — like, I don't know how this section really helps in staying under or over a limit, because ultimately when you make this normative, you're forcing every intermediary to go and do all the byte counting, and that's costing perf. Because in our infrastructure, we're forwarding cloud events and we're parsing out the stuff that we need for routing, but we're not policing individual encodings, and we probably wouldn't, because that's just costing us too much work at scale. And so I don't see us enforcing those rules. Can I respond to it? Yep, please go ahead. So it is not a rule that you have to check these. The only thing — if you look at the last sentence, or the second-to-last sentence: cloud event consumers MAY reject events that do not follow these rules. But no one forces you to reject them, right? So you can still go and say, my limit is one megabyte for whatever you send in; I just accept one megabyte, and that's good for me, and you're compliant with that. Well, but see, I'm building generic infrastructure, and because I'm building generic infrastructure, I need to go and provide a switch that then enforces the rules, to be completely protocol compliant.
And so I don't know how this feature — because if I see this as a feature request on my infrastructure — how it actually helps, because ultimately it's about the event fitting into the frame size, because that's what the protocol gives me, right? There's an HTTP gateway, and that HTTP gateway has, for buffered events, a certain size limit, and there's an AMQP frame size that I can't go over, and that's ultimately what governs the threshold for what messages I can support. And all of the math here is not going to help me truly enforce this. What matters is that when you show up as a publisher and I constrain the frame size on the AMQP frames to 64K, you need to stay under 64K. So, Jim, you have your hand up? Yeah, so I guess I was sort of coming to the same conclusion that Christoph mentioned. I sort of see this more as a compliance proof than a runtime constraint. So as an infrastructure provider, you could run some compliance test that ensured that events constructed to this spec actually flowed. And I think that's all we were trying to get to from this spec. It was trying to say, as a source of an event — which is then going to travel across potentially multiple people's infrastructures, over a number of intermediaries, to get to some endpoint — to Christoph's point, it should be able to leave one place and arrive at the other in the same state and not be rejected along the way. So it is more of an intermediary compliance statement, I think. All right. Anybody else like to speak up? Okay. I'm not sure how to interpret the silence, because it could mean no one cares at all except for the people that spoke up, or it could mean you guys are perfectly okay with — yes, silence as acceptance, yeah. That is one way to view it. I was getting to that. And so I'm trying to figure out how to move forward here, because ultimately, if we don't move from our current position, I think the only choice is just, okay, put up a vote.
And we could definitely do that if that's the next step. I was hoping to get a little more back and forth from people to see how they feel about this, though. But no one's raising their hand — and Jim, your hand is still up, by the way. Christoph, let me ask you this question. If we were to go forward with a vote, would you want to put both PRs into the vote, or would you like to choose just one? Well, I don't have a strong preference either way, to be honest. For me, what really matters is that I have this guarantee that some sort of size will be accepted by everyone, and I don't really care too much how that is made up. So I don't have a strong preference. Of course, Clemens commented that they prefer number two, so that's okay. Yeah. The reason I'm asking is because I know last time we had a vote, we had, I think, three or four different choices in front of us, and we used that ranked voting style — I can't remember the name. But I think it's always easier when people are faced with two choices, and unfortunately, right now, I see four choices in front of us. I see your two choices. A third choice is do nothing at all, leave the spec as is. And then the fourth choice is the one that you mentioned, that Clemens mentioned, which is to just say what the max size is on the wire, and you don't try to get fancy about it: whatever protocol you're using, you've got to stay under 64K or whatever it is. So I think I heard four different choices out there. And I'd like to see if we can narrow down the choices if possible, so we get to more of a Boolean choice. So let me ask this question. Clemens, if Christoph's memory is correct, and you had mentioned this option of just saying 64K regardless of protocol or binding, is that a concrete proposal you'd like to put forward, or was that just a thought you had at one point in time? So yeah, I would want to propose that.
Basically, as I commented — like the last comment I made on this PR, not much wordier than this — I don't know exactly what I wrote, but for me, that's the actual alternative. I would like to have a limit, and I would like that limit to really be as simple as frame size on the wire. Okay. So let me ask — Not a limit, it's a minimum supported size. That's the real difference. If you're proposing a limit, that is something else. Well, that's what I meant. Minimum supported size, that's what I meant. Okay. Okay. So let me ask this question. Is there anybody who thinks that we should do nothing at all? Okay. Since no one spoke up, I'm going to take that as everybody on the call thinks we need to say at least something in this space, which means we can eliminate the do-nothing choice. Is that a fair conclusion based upon this silence? Speak now if you don't feel that way. Okay. So we're down to three. That's good. So, Christoph, between your two — as you pointed out, some people in the chat are saying they prefer number two as well. Would you feel comfortable with just choosing between your number two and the Clemens proposal that I'm hoping he will write up at some point, or would you want to keep your number one in there as an alternative? No, we can take my number two. Okay. So I think what I'm hearing — and object if anybody disagrees — the thought process I was walking through is: once Clemens writes up his proposal, people can then choose between this number two that's on the screen right here from Christoph, and Clemens's proposal, which is just a 64K minimum size, regardless of protocol. Is that what we're going towards? I believe that's true. I think so, Doug. Okay. Just want to make sure. Okay. Because I don't want people to feel like I'm forcing a decision down their throats by excluding something.
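The distinction being drawn here — a minimum supported size, not a limit — can be captured in a one-line rule. This is a hypothetical reading of the proposal, with made-up names, not spec text:

```python
MIN_SUPPORTED_BYTES = 64 * 1024  # 64 KiB, regardless of protocol or encoding

def consumer_must_accept(wire_size_bytes: int) -> bool:
    """A compliant consumer must accept any event whose total on-wire size
    (attributes and payload together) is at most 64 KiB. Larger events MAY
    still be accepted, but acceptance is no longer guaranteed by the spec."""
    return wire_size_bytes <= MIN_SUPPORTED_BYTES

assert consumer_must_accept(10 * 1024)        # small event: delivery guaranteed
assert not consumer_must_accept(1024 * 1024)  # 1 MiB event: not guaranteed
```

Note the asymmetry: the rule constrains what a consumer must accept, not what a producer may send — which is exactly why it is a floor, not a ceiling.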
So if that's true, then Clemens, do you think you'll be able to put something together for people to look at? Yeah, that is so easy that I will probably get it done today or tomorrow. Okay. In that case, I'm trying to think about our process. I think formal votes technically have to run for a week. I don't think we've ever really started a vote before that didn't start and end during a phone call. So I'm a little bit nervous about starting one, say, today or tomorrow. Unless you guys feel like that's okay, I'd almost rather wait until next week. So Clemens, you can do a little talking to your proposal and then start the vote next week. Is that okay with people? Or do you guys feel like, no, we need to get this resolved immediately and we should start the vote sooner rather than later? Okay. I'm not hearing any objections. So Clemens — just go ahead. Just a second. Sounds good. Okay. Okay. So Clemens, you have a little bit more time; just make sure your PR is ready in time for next week. Yes, sir. Yeah. We can kick off the vote. I mean, to be honest, though, if you guys offline start LGTMing one of them completely and the other one gets no comments at all, then maybe we take a voice vote during the call, because no one's really speaking in favor of the other one. So think about doing some offline discussion if you can. But worst-case scenario, we'll start the vote next Thursday and then it'll run for one week. Sound fair? Yeah, will do. Okay. Cool. Thank you guys. I appreciate that. I'll make some notes in the meeting minutes about that. Christoph, is this one ready to be discussed? I can't remember. I thought maybe... No, I wanted to make it an extension, but I didn't have time yet. Okay. That's what I thought. Okay. All right.
This one — I don't think Alan is on the call, unfortunately, but I wanted to bring this one up because I can't remember for sure whether we actually discussed it or not, or if we did, where we landed. So basically, Alan is proposing that we uniquely identify events based upon source and ID put together. And I know there has been a little discussion about possibly pulling in type as the third part of that tuple, but I wanted to get a sense from the group as to which way you guys want to go with this, because I've heard some people say we can't do anything at all here and shouldn't even try. And then I've heard other people say, no, we need to at least do something here so people can do deduping or something in this space. But I want to get a general sense of which way you guys want to go as a group. So let me open the floor up for discussion. Again, I'll just say, I think we need some statement around this. Otherwise, everyone will go off and do their own thing. And when you say do their own thing, can you elaborate a little on what exactly is the problem they need to solve that this will address? So I'm assuming the drive here is to enable idempotent message or event processing. So if we don't make a statement as to what we think an idempotency key is, then every publisher-consumer pair will come up with their own way of doing it. Okay, thank you. Does everybody agree that that is the problem we're trying to solve? Yes. Okay. Does anybody think this is a problem that we should not be solving? What I'm trying to do is eliminate the do-nothing choice. Does anybody think we should do nothing? Okay, because I could have sworn I heard somebody say that in the past. So, okay. Not hearing an objection, we're going to do something. So then I think the question comes down to something like ID and source, or ID, source, and type.
Because I think at least one person in the chat said they think type should be included. I think, Scott, you're in that camp as well. Yes, I am. Okay. Would you like to speak to why you think type needs to be included? Well, okay. So let's take the example of an entity that takes in an event and then produces two new events of two different types, but it wants to reuse the ID — like an event split. I could see that as a useful apparatus. Okay. Anybody want to speak to that? Christoph, your hand's up. Yeah, I have a slightly different argument for why I think including type is a good idea. So currently in the spec — well, it's only a SHOULD — we say the type should contain a reverse-DNS name in front of the type name. I think this is the only place where we really ask for this sort of internet-unique URI thing. On the source, it's sort of optional: you can do it, you don't have to. And if you now make it only source and ID, then we're effectively doubling the URI, because if you want to do the deduplication properly, then you have to include your URI in both the event type and the source. And I think that's a bit unfortunate. I mean, you can still do it if you want to, but it's not necessarily a good thing, because right now the source to me feels like it's not necessarily a real URL that you can actually call. It could be just something you made up. In terms of sensors, it could be just a description of your sensor; this IoT device doesn't really have a URL that you can call. But we've explicitly excluded that — there should actually not be a callable thing in that entire event. Yeah, exactly. But then if you say it's the source and ID that make up the deduplication, you more or less force people to put the reverse-DNS part into the source, because that's the only way you can make sure that it's actually internet-unique, and then you end up with something that looks like you should be able to call it.
So the uniqueness requirement that we're talking about here is: how can we uniquely identify this particular event instance? And since the ID shouldn't always have to be a GUID but should be able to be a little more compact, you basically need a reference to where that ID space is coming from. And you qualify that ID space with a URI. But the URI just identifies the source; it doesn't have to be anything that's callable. It's just something you assure in some way to be unique. And there are rules for how you can create a URI — for instance, using the domain name assigned to you, either as a URI or as a URI with some scheme that scopes to you. But that's the same thing as you would do for, let's say, an XML namespace declaration or a namespace declaration elsewhere: it doesn't need to be a callable thing. It's just a URI that can reasonably be assumed to be unique to your source. Yeah, I'm not disputing that. I'm saying we do that once, and we do it for the type. That's where the spec currently says, please use your domain name reversed and put it there, so that your type is hopefully internet-unique, because no one will steal your domain. Yeah, but that's a different thing. The type is something that classifies — that effectively, implicitly refers to the schema of the event that you're about to process. So that is like the, whatever, raise-alarm type, right? And that raise-alarm type, by a device from device maker X, might be applicable to many kinds of devices and many types of devices. And it's just a uniform way of expressing alerts, and it implicitly refers to a schema — and probably even more explicitly, further down in the event, refers to a schema URL.
But that is distinct from the actual concrete instance of a device that raises the event, which is the source URI — which then has an ID space, effectively a unique identifier space that it emits because it has an internal counter that it stamps those events with. So the event ID is relative to the concrete instance of the source URI, the concrete instance of the device. But I see that as completely orthogonal to the event type. So my hand's up, and I'm asking these questions as Doug, not as moderator. So Scott, the example you gave of doing event splitting — it seems to me that what you're really doing there is overloading ID to be almost like a correlation ID at that point. Because if you're taking the event and splitting it, I would almost expect you to create two new IDs at that point. Let me take my example back. So, going to something Doug and I were talking about in the Knative project: we have these things called sources, and we made the choice to use the Knative name in the type. But let's say you have two applications that are bridging non-cloud-event webhooks into cloud event webhooks. And there are two versions of that thing that both get pull requests from GitHub; they act on the webhook. The webhook has the cloud event ID that you can pull out of the body, because that's a known thing. The source is the resource that GitHub is talking about, and the only way you can dedupe the two events that come in is if you understand the type of the application that did the bridging. So one would be Joe's adapter dot pull request, and the other would be Mike's adapter dot pull request. This is a very dangerous conversation, because it's going to get into whether type should be from the original event producer versus this adapter that you're talking about, and I didn't want to go there quite yet. But I guess the way I look at this is: in the spec, we already say ID needs to be unique within the scope of the producer.
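The two candidate identity keys under debate can be sketched side by side. The event dictionaries and adapter names below are hypothetical, echoing the bridging example just described:

```python
def dedupe_key(event: dict, include_type: bool = False) -> tuple:
    """Build a deduplication key for an event.

    With include_type=False this is the (source, id) pair from Alan's
    proposal; with include_type=True it is the (source, id, type) triple
    that would let two bridge adapters emit distinct events even when
    they reuse the same upstream ID.
    """
    key = (event["source"], event["id"])
    if include_type:
        key += (event["type"],)
    return key

# Two bridge adapters wrap the same GitHub webhook, reusing its ID.
a = {"source": "/repos/org/repo", "id": "42", "type": "joes.adapter.pullrequest"}
b = {"source": "/repos/org/repo", "id": "42", "type": "mikes.adapter.pullrequest"}

assert dedupe_key(a) == dedupe_key(b)              # collide on (source, id)
assert dedupe_key(a, True) != dedupe_key(b, True)  # distinct once type is included
```

The collision in the first assertion is precisely Scott's argument for the triple; the counter-argument in the discussion is that each adapter should instead identify itself as the source, restoring uniqueness of the pair.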
And to me, the spec's version of, quote, "scope of the producer" is basically source. That's the only thing we have that comes close to fitting that bill. So that's why, to me, ID and source pretty much need to be unique together, because I don't know what it means as a receiver if I get two events with the exact same source and ID but different types. I would not honestly know how to interpret that. I would almost look at one of them as a mistake. So that's why I get a little confused when the spec goes out of its way to say ID must be unique — well, then what does that mean anymore, right? If we can't guarantee it's unique by this, we have to count on something else. It just feels a little weird that we're saying ID is unique, but not really. So anyway, that's kind of how I'm looking at it. Tapini, your hand's up. Yeah, just to bring back the example that you pulled back: I would think there's a valid case for event splitting if you have, let's say, pull request version one and pull request version two — pull-request-created version one and version two. Would you then also need to have different IDs? Because an example of an ID there is, for instance, a database commit ID. And if you had two versions of an event for database-created, I would expect them to have the same ID then. I think the events have different IDs, and the source points to that unique resource, and the source would be the same in those two events. The source would be the same, the type would be different — but would the ID be different? If yes, then the database commit ID is a bad example of an event ID. Yeah, the ID identifies the event per se. It's for deduplication; it's for handling in the infrastructure. I don't think it necessarily is.
So I think if you have a database commit and you want to raise information about that commit, then that might belong in the source. This actually goes back to the discussion we had many months ago about the source and the subject, because that's actually the thing we're missing, I think. It is: I have subscribed to a source, but that source has sub-elements that it raises events about. And we don't have a good way to express the "about" in the structure that we have — collapsing everything into the source, which we did based on that discussion, is causing this. So the way we in Event Grid represent this — where you have the source you subscribe to, and then you have further information that is in the source URI — is we use the pound anchor and put all the further information on the right side of it, attached to the source URI. So your database transaction ID would actually be part of the source. So Evan is asking a good question; I think it's relevant here. If there is one occurrence, but two events are generated from that one occurrence — and the example he gave was something happened in a database, and that created a create event and a write event — do both of those cloud events have the same ID? I would argue no. I think they have the same cause, and that's expressed in what currently collapses into the source, but the events are distinct, because they're also distinct types. Yeah, that's where I tend to land as well. There are two separate events — one was a create, one a write — but they're related to the same, I would say, occurrence; they're two separate events. That's where I would have landed as well, but Scott, I'd be curious to know your take on that. I would agree. I think those are two different IDs.
I think this really comes down to the case of two different producers producing events based on some other non-cloud-event event that happened and that has an ID associated with it — like two entities watching the database write, both emitting events for that source. You can't dedupe that if you don't know the type. But in that case, whatever the thing is that's emitting that event is the source. Isn't that what we're really saying? And when you do that, the source and the ID become a unique pair, a distinct pair. I believe that's true. Well, that's assuming you don't — the database transaction has an ID, and the emitter could take that ID from the transaction and use it as the cloud event ID. And that either needs to be explicitly disallowed by the CloudEvents specification, or it needs to be a tuple that includes type. But that emitter is a proxy for the database, isn't it? So that transaction ID is only ever going to occur once. Yeah, but there are two things listening to that transaction. Okay, so they would listen to the event, yeah? No, they listen to the database directly. Would you expect those two things to identify themselves using an equivalent source? I would expect them to have exactly the same source and ID but different type. I guess I always kind of assumed that the event ID was almost like cloud-event specific. Yeah, I think so too. So let me try to — I'm happy this discussion is happening, because I still strongly believe it points to the fact that we have a missing field, and that is subject. Because the source, as we generally understand it, is largely equivalent to — and sorry if I have to say that word again — what we call topic in pub/sub. And that is the scope at which you subscribe, right? You register interest in a particular source that is emitting events to you. At that level, you make a subscription.
And then you're interested in events of different types, and each of those events you need to be able to tell apart, and you do that with the ID. But the ID is basically just a discriminator; you shouldn't have to interpret it in any particular way, it really just acts as a key. The missing field, I believe, is in effect the sub-element inside of the source that this event is in particular about. So I'll go back to the initial example that I talked about when we were discussing this: blob storage, right? You subscribe to a storage container. You don't know which files are going to be created yet, but you know that you're interested in files that will be created, deleted, et cetera, in that container. That is your source. And then a file gets created, and you get that raised out of that container, which means that is your source. You have a blob-created event — that is your event type. And then you have an ID, which is an artificial identifier that distinguishes it from another blob-created event. And then you still need a field that says: this is the name of that file. That sets it apart from other events called blob-created, because you need to be able to separate those two. And what we currently do in our implementation, because we don't have that field, is put this pound separator behind the source URI, so we can separate out the original source URI and this extra information in some way. And I think that's a missing field. I've been saying that from the beginning, and then we decided to consolidate what initially were four fields into one field, the source. And I still think that's a mistake. I still think we need an extra subject field. So I understand your reasoning there. In that scenario, wouldn't that file name, or whatever you call it, actually be an attribute of the event itself? What I'm saying — well, how do you...
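The pound-anchor workaround just described can be sketched as a tiny helper that recovers the two pieces: the subscribable source and the sub-element it was forced to carry. The URI here is made up for illustration, not an actual Event Grid source:

```python
def split_source(source: str):
    """Split a source URI of the form '<source>#<sub-element>' into the
    subscribable source and the sub-element (the would-be 'subject').
    Returns (source, None) when no anchor is present."""
    if "#" in source:
        base, subject = source.split("#", 1)
        return base, subject
    return source, None

# The storage-container example: the source is the container you subscribed
# to; the anchor carries the file this particular event is about.
base, subject = split_source("/storageAccounts/mystore/containers/images#photo.jpeg")
assert base == "/storageAccounts/mystore/containers/images"
assert subject == "photo.jpeg"
```

A dedicated subject field would make this split explicit in the event metadata instead of requiring every consumer to know the anchor convention.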
Because to your point, the event was emitted by the container, not by the file. So the source is the container. The ID is the discriminator for that event within that container. But then the file name is actually an attribute of the event schema, I would argue. Yeah, so there's a practical reason why I still would like to have that extra field, and that is filtering. Because if you watch for certain cloud events being emitted from that container and you're really only interested in, whatever, JPEG files, then instead of having to know the particular schema of the blob-created event, you want a field in the metadata that you can apply a suffix filter on: I want to see all the events of the following type where the subject field has the following suffix, and that is .jpeg. And if that's true, then I can dispatch the event, and otherwise I won't. And so that gives you a generic filtering capability that's cheap, without having to dig into the event itself, opening up the data and parsing it, et cetera. One discriminator that gives you effectively a sub-scope you can execute cheap filter expressions on, for routing and for selecting what the subscriber really wants. And that's what we need. That's what we use that pound-sign extra thing for. Sorry, wait, I was on mute. Sorry, let me go to the speaker queue, because I know, Scott, you raised your hand, and my hand was up there too. But let me just say one thing. Evan, thank you for pointing that line out — the database commit ID possibly being the ID. That got me wondering whether we need to be a little more crisp on what this ID actually is, because to be honest, I'm actually thinking the database commit ID is not an appropriate thing to use there; I would think that field would actually belong inside the data someplace, because that's application-specific data.
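The cheap, metadata-only filtering argument above can be shown in a few lines. The events and attribute names are illustrative (assuming the proposed subject field exists), not taken from any implementation:

```python
def matches_suffix(event: dict, attr: str, suffix: str) -> bool:
    """Cheap routing filter: keep the event if the named context attribute
    ends with the given suffix, without ever parsing the data payload."""
    return event.get(attr, "").endswith(suffix)

events = [
    {"type": "storage.blob.created", "subject": "holiday.jpeg"},
    {"type": "storage.blob.created", "subject": "report.pdf"},
]

# Dispatch only JPEG files, looking at metadata alone.
jpegs = [e for e in events if matches_suffix(e, "subject", ".jpeg")]
assert [e["subject"] for e in jpegs] == ["holiday.jpeg"]
```

This is the point about middleware: without a subject field, a filter like this would have to parse each event's data, or split the source by an ad-hoc convention.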
And while I'm not saying we need to ban this kind of use here, I tend to think of the ID as just something unique to distinguish this event from other events, not something semantically specific to the application. It's more just: I'm spitting out a stream of events and I need to say this is number one, this is number two, this is number three, and you're not going to get the same number more than once. It has no correlation to the actual data being processed. And by using things like a commit ID, you're overloading the semantic meaning of this field to no longer just be something you can use for dedupe purposes — now you're putting additional meaning behind it, like, for example, in Scott's use case, doing some sort of correlation between the events he's splitting up. And I'm not comfortable with that kind of overloading of semantics. I'd rather have a very clear single definition and single purpose for any particular field we have in here. But that's just my take on it. So Scott, I think your hand was up next. Yeah, a huge plus-one to what Clemens was just talking about. We're running into that same problem with Knative, where we would really like to be able to filter on certain things like the bucket, but not necessarily the UUID of the blob inside that bucket. And right now there's just no good way to take a look at source and be able to split it in half, unless you do something very special, like Clemens was saying they're doing. You can't do that as middleware. So if you want middleware that does filtering based on the service of the source, not the entity of the source, it's very difficult. Okay, so we're running a little long on time. I don't think we're going to necessarily solve this one here. So a couple of things. First, there were a lot of really good comments made in the chat and by voice about this issue. Please put your comments on the PR so we can continue the discussion offline.
But Clemens, it sounds like you want to open up either an issue or a pull request to add another field. Can you get that one out there too? It's kind of related to this discussion. Yeah, so what I'll do, since we have so much material on this already in the repo, is open up a new PR and then point to all of this. And we actually have the deck — in the repo, there is a section where we have a few PowerPoints, and there's one that talks about topics and stuff. Clemens, did we lose you? I think you're on mute, Clemens. So in the repo, there is a PowerPoint deck somewhere — I can go and show that to all of us; it's in, I think, share. And it's the topics-and-subject one. That thing is actually fairly deep, and it has a narration, so I talk through it. So if you open that, and if you are in possession of Microsoft PowerPoint — thank you very much — you can actually put it into presentation mode and hear me speak to that topic, where I make a very passionate argument for why we should have two fields. So if anybody has time and interest, you can look at it in parallel. I will make a PR and basically propose the subject field. Yeah, down here's the video and the PowerPoint. Yeah. Okay, so please put comments on there. Before we go back and do the final roll call, there are a couple of PRs here that I think are actually ready for discussion. I have one on what is optional. And Gem opened up one — I think, Gem, there are a couple of minor typos or syntactical things I'll make comments on, but I'd like to get people's opinions on whether they're okay with the general direction. So please look at that and comment on the PR itself. I think it's pretty much ready to go; we just need people to look at it and say yes or no. And with that, let me go back and do the final roll call. Klaus, are you there? Yes, I'm here. Excellent. Whoops, I keep doing that. Mehmet, are you there? Mehmet from Verizon.
Okay, Mr. Kowski? I'm here. Okay, Evan, are you there? I was on mute — this is Mehmet Toyam here. Excellent, thank you. Okay, Evan, are you there still? Yes, I'm here. Okay, and M — oh, I think he's not on the phone anymore, or whoever that was. Is there anybody I missed on the roll call? Christian here. Hello. Christian, got it. Okay, thank you. Anybody else? All right, cool. Thank you guys very much. Very lively discussion today; it was really good. Just please put your comments into the PRs that are out there, and we'll talk again next week. Thank you, guys. Thank you all. Thank you. Thank you. Bye.