Let me just get the other Doug. Well, his mic's gonna take a while. I'll circle back around. All right, why don't we go ahead and get started? Short attendee list today. Let's see, let's do community time. All right, any community related topics anyone wants to bring up? Things that are not on the agenda, mainly from newcomers. All right, nothing, that's good. All right, SDK, we did not have an SDK meeting today because we had no topics, so nothing new there. Okay, so. Oh, 0.7 just shipped. Oh, I'm sorry, I forgot to say that again. The Golang SDK 0.7.0 just shipped. Ooh, cool, excellent. Any questions about that? All right, moving forward then. So hopefully everybody knows we are gonna do a demo for KubeCon in less than two weeks. Excuse me. We only have, as of right now, I think for sure there are two participants, Scott and myself. I think Jude is working on an implementation, I think Veroon slash Oracle, you guys might be working on one as well, I'm not sure, but just a reminder. Yeah, yeah, go ahead. Although I have a lot of questions about it. Okay, well, we do a phone call right after this one where we talk about everything related to KubeCon, whether it's the demo or the presentation. So feel free to join that one. The biggest thing is for everybody else, if you want your company to be part of this demo, you've gotta start coding now because this one is bigger than the previous ones. So it'll take some time to get it just right. So don't wait until the last minute if you wanna join. Just a reminder, you've all been warned. So let's see what else. Okay, so KubeCon itself, as I mentioned, we do have a meeting right after this one to talk about anything related to KubeCon. I know some people have been working on their slides. I could be wrong, but I don't believe anybody's slide deck is actually completed yet. I believe the official deadline has already passed. So technically we're all late, but at least we're all in the same boat together.
What I'd like to do is to pressure everybody to get their slides done no later than next week's phone call, so next Thursday. So at least this working group can look at the slides and have comments and time before the KubeCon the following week. So everybody on the call who's working on slides, please try to get your stuff in there sooner rather than later, but no later than like Wednesday night next week if possible. All right, any questions about KubeCon or the demo that are appropriate for here as opposed to the meeting in one hour? Okay, moving forward then, KubeCon China. Kathy and I talked a little and Kathy is gonna be there. So as of right now, Kathy and I will be doing some presentations. There are two 35 minute ones, one for CloudEvents and one for Serverless. If you are going to be there and you would like to be part of those presentations, just let us know. Otherwise, Kathy and I can handle it. It's only 35 minutes each, but the more the merrier if you guys wanna join. Let's see what else. Okay, so I was supposed to present the CloudEvents status on the CNCF TOC call this Tuesday. Unfortunately, they ran out of time. So I got bumped to next Tuesday's call. However, I did actually finish the presentation. So the link is right here. If you guys wanna look it over and if you have any last minute comments or suggestions for edits, just let me know and I'll try to get them incorporated. A couple people did some reviews. I think Mark was the biggest one who did a review, and I made some changes based upon that. But so I think it's pretty much good to go. It's not terribly exciting. It's a summary of what we've been doing. So obviously, since we did not have the phone call, or rather we didn't talk about CloudEvents on it, I still don't have any resolution on what three independent end users means yet, but they did agree to talk about that on the call. So hopefully we'll get that soon. All right, any other topics before we jump into PR stuff? All right, cool, moving forward then.
Klaus, I believe you're on the call. I'd like to quickly talk through this relatively easy one. Okay. Oh, that was just, I just added this link in the document that describes all the existing documented extensions. And I realized when working on my slides that the link to the dataref extension was missing. So that's all. Okay, very straightforward. Any questions or concerns on this one? All right, I figured that would be easy. Good to get it out of the way. Thank you Klaus for noticing that. All right, next one. Alan is not on the phone call. So I don't think there's anything really to discuss on this one. For those of you who were not on the call early, Clemens is actually gonna possibly run into Alan next week at a face-to-face meeting. So he will poke him on that to try to get his opinion on that one and possibly then join our phone call next week. So what I'd like to suggest is that we hold off until that time to see if we can get Alan to speak up to see if he can sway our opinion. Otherwise we will close that issue, or that PR, with no action. Is that okay, waiting one more week then? Is everybody okay? All right, I can't hear any complaints. So let me note it here, otherwise I'll forget. All right. Thank you. All right. All right, Heinz is on the call. Excellent, cool. So let's see if we can revisit this issue, which is the event key stuff. Heinz and Clemens, maybe one of you two guys can summarize kind of where we are in the latest round of concerns. And Clemens, maybe then you can then talk about your proposal or how to resolve this going forward. Yeah, I've been listening to Heinz speak on this very entertaining call from a couple of weeks ago. And I think there's, so I agree that Heinz is right that it's superfluous as an artifact if you just look at the message and then decide your partitioning strategy from the message. So either passing that as a parameter to the Kafka API or even making a call for the partition choice directly.
So basically with the Kafka API you can either pick a key or you can even go and send directly to a partition, and the same is true for the API, for instance, that we have for Event Hubs. So for that, there's no need to have that key inside of the message. And of course, if you have the message and you look at it, you can go and take, you know, you can synthesize a key from any material that's in the message. And so you can basically just pick a natural key that's there without needing extra elements. At the same time, the counter argument that I heard on the call was mostly one of simplicity and one of effectively you have to have that in the Kafka API. So why not have an extension that creates an artificial key? And then the discussion went back and forth for a while. And I think we can actually have both. And that's what the proposal is. And that is the Kafka binding, and any binding that's requiring partitioning, should mandate that there is some form of a callback that basically goes from the binding, whatever the transport mechanism is, and allows the application to go and take a look at the message. The application will know what the schema is. It will have full visibility into the message itself, and then that callback basically asks the application to either create a partition key, or we could also have a mechanism that allows that callback to determine the partition outright. That's really up to how the Kafka binding or a potential future Event Hubs binding might want to do that. But then I don't think it hurts to have the partition key as an extension as we have it right now, which effectively becomes the default choice for that mechanism if present. So you can add the partition key if you want to using that extension. And if you don't register the callback, the Kafka binding will then go and look for that partition key extension.
And if that's not present, it will just make up a random key and then it gets effectively randomly distributed across partitions. So I think we can have both and it doesn't hurt to have that, to have the partition key mechanism. All right, Jim, your hand is up, and I'd like to pick on Heinz if that's okay, Heinz. No problem. Jim, you're first. Just a quick one. That sounds really good. I haven't read this. I just want to understand who wins. Is it the provider that's plugged into the SDK or the extension in the cloud event? I would say the callback overrides. So effectively, when you have the callback, the callback has the option to go and take a look at the extension, and that's kind of the default implementation. But then whatever the callback returns as the partition key, or partition ID if you want to have this too, is then what actually counts. Cool, okay. And we would make this, what, a suggested pattern for any transport that wants partitioning support? Yeah. Yeah, that's all. Okay, thanks, cool. Okay, so Heinz, I'd like to get your take on all of this. Yeah, sure. So again, part of it is more of a philosophical issue as opposed to some hard and fast rule here, where normally when you have these separations, you try and keep them clean. So from my perspective, the API, sorry, the definition of the specification for what an event should look like should not have direct dependencies necessarily on the binding or the API. I agree, it might make it easier to build the binding or the API, but it opens the floodgates then of I wanna make my binding, if I start to allow third parties to have their own supported bindings, if they have large environments such as maybe an IBM MQ environment where they might say, well, I want things put into that standard as well, which really are not part of the event. They're really part of either the transport, which the binding takes care of, or the API.
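The precedence Clemens describes, callback first, then the well-known partitionkey extension, then a random key, could be sketched roughly like this. All names here are hypothetical for illustration, not an actual SDK API:

```python
import uuid

def resolve_partition_key(event, callback=None):
    # A registered callback wins: the application inspects the event
    # and returns whatever key fits its partitioning strategy.
    if callback is not None:
        key = callback(event)
        if key is not None:
            return key
    # Otherwise fall back to the well-known 'partitionkey' extension.
    if "partitionkey" in event:
        return event["partitionkey"]
    # Otherwise make up a random key, so events spread across partitions.
    return uuid.uuid4().hex

event = {"id": "1234", "type": "com.example.order", "partitionkey": "order-42"}
print(resolve_partition_key(event))                     # -> order-42
print(resolve_partition_key(event, lambda e: e["id"]))  # -> 1234
```

This matches the answer to Jim's question: the callback overrides the extension, and the extension is effectively the default implementation when no callback is registered.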
And in fact, the point that I was trying to make is very similar to what Clemens had mentioned, which is I would actually see this maybe not even at the binding, but almost at the API level, where you should have some kind of pre and post processor whenever you receive a message or whenever you send one, for exactly that reason. There are all kinds of things I may want to do based on that event data, and that includes the payload data within that event, that may be binding or transport or API specific, and it might be better to implement them there just to keep the layers pure and clean. I don't know if that makes sense. It certainly does make sense to me, and so that's effectively the mechanism that I'm proposing. It's an API level callback mechanism and it's mandated, which then yields control back to the application to say, okay, so you give me that message, now go and pick out what the right partition key is, because it turns out I need to have a partition key for this transport, so that's in that spirit. The reason why I think it's not hurting to have the partition key as a generic extension is that the problem is not constrained to Kafka, and there are plenty of reasons, and so for instance, we have in Service Bus in Azure, we also have a partition key concept for queues, because we have queues that for reliability and throughput reasons, we go and shard the content across up to 16 sub queues, and to make sure that all the messages stay together that should belong together, and for you to have a little bit of control, you can also set a partition key. It basically just means that all the messages that you think belong together by some means are stuck into the same partitions so that they preserve order.
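The sub-queue sharding Clemens mentions can be pictured as a stable hash of the partition key selecting one of the sub-queues, so that events sharing a key always land together and keep their relative order. This is purely illustrative, not Service Bus's actual algorithm:

```python
import zlib

def subqueue_for(partition_key: str, n_subqueues: int = 16) -> int:
    # A stable hash of the key picks the sub-queue; the same key always
    # maps to the same sub-queue, which is what preserves ordering.
    return zlib.crc32(partition_key.encode("utf-8")) % n_subqueues

# Events with the same key stay together; different keys spread out.
print(subqueue_for("order-42") == subqueue_for("order-42"))  # -> True
```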
So it's not a pattern that's exclusive to Kafka; basically everything that has some kind of a partitioning model needs to have a hint, and that hint is something that the application cares about. And that's why I think for the case where you know that you are dealing with a partitioned entity that you send to, so basically you want to formulate a cloud event, you want to use the standard SDK, and then you send to the particular entity, the Kafka topic or the Service Bus queue you want to send to, using our SDKs, it would be very convenient for you to have this common mechanism to kind of, from the application perspective, give that hint, and then how that hint is really applied is then the function of the binding. So you could always go and override it, you can set it, but then if your partitioning strategy is different or if it even changes, then the binding has the opportunity to go and say, okay, that's fine that you give me that hint, but I'm actually gonna do that differently now. So that's the idea. I don't think that partition key hurts because we also have that in our APIs. So Tapani, I think your hand was up first. Yeah, sorry. I wanted to point out that yes, like you said, it does make sense that individual transport bindings shouldn't be able to demand particular things from the overall main spec, but this is not in the overall main spec, this is an extension. And I think the best way to, or one example of why that as an extension is useful, is that if you were to send your cloud event, for example, over HTTP to some entity that you know will partition it later on, if you only have a callback mechanism in the SDK or API or something, you would need a third party extension to indicate that within the cloud events over the HTTP binding, and then why would you not have a well-known extension for that, which would be standard?
That's just my point of this being an extension which doesn't mandate what the SDKs do. The individual bindings could just as well specify their own third party extensions, Kafka, the Kafka binding, could specify another third party extension which it uses for this, but I think it's better, because it's such a common concept as Clemens pointed out, to have it as a generic, well-known extension. Okay, Heinz, I think your hand's next. I agree from the perspective of an extension, absolutely, but again, it goes back to this concept of the different layers with this separation of church and state. And again, I believe that having that is kind of redundant, where if I am gonna create it, and regardless of whether it's Kafka, if I'm creating that key to do this partitioning, that key is going to be based on something that's in the data that I already have. If that is the case, then if I already have that data captured in the event when it's passed to the layer below, my plugin or pre or post processing, however you wanna implement it, or even have it as part of the binding, will also have that same data. So why do I have to create it before I send it where it's part of that same data? It's kind of like if you look at the JMS spec, one thing I've always hated is the fact that if I wanted to do a selector, I had to take data that was already in the payload of my message and duplicate that in the header just to be able to do the selector, because that's how they decided to implement the binding layer a little farther down. And I'm trying to avoid that where again, things will start to sneak in where if it's in the data, if it's created from the data, it should be done at a lower layer. If it's mandatory for a lower layer to work, then I could see that. If it's an extension that you want to do as a, almost like a user specific extension where you wanna do it for your application, does it need to be a documented extension? Make sense? So my hand's up first, then we'll be able to go to you.
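Tapani's HTTP scenario is that a well-known extension simply rides along as a header, where a downstream partitioner can read it without any callback being available. A sketch of what that could look like in the HTTP binary mode, using the spec's `ce-` header prefix convention; the extraction helper and the exact attribute set are illustrative assumptions:

```python
# CloudEvents binary mode over HTTP: context attributes become ce-* headers,
# so the partitionkey extension travels with the event to any intermediary.
headers = {
    "ce-specversion": "0.2",
    "ce-type": "com.example.order",
    "ce-source": "/orders",
    "ce-id": "1234",
    "ce-partitionkey": "order-42",  # well-known extension, readable downstream
    "content-type": "application/json",
}

CORE = {"specversion", "type", "source", "id", "time", "schemaurl"}

def extensions_from_headers(h):
    # Anything with a ce- prefix that is not a core attribute is an extension.
    return {k[3:]: v for k, v in h.items()
            if k.startswith("ce-") and k[3:] not in CORE}

print(extensions_from_headers(headers))  # -> {'partitionkey': 'order-42'}
```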
So a clarifying question then for you, Heinz. That lower layer, though, that's gonna extract the data or the bits from the data to generate the key, it would then have to understand the payload, right? It would have to understand the data, the format, the schema, whatever you wanna call it, to be able to extract the appropriate data. Or the other option is, I think what Clemens was suggesting, is where there's a callback mechanism, right? So in my mind, whether there is a callback mechanism so that the transport there can be independent of the data, or whether the application gives you the key up front through some extension, is there really much of a difference in your mind? Well, you're adding something into the event definition for processing at the next level, and I agree it makes it easier to process at the next level, but that also means now, rather than doing it once in the binding layer, you're gonna have to do it for every message that you send using that event. And that's really not that efficient, and you're putting dependencies now into your application as opposed to extracting them at the binding layer. So for example, if I'm generating a key, for example, a Kafka partition key, that key is based on something in the data. I mean, it's not just some random number or a sequence or something, and it can also be quite complex because Kafka has done a great job of making that key an object. So it could actually be multiple pieces of data. So if it's being done over and over the same way, it should be done at the transport or the callback, where I plug it in once, as opposed to doing it in every event message, sticking it into the event spec somewhere, in the event specification, to pass it to the lower layer. So, it becomes inefficient. You're passing lower level processing back into the event and you're kind of destroying the shielding. It's kind of like the concept of something like Spring Cloud Streams, where I completely abstract it out.
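Heinz's point is that the key is synthesized from data the event already carries, so registering one derivation function at the binding layer avoids duplicating that data into a context attribute on every single event. A hypothetical callback over an assumed payload schema, showing a Kafka-style composite key built from multiple fields:

```python
def order_partition_callback(event):
    # The key is built from fields already inside the event's data
    # (hypothetical schema), so nothing is duplicated into the context
    # attributes; composite keys just combine more fields.
    data = event["data"]
    return f'{data["region"]}/{data["customerid"]}'

event = {
    "id": "1234",
    "type": "com.example.order",
    "data": {"region": "eu-west", "customerid": "c-77", "amount": 31.5},
}
print(order_partition_callback(event))  # -> eu-west/c-77
```

Registered once per topic, this runs for every outgoing event without the producer having to set anything extra per message.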
Everything is done in annotations and configuration files that all get put together at runtime, as opposed to I have to run it in every application. But if you do have to implement it, I totally agree it should be an extension as opposed to part of the spec. Okay, Tapani, I think you're next. Yeah, sure. I agree on an abstract level with what you're talking about, Heinz, but I disagree about that being the only viewpoint, or the only usage viewpoint, for cloud events as a spec. If you are implementing a higher level API that handles events and you know you will be partitioning those later on, and your users know you will be partitioning those later on, there is no generic callback mechanism we can specify that would work, because if you get those events over HTTP, we're not gonna start specifying HTTP callbacks in our spec. So the situation then becomes that if you're building a higher level API that takes cloud events and then knows it will be partitioning those, you will have to define your own mechanism of getting that key, because you cannot let the consumer decide it in the callback, because they don't have access to that callback. And that might well be a use case for cloud events, and that's why I disagree that it isn't useful to document it in a well-known extension. Okay, I think, Evan, I think you're next. I guess the one thing that's not clear to me here, and I think we've danced around this a little in the discussion, is the partition key is basically the producer having an opinion about how this maps in terms of a grouping to the destination. When we're talking about extracting attributes from the payload, that sounds like it's closer to the consumer deciding how the partitioning happens, and potentially different consumers might make different partitioning choices. So if we think that it's likely that different consumers will end up wanting to partition different ways, I know Clemens has some good examples from IoT spaces of type of sensor, building, and so forth.
Then you might want to partition differently, and having this documented probably doesn't match most consumers' needs. The callback idea was also on the consumer side, sorry, on the publisher side. So the publisher writes an event, and then the callback is also basically, you don't know where that event goes as you produce it, and then you send that off to one of the chosen transports that is configured underneath you, and if the chosen transport happens to be Kafka, there's a callback hook that comes back up into your app, and then that says, okay, you give me this event, now give me a partition key. That was the idea. So the consumer is not in play here. Yeah, I mean, it's consumer specific how you want to partition, or the consumer does care about how you partition your events, but the consumer unfortunately cannot decide that, and it makes no sense to specify an extension that does that, because no transport we know of does that, because it's not very efficient or useful. It's out of band communication that you need for that. But this, so the implementation of that callback would be effectively specific to that particular Kafka topic that you're sending to. So the particular Kafka topic has a particular partitioning strategy, and you're writing a callback function that serves that particular partitioning strategy looking back at that message. That's the idea of that callback. Okay, so I'm trying to figure out where we are here, because I'm not sure I'm hearing anything that's a whole lot new from people, and I don't want to keep circling. Heinz, let me pick on you. Okay. So I get the sense that you clearly don't, if it was solely up to you, you clearly would not put this in there. I get that, but what I'm trying to figure out is whether you could live with this, because what I'm, and I'm not an expert in this space, but what I'm struggling with is that I'm hearing from the people who created the Kafka binding that they are blocked without this.
And that's the biggest thing that's running through my mind in terms of whether we need this or not and should put it in there, at least to satisfy them, because they seem to believe they actually fully need this to make progress. I'm trying to resolve that with all the good points you've made about it's not necessary, right? So I'm trying to resolve those two. They seem to be in conflict. Well, unfortunately you have me at a disadvantage because I haven't really looked at the Kafka binding that was implemented. However, having written several connectors against Kafka in and out of other third party messaging systems, this was a topic that we've seen come up quite often. And we've usually figured out how to solve the problem without having to front load dependencies into the other third parties. And you actually implement those at the binding layer, which in this case was a connector layer. So it almost sounds like we've kind of painted ourselves into a corner and now we need to add this in rather than perhaps revisiting the binding. The other part, which Clemens made the excellent point of as well, is this is coming up for partitions. What's gonna come up when there are other dependencies that would make it easier for bindings or the API definitions, without a standard mechanism to plug these things in to solve it at a lower layer? You're gonna just keep coming back to this to make it easier for the underlying layers. And I'm not against making things easy, don't get me wrong, but at the same time, if you start to make things easy, it opens the floodgates of all kinds of other things that people wanna add. However, if you wanna add it as an extension, that's one thing, but then the extension question becomes, well, it's not just the key. I also have to know the topic. I may have to know the key. It may have to be different keys depending on different things. Is it an object? Is it a string? Is it binary now?
Because, you know, even if you accept that you're going to pass something as a key, what is that key? Because the flexibility within Kafka in itself is gonna open another can of worms as well. Does that make sense? Kind of like, Clemens, you raised your hand. Do you wanna respond to that? No, the hand was left up. Oh, darn. Okay. So just to be really clear, the Kafka binding that we're talking about is just a proposal for a spec. It's not actually a formal spec in our working group yet, and it's definitely not implemented as far as I know. So it's not like it's set in stone. Tapani, I think your hand's up. Yeah, I just wanna point out that although this does allow that binding and other stuff to move forward, and I think it works well as a lowest common denominator, despite the floodgates case you were talking about, Heinz, which I agree about, we shouldn't document every single thing everybody wants as a well-known extension. They should define their own extensions, but I think this is widespread enough. But I do think that we should implement the lower level hooks, or whatever you were talking about, the callbacks that Heinz and Clemens have been suggesting, because I think that is also important for other kinds of things in bindings. And yeah, I think that definitely should be added as an issue. I'm not sure I understood what you're saying, Tapani. It sounds like you're saying you definitely think that the Kafka spec PR should be augmented to include a callback, right? No, I'm saying we should unblock them, but we should be talking about what that low level callback looks like that Heinz and Clemens have suggested. Okay, so you are in favor of this PR as it stands? Yes. Right, okay. So Heinz, let me ask another question.
Would it be horrible in your mind if we were to accept this as an extension for right now, with the assumption that we can revisit this later based upon the next round of PRs, which is gonna be the Kafka PR, and see whether you can maybe convince them that they did not need this after all, and at that point we can always remove it since it's just an extension? Yeah, absolutely. I'm just bringing up the issues at this point to hopefully make sure they don't come up later on. The fact that it is an extension, again, is not such a bad thing. But if we're going to do it, I'm just looking at maybe something a little more generic that can be reused downstream by things such as callbacks or dynamically loaded classes and these kinds of things, as opposed to doing it specific for just Kafka. So, but again, I'm opposed to it, but I'm not going to go apoplectic if it continues farther on. Well, that's good. We don't want to send you into a seizure. Exactly. Yeah, one of the things we always did talk about in the past when it comes to extensions is that it is not just a way for people to go beyond what we define in the spec for their own spec that they're going to need, but also a bit of a playground to see whether something that sounds like an interesting idea actually is worthy enough to maybe at some point make it into the spec, or worthy enough to be a common thing that enough people in the community want even though the spec itself doesn't have it, and this kind of feels like it's in that experimental stage. That's why I'm kind of inclined to maybe push us a little to say let's add it, since it is just an extension, which means it's even more optional than optional stuff in the spec, just to see what happens with the Kafka binding, and that way we can always revisit it later. But, Jen, I see your hand is up. Yeah, and I'm probably going to vehemently agree with everybody else. I mean, the extension is a nice to have, but I don't think it should stop the Kafka binding from moving forward.
I think what we're really saying is that, as Clemens described, if you're writing an SDK, in some situations you're going to have to call back into an application to get some extra information, whether that's a partitioning key or a destination topic on whatever your transport is, or whatever. So I just want to make sure that the people that are working on the Kafka spec don't require this extension to be present for their transport binding to work. That's my only comment. Okay, I think we may have to go back around with them and see if that is something they could live with or not. So with that in mind, let me put forward a proposal and see what happens. So my proposal would be to, one, open up another issue to add more examples here, because I think several people including Tapani and Kat have asked for more examples, except I don't think that should block the PR from going in. Two, an action item to open an issue so we revisit whether this extension is still actually necessary after we resolve the Kafka PR. And then three, ask the guys who are writing the Kafka binding PR to strongly consider whether they really need this or not, and Heinz, we'd like you to join in that conversation. Yes, no worries. So what do people think about that? Let me actually write it down while I think of it. That sounds very good, Doug. Okay, I'm glad you like that. Let's see if I can actually remember what I said. Sounds good, but I also wanted to point out there was a great thing you said there, or someone said: we should strongly suggest to binding and SDK and whatever authors to not require extensions. Yeah, that was a bit of a jab that Jem made, yeah. Oh yeah, Jem or someone, but yeah, I think that's a great statement that should be in the primer or somewhere. Don't require extensions for your thing to work unless it's actually like a must have. Okay, so I'm gonna pick on Heinz here. No, I'm not gonna pick on any other ones. Oops.
So this is basically the proposal, these three lines right here, in addition to accepting the PR. What do people think about that? If you guys can read that; sometimes the highlighting is hard to read. Jim, I see your hand up, is it old? Oops. Okay, that's fine, just checking. Any comments, questions about the proposal? Okay, is there any objection to moving forward with this proposal? So to be clear, this will accept this PR as it stands right now. And then we'll open up these other action items. Going once. I appreciate you guys' efforts on this, and especially the flexibility over here. All right. Why is this here? That's just weird. Okay, another exciting one. Quotes for attributes in HTTP, in particular strings. So, remind me where we left off on this one. Scott, do you mind if I pick on you to try to summarize where we might be on this one? I think the last comment is something about the HTTP binding transport, or the binding spec, to not wag the entire cloud events spec. That's right, a wag the dog comment, that was good. Okay, however, while the comment is funny, I'm not sure where that leaves us in terms of proposals going forward. Jim, go ahead. I'll speak up since I wrote that sarcastic comment. I guess where I was coming from was, it just seemed a bit weird to me that we had a transport binding that worked as far as I was aware, but because it didn't follow sort of the conventions of HTTP, we were using that as a rationale to then go back and change the type system and then subsequently, I guess, change all the other bindings to comply with that. So I'm not averse to changing the type system if that's what we wanna do, but I think that would be a better first step, and then change all the bindings to match that type system, rather than the other way around. That's really what I was trying to drive at. Hopefully people, you know, so Scott, don't take that the wrong way, it was tongue in cheek there.
So you could pretend I did it the first way, where I changed the type system and then I changed the bindings. Oh, just to be clear to Jim, you are correct that technically the HTTP binding spec that we had technically worked; the problem is no one actually implemented it. Everybody's pretty much ignored it. And that's why we came to this problem. No, I get that, but I mean, is that a spec issue or an implementation issue? I guess, yeah, okay, but that aside, are we really voting here on saying, putting a restriction that all context attributes must be strings with some, you know, defined format, and doing away with numbers and anything like that? That's the bottom line. Clemens, I heard you sigh back there. Did you want to say something? Well, as I wrote, the idea was to basically go and use JSON's type system, which means the way it encodes types, which yields strings, and then do the encoding that way, which works for HTTP and then works for other transports that are constrained to strings in their headers. And so that was the basic idea. And since a string in JSON is encoded with quotes, you can't just omit the quotes and then claim that you still have JSON, which means you can either say, okay, we are not going to encode in JSON, we're going to encode strings. And then everything that is a string will have to go and further subdivide. And then if we want to have integers, we basically have to go and refer to a rule, which might be the JSON rule, for how to go and encode that integer. But with that, we'll be losing inference, because our signal for whether something is a string, that's the leading quote, is no longer there, which means you can't tell 41751 as a number from 41751 as a string. And so we can go and restructure this, and I don't think it's gonna be, I mean, it's work to go and restructure this.
So effectively what that means is that we have to change the type system, such that if we care about having numerals, if we care about having timestamps, if we care about having URIs, we need to retain some sort of a type system, because we want to have common definitions for these. Certainly, in all the places where we have URIs, we don't wanna repeat that encoding. So there's gotta be a common place you can go and point to. And so now the question is whether we really need to have our own type system rather than borrowing the one from JSON. So for me personally, that's more an implementation issue than anything else, and one that is more cosmetic than anything else. But this is one thing where I'm not willing to fight a hard fight. I'm okay with wherever the majority opinion falls. I just find JSON — rather than building our own parsers and our own encoders — I find JSON pretty convenient. Hi, my hand's up. So the way I kind of looked at this, and I put this into my comments here, I kind of view this as two different options in front of us. And I know this may feel like letting the HTTP tail wag our dog, but unfortunately I do think HTTP is a huge player in all this, and as odd as it may be, I'm okay letting it kind of wag us a little. I kind of view this as two options. One is make everything a string. We for a long time have talked about how CloudEvents should be minimal things to add to the events. This isn't meant to duplicate everything that's in the data; it's just meant to help route things or get us to a destination. And we always talked about keeping it small, lightweight. And I think forcing everything to be a string would help ensure that happens, because when you have a full-blown type system, it kind of encourages people to put possibly lots more data in there than they normally would. I'm not sure, maybe.
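Clemens's point about losing inference can be made concrete with a small sketch. This is not the actual spec encoding, just an illustration: JSON's value encoding keeps quotes on strings, so dropping the quotes for "natural" HTTP header values erases the one signal that tells a number from a string.

```go
package main

import "encoding/json"

// encodeJSON renders an attribute value the way JSON would, so that a
// string keeps its quotes and a number does not -- the very signal the
// group is debating dropping for HTTP headers.
func encodeJSON(v interface{}) string {
	b, _ := json.Marshal(v)
	return string(b)
}

// Example: encodeJSON("41751") yields `"41751"` while encodeJSON(41751)
// yields `41751`. Strip the quotes to get a bare header value and the
// two become indistinguishable on the wire.
```

The design trade-off: keep JSON encoding and headers look odd (quoted strings), or drop it and every receiver has to guess types.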
Plus the fact that up till now pretty much everything we define is a string — except for, I think, one attribute, which is an integer — tells me that maybe we wouldn't be losing a whole lot if we did say everything's a string and just be done with it. But that's one option. The other option, which I honestly don't know how I feel about, is basically what Jim was suggesting, which is keep the type system, but then encode the type into the attribute name for HTTP. So for example, for an integer it might be CE-I-dash-attribute-name, or CE-S for string. That way we keep the type system, but we don't necessarily have to use JSON to do the encoding. We can do what may be more natural, right? So integers appear as just straight integers, strings appear as strings without quotes, that kind of stuff. I kind of view those as the two options in front of us. I have a slight preference for just going with strings, because I like keeping things simple, but I'd love to hear what other people think in terms of options they either want to offer up as alternatives, or even if they say keep what we have right now and force everybody to do the JSON encoding as we currently have in the spec. What do people think? I love my suggestion, but that probably doesn't count. No, it does. It was an interesting one. Like I said, for me personally, I go back and forth on it. There's part of me that says it's elegant, it's easy, it gives everything people need to know. The other part of me says it's a little bit odd to have a type in there, but at least it's a single character with a dash. It's not that big a deal, right? But you can't tell the difference between an extension "I" and the type. Well, we would have to require HTTP to always have a type in there anyway, so it would always be two dashes, right? Yeah, or whatever you ended up doing. Yeah, I mean, I was just trying to move the definition of the type somewhere else. That was really my intention. Do we have this problem with JSON?
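A minimal sketch of Jim's second option, assuming hypothetical `ce-s-`/`ce-i-` header-name prefixes (the exact names were not settled on the call): the type rides in the header name, so values stay "natural" — no quotes, no JSON.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// toHeaders carries the attribute type as a one-letter code in the
// header name: "ce-s-" for string, "ce-i-" for integer. These prefixes
// are illustrative, not from the spec.
func toHeaders(attrs map[string]interface{}) map[string]string {
	h := map[string]string{}
	for name, v := range attrs {
		switch t := v.(type) {
		case int:
			h["ce-i-"+name] = strconv.Itoa(t)
		default:
			h["ce-s-"+name] = fmt.Sprint(t)
		}
	}
	return h
}

// decode reverses the mapping: the name prefix, not the value syntax,
// tells the receiver which type to reconstruct.
func decode(header, value string) (string, interface{}) {
	if strings.HasPrefix(header, "ce-i-") {
		n, _ := strconv.Atoi(value)
		return strings.TrimPrefix(header, "ce-i-"), n
	}
	return strings.TrimPrefix(header, "ce-s-"), value
}
```

This is why the "two dashes" exchange matters: without a mandatory type segment, `ce-i-retrycount` is ambiguous between an integer `retrycount` and a string extension literally named `i-retrycount`.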
Can you elaborate a little? An unknown extension that is of type timestamp — how do you know that it is not just a string? The spec does not define a type called timestamp. Right. What is this? Yeah, exactly. It's a string with a format. Yeah, timestamps are just strings, but the specification of that attribute says it has to be in this particular format — it isn't typed in any way. Let's say we encode it via AMQP. Yeah, then it will be a timestamp. No, it won't be a timestamp, because you don't know that it was defined as a timestamp — it's an extension you haven't seen. Well, that depends which SDK world you're in. So in the C# SDK, I allow for typed extensions, and if you use a DateTime and use AMQP, that actually maps straight to an AMQP timestamp. So let's imagine a system with three components: a sender that's defined the extension; a router, which is unfamiliar with the extension, but receives the content via JSON and sends it via AMQP; and then a consumer at the end that receives the data via AMQP. So the router doesn't know about the extension, but the sender and the recipient do. And the sender is sending via JSON. The router sees an extension attribute whose type it doesn't know, which has some string contents which, if parsed, could be a timestamp. It now needs to send the correct type on to the destination, which could be string or could be timestamp, and it doesn't know which, because it was built before the extension was defined. Well, basically, Evan, I think what you're suggesting is that when all this is said and done, unless we go with everything's just a string, we may need to add a timestamp type. Including to JSON — JSON has this problem. Yeah, so JSON has this particular problem. That's right. Because JSON has a pretty weak type system. And URI-reference has the same problem, too.
URI-reference and timestamp are both strings that mean something deeper, and JSON isn't encoding the difference between string and timestamp, or string and URI-reference. I think the actual problem here is even allowing you to use transport-specific data types that we don't have in the spec, because that does break interoperability. If you don't know about the extension, you'll be — Why not define timestamp in the spec? Well, I don't think we can define everything that you would have in different bindings. So even though it would be convenient to make it all strings, I can see the need, since message selectors already came up. Let me talk concretely for a second. If we're thinking about having an HTTP network where events are being routed through multiple hops, and you have an unknown extension, and the extension needs to carry a date, then if you have a message selector which wants to go and make a range comparison around that date, well, that needs to show up as a date. If we say, well, for interoperability purposes, everything needs to be a string, then you're effectively precluding, in the AMQP case, that you can go and route that metadata so that the AMQP message selector can make sense of it. So in that particular case, Clemens, if it's an unknown extension, how would AMQP know to convert it to a timestamp? So if all you do is AMQP — for the moment, that is my world — I set an extension on a CloudEvent and I send that CloudEvent, map it such that the extension property shows up in the application properties. In the application properties, the value of that property is timestamp-typed. So AMQP has a type system and knows timestamp. So as long as you're in AMQP land, you will preserve the type identity there. As soon as you break out and you go to a lesser type system, where we have the problem with JSON, then you don't.
Right, so what do you do when you're in a world where it appears on the wire as a string, but it's coming into AMQP at some point and you want to do that filtering you're talking about? Can you ever make the assumption that it's a timestamp, or do you always have to treat it as a string because you just don't know? Well, it may be coming over the wire using AMQP. Well, then I'm not following you correctly — it's something before you. All it knows is that it's a string, right? When can you make the assumption that it's a timestamp, or can you never? Because it seems to me that you can never assume it's a timestamp if all you know is that it's a string. I could infer it. Okay, because what I'm kind of wondering is whether — I can do a try-convert, and if I can convert it to a timestamp, then I will treat it as a timestamp. That's one way of doing it. Basically saying, if it's a well-formed ISO 8601 date, then it's a timestamp, and otherwise it's a string. Well, that gets to something I was wondering about, which is whether, at the CloudEvents spec level, we could keep everything as a string, but then let the implementations of the spec, or SDKs, or whatever sits on top of the spec, make those decisions for themselves, right? If it knows certain types aren't just strings, let it convert them and expose them to its users as something other than strings. But at the CloudEvents spec level, everything can be a string. Is that something that's even possible, or is that too confusing? Because we need more clarity. Again, if you want to take events and you want to use middleware routing expressions on them in an AMQP network, or an AMQP broker, or whatever broker that may be, they're always broken out into properties. That's also true for the old AMQP, and that's true for MQ and for all the other ones. Message brokers generally have typed properties.
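The "try convert" heuristic mentioned above can be sketched in a few lines. This is only an illustration of the idea, not SDK code — and it also makes the objection visible: a value that merely *looks* like an ISO 8601 timestamp gets silently promoted, whether or not the sender meant it as one.

```go
package main

import "time"

// inferTimestamp applies the try-convert heuristic: if an unknown
// extension value parses as a well-formed RFC 3339 / ISO 8601
// timestamp, treat it as one; otherwise leave it a string.
func inferTimestamp(s string) (time.Time, bool) {
	ts, err := time.Parse(time.RFC3339, s)
	return ts, err == nil
}
```

A router could use this before promoting an attribute to an AMQP timestamp-typed property, accepting the risk Jim raises later: inference can misinterpret values as they move between transports.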
Then having all of those as strings basically means that you now have to go parse ISO 8601 dates in your expressions, and you can't do that. Okay, Evan, was your hand old or is that new? Old, I think. Okay. I'm gonna write this up in the issue anyway, because I think it's worth recording there. Yeah, I agree. Thank you very much. Yeah. So I have no good solution in my head. I understand the simplicity of making everything a string. Yeah. At the same time, I'm worried about scenarios where you really want to go and take a look at a CloudEvent and all of its custom extension properties for an application, and those are promoted properties. An extension is effectively a promoted property, if you look at it from an application perspective, right? You have an app, the app goes and creates a payload for its event, and the event is relatively complicated. And of course, what the middleware generally can't do — because of complexity, because of compute workload, et cetera — is know all the potential data structures, payload structures. And sometimes it just can't crack them, because they are encrypted, et cetera, et cetera. So what you're doing instead is taking some elements out of your payload and promoting them out to properties — which we call extensions, but they're kind of application-specific. And then if you want to go and have the middleware do some routing rules based on those, then strings become much more difficult than numerals or dates. Like, filtering by numbers, partitioning by numbers is much easier than doing that by strings, where you need to do string manipulations. A string becomes a pretty unwieldy thing for filtering rules. Okay, we're running a little long on time here. So let me just get Jim and Topini, then we have to end it here. So Jim, I think you were first. So I agree with the assertion that if you make everything a string, then you actually have other problems.
I sort of rail against any sort of type inference going on, especially if stuff is going to move between transports — anything that might misinterpret stuff along the way sort of scares me a little. Which sort of leads me to a horrifying statement: I think you need a richer type system. If we want to say we want timestamps, then we want them as a first-class type, and then the encoding onto the transport has to propagate the type specification with it. I mean, I don't really see any other way around that. And with that, I'll yield the time. Okay, Topini, you might have to be last on this one. Yeah, yeah, really fast. I do agree that things should be dealt with at the binding level. It shouldn't just convert or infer everything — that sounds absolutely horrible. But I do think that the transport should be able to describe its richer type system, and it should also describe how it will map back those fields on the consumer side. And I will write that as a proposal in the PR. Okay. The fun thing is that AMQP probably landed in the place where it is because of these things, because of these concerns — that's why it has its own type system. All right. Okay, with that, I think we're gonna have to stop here. And I just remembered, I forgot to search for that email from James Roper that I mentioned at the beginning of the call. So I just pasted it into the chat, in case you guys are interested in that. But what I'd like to do first is a final roll call, if I can find my mouse. Hold on a minute here. So please, yeah, guys, add your comments to that PR, because I'd like to see if we can get that one resolved at some point next week. It'd be really, really nice. I heard Jim. Doug, are you on the call? Doug? What about Kathy? Oh, okay. Kathy? Yes, I'm here. Okay, I heard Evan. Victor, are you there? Victor? Victor is here, I see. Oh, okay, got you. Thank you. Jude, are you there? I'm here. Excellent. And Vlad? I'm here, hey. And Vladimir? I'm here. Thank you. Anybody else?
Okay. Thank you guys very much. I apologize for being one minute over. If you have anything to do with the demo or KubeCon presentations, please stick on the call for the next one. I appreciate that. And thank you guys very much. Everybody else is free to leave. Thank you. Thank you, everybody. We could just prefix or suffix the strings. But I'm not quite — Well, that's what, while we were talking about time, I was gonna ask you what your opinion was of Jim's idea, this kind of a thing. Which kind of a thing? The one I'm highlighting on the screen right now. Basically, put the type into the name somehow. Yeah, that's — I don't like that. But I think there are these type modifiers: C# has them, and I think Java has them, C has them, where you can go and modify a string by having a prefix for it. And you say, this is a long string, and then that's all Unicode. Or you leave it a normal string, and then it's all regular bytes. So I can imagine that, if we really say we want to have our own — and I think having our own type system is something that's difficult to avoid — but if we go and say, prefix a timestamp string with a T, that's one idea. But that idea is about 30 seconds old. So — But I want to understand: what's the difference, though, between prefixing the value with a T versus putting the T in the name? What's the difference there in your mind? That you can still have variants. Oh, sorry, what do you mean by variants? Meaning, if you have your own custom properties, and custom properties may vary by type — because you have a dynamic language of sorts, and you're just passing messages around that are being created, and once you have a date, and then with the same — like you have a key, you have two attributes, right? One is called key, the other one is called value. Or whatever, right?
And you think it's a good idea to go and write an application that way, to go and include the key and the value in your event, because they're referring to whatever that is. With the prefix, you would still have the opportunity to go and put the value type indicator inside the value itself. It's basically the same as the constructor bit or byte in several encodings. Like, I think AMQP does that, and I think protobuf does that too. What is it called? I haven't looked at CBOR, but I believe CBOR also has a constructor bit or a constructor byte, where you basically have a leading byte which indicates the type of whatever follows. And that would be a variation of that. Yeah, I think if we headed down that path, you'd probably run into pretty much the exact same problem that we ran into here, which is a leading byte kind of sounds almost like the quoting of a string, which is what we're debating. That's true. Yeah. So, okay. But that's the thing: if we want to distinguish types, if we want to have type fidelity, then we'll have to go and be able to tell a string from a date. Yes. That's why I kind of like Jim's proposal. It's ugly in some ways, but it's also elegant in some ways to me. It's like dancing between the two worlds. But anyway. Okay. Where am I going here? To the planning doc. Okay. So I'm going to guess as to which one will take less time, and let's do that one first. Presentations. As I mentioned on the previous call, I don't think anybody's completely done with their presentations yet. Am I correct in assuming that it's just a matter of people finding time? Is there any issue that we need to discuss relative to it? Or is it just that people need to sit down and do it? For me, it's just merging the templates, et cetera. So I'm not concerned from a content perspective at all that much. Okay. Have you talked to Vlad at all? No, we haven't. Okay.
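Clemens's value-prefix idea from the discussion above — a leading type tag on the value itself, akin to AMQP's or CBOR's constructor byte — could look like this. The tag letters here are invented for illustration (the call itself called the idea "about 30 seconds old"):

```go
package main

// tag prefixes a value with a single type byte, e.g. 'T' = timestamp,
// 'S' = string, 'I' = integer. The letters are hypothetical, not from
// any spec.
func tag(kind byte, value string) string {
	return string(kind) + value
}

// untag splits the leading type byte back off; an empty value is
// treated as an untyped (string) value here, arbitrarily.
func untag(v string) (byte, string) {
	if len(v) == 0 {
		return 'S', ""
	}
	return v[0], v[1:]
}
```

So a timestamp attribute would travel as `T2019-04-25T16:00:00Z`, preserving "variants" — two custom properties of different types under the same naming scheme — at the cost of a leading byte that, as noted on the call, starts to look a lot like quoting again.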
So I want to make sure you guys know to talk to one another at some point — make sure you blend into each other's talks or something like that, since you are talking together. My plan was, since I'm going to be saying that, hey, CloudEvents are here, boring, use them — I was going to say that — and here's Clemens to tell you how not to use them. That's not a bad idea. I'm going to share an edit link with you on my OneDrive, and then you can go and open that PowerPoint document and paste your stuff in. And then we need to go and put the right template in. I think, Doug, didn't you send — I think he sent me the original PowerPoint thing. Yeah. So I have to dig it up from my inbox, and then I'll format that and put it together, and I'll give Vlad the link in email or on Slack. Okay. If you can't find the template, let me know and I'll resend it. And when you do get that link, if you can update the presentation link here, I'd appreciate it, so that other people can take a look at it when you get a chance. Yep. Okay. So, Greg or Clemens — there's no rush with the link. I still have to get my presentation reviewed by the CSO, so that's going to take a while. I do hope to have it done by Thursday, or Tuesday at the earliest, but I don't wanna block myself on that. Okay. Good. Yeah. So I already have that stored, and I'll send you that link. Perfect. Cool. All right. So Klaus and Scott, now that Scott's back on the call — are you guys okay? Is there any issue you guys are working through, or is it just a matter of finding time to fill in the slide template? I have copied my slides into the presentation you linked. Okay. Cool. So there are two slides where I might ask for feedback next week. One — I call it guiding principles of CloudEvents — and I think one is about the new stuff in zero-three. Sounds good. Okay. My plan was more of a live demo, like in an IDE, so I can share a script of what I plan to do. Okay. Yeah.
I mean, even if it's not a formal slide, I think outlining what you're gonna do in terms of that demo would be really good, just so other people in the working group can review it and comment. Yeah, I agree. Are you guys still planning on showing the airport demo? I assume so. I thought that was out of the intro. You mean in Klaus's section? What do you mean by out of the intro? I thought the intro session was not gonna have the demo. So where do you guys wanna do the demo then? In the 85-minute one? Well, we gotta do it somewhere. It's either the deep dive or the intro, one of the two. And I thought at one point we talked about doing the demo in the intro, because in the deep dive they thought they were gonna run out of time, and the intro was maybe a better place to showcase something more lightweight, like the airport demo. But it's up to you guys. Will you not have time to show the airport demo there? I mean, it's a fairly quick one. No, I think we would. Okay. Now, Doug, I know you were talking about having somebody from Heathrow or Akras — I can't remember which one — with a short little video talking about why this matters to them. Is that still in the cards? Doug, can you come off mute? Yes, they're working on it. Okay. How long do you think that is? I'm hoping it's like a couple of minutes kind of a thing. Two minutes is what I told them. Okay. So Scott and Klaus, do you guys think you could work in that short little video from them — a couple-of-minutes thing — around the airport demo? Yeah, we could do that at the end, and maybe host a Q&A as the airport demo is happening. Okay. Okay. That works. So which one of you guys will take the lead on the airport demo, just so I have a name to put here? I don't think Klaus has been much involved in that, so I can't — There you go, so Scott it is. Okay. And this will also include the video from the airport guys. Okay. Or not a demo — a video. Okay.
Cool. Okay. For the serverless working group session of 85 minutes — Scott, I believe you were gonna do a quick intro about the state of serverless. Is that still on your plate? I really haven't been able to do anything with this document. It seems like not much has changed; there are a couple of new vendors in the space, but the line has not moved. I would tend to agree. Maybe what we can do is shorten this down to like a one-slide kind of a thing — whether it's you or somebody else, we can figure that out later — but mainly jump into the other stuff. For example, I think Christof wanted to talk about some of his stuff, and then Jude had something he wanted to talk about. So maybe we could do just a quick one-slide or one-minute thing about the state of serverless. And this could just be more the state of our working group than anything else. But keep in mind that we don't necessarily have an audience that's coming back year over year over year. You will have people who are new to this, and so I think not every talk needs to be a delta. True — we could make this a highlight of the working group itself. We could show how we got here. Yeah, well, it could also be like a five-minute summary of the talk we're giving at the serverless summit. Yeah, that's fine too. But what I was thinking then is basically keeping a short overview, then getting into these three topics from Christof and Jude, then getting into the Q&A and poking the community to get their feedback on where they want to go next. Does that still sound okay with you guys? So my topic for serverless has changed a bit. I'm going to focus a bit more on functions, rather than just, like, when serverless functions are appropriate, et cetera. I think that's fine. Yeah, because serverless as a concept is too large to give an opinion on — like a blanket opinion. Okay, functions are very — I don't know, but yeah. Okay, well, you had asked for some time to talk about this stuff.
So it's kind of up to you to decide what you want to talk about here. I'm looking at Christof's stuff and your stuff as a combination of things. One is a little bit educational, based upon the topic you want to share with the group, but it's also a lead-in to get some thought processes going inside the audience members, so they have something to talk about during the birds-of-a-feather session, right? Whether it's direct questions about what you talked about, or whether it just sparks some ideas on what in the serverless or function space they want us to look at, is kind of up to them. So I think almost anything you talk about there is, in my mind, fine. Now, I do have a presentation here that's completely empty, as you can see. What I'd like to do is, later today, I'll just create placeholder sections for you and Christof, and I guess for the state of our working group kind of thing. And I'll send a note when that's ready. But then I'd like for you guys, if nothing else, to think about a list of leading questions for the BOF session that we can ask the community, because obviously the big one is: where would you like us to go? But that's just one question. So think about other types of questions that you'd like to use as prompts to get a discussion going with the community members, okay? And then we're gonna add those questions to the slide deck. Yeah, okay. And that's for everybody on the call, not just Jude, obviously. Okay, now in terms of the presentation for the serverless summit — the presentation is there. There are some slides, but I don't think it's complete yet. Let's see, Javi is not on the call. Javi and I had some brief back and forth, but I'm gonna poke him to make sure his stuff is done next week as well.
So what I'd like to do is remind everybody to try to get your stuff done by Wednesday of next week, so that by Thursday we can tell everybody in the working group to look at the slides, review them, and get their comments in, because obviously KubeCon is the week after, okay? So shoot for Wednesday at the latest to get your slides into the decks. All right, anything else relative to presentations we need to talk about? Does everybody feel comfortable with what they're working on and the flows for the presentations they're involved in? Well, what about the serverless days thing? Or sorry, the day zero? Have you already talked about that? Did I miss it? Even the one I was just talking about? Yes, okay. Yes. I understand, Scott. You don't listen to me. I get that. I swing heavily between too much and not enough coffee. I get that. Yeah, okay. I will not be there for the serverless day, for the day zero, because I will only be arriving in the afternoon of that day, on Monday. That's fine. It's only — what is this? Right now I'm scheduled for 45 minutes, but Alex has actually asked for a little bit of a time shift, so I think we're gonna have less than 45 minutes. I think it's gonna be closer to like 35, so I don't expect it to be a whole lot. This is just a summary of everything you guys already know: what have we done in serverless. Yeah. Not a big deal. All right. Then three hours of Knative in the afternoon. Yeah, that's actually exciting. All right, last chance: anything relative to the summit or KubeCon itself or presentations? All right, let's switch over then to the demo. Clemens, since you're on the call, is Microsoft gonna participate? I will try my best to still do something. Okay. And I will be the one coding, because the team is loaded. Okay. All right, in that case, anybody else on the call have some questions or comments? I know, Olin, I think you said you might have had a couple.
Yeah, so I haven't really been involved much up till now. I was able to join this call only once in the past, so I was just hoping for kind of a 50,000-foot quick overview of where the integration points are and what that looks like. Okay. Tell you what — since that's a very basic question (it's a good one, but it's a very basic question), we can do it offline too. Yeah, what I was gonna say is, I have a feeling the call can end really, really quickly, so then you and I can stay online and I can walk you through it. How's that? Great. Okay, because I don't want to take up time when there are people who already know that stuff. So aside from the basic question Olin was asking, are there other issues, questions, or concerns people have about the demo itself, or is this a matter of finding time to code it up or test it? Scott, are you okay? I'm sorry, my headphones cut out. What was the question? Are you okay with the demo, or is there anything you wanna discuss? So I've got it working with the new ID types and all this stuff. I do suffer from a slight cold start — it crashes and comes back. That's right. How much time do you need? I know you asked me to increase that, and I apologize, I forgot to increase it. How much time do you need? So I need about 42 seconds, because of — Oh my God. Geez. 42 seconds, that's a lot. It's compounded cold starts. Yeah. Are you booting the entire universe? Yeah. So let me ask — it's four or five layers of functions that react to things. So that's what I'm paying. So let me ask you this. So Scott, for the KubeCon demo purposes itself, would it pain you too much if you set min scale to one? I mean, I could do that, but it also recovers, so it's fine. Well, it does, but it makes the demo look kind of funky, right? Because your thing goes away. Yeah, I go out of business for one passenger in that whole line, and then I come back and then we're good. Yeah, well, it depends, right? It depends on if that's what you want to showcase.
Don't lift and shift. Okay, well, let's talk about this one, 'cause 45 seconds is an awful long time. I'm glad we have a community of no judgment and safe space. It does — well, it's just funny, 'cause I've given Knative demos at meetups and stuff like that, and there's a period of time when I have to wait like 25 seconds for the build to happen. And that is the longest 25 seconds of my life, because not only do I have to ramble, but then I have to sit there and watch this thing and pray to God that it actually works. 'Cause you know, after 20 seconds, you never know for sure what's going on unless you actually have some monitor up. And it's a very scary thing. And 45 seconds is even worse. Yeah, that could change in 0.6, 'cause we are removing Istio. So the cold start latency is gonna move down to four seconds, and so in aggregate, like 25 seconds. Yeah, that's better. Yeah, okay. Well, let's talk about this offline, 'cause like I said, I personally don't think it'd be horrible if we fudged a little, just for the live demo, and set min scale to one. But we can talk about that. So on an unrelated topic, I think it might be interesting to have kind of a picture of what transports are in operation in the demo from all the people that are participating. Like, the central hub is all AMQP, right? But all of my stuff actually operates off of HTTP and sends back out AMQP. Wait a minute. What are you envisioning relative to changing the demo? That would be hard today. Right — I write my function, and because I'm using the CloudEvents SDK, I only have to switch out the transport when I'm testing. So I can run it locally and point it at a transport that's using AMQP bindings, and I can switch it out in production to the HTTP transport, and it still works the exact same way. And I don't change code; I just change the setup of how the function starts up. So is that something that you'd like to incorporate into the demo?
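For reference, pinning min scale to one, as suggested above, is an annotation on the Knative Serving revision template. This is a sketch — the service name is hypothetical and the exact API version and spec shape vary across Knative releases, so check it against the version the demo runs on:

```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: airport-demo   # hypothetical service name
spec:
  template:
    metadata:
      annotations:
        # Keep at least one pod warm so the live demo never cold-starts.
        autoscaling.knative.dev/minScale: "1"
```

This trades a little always-on cost for never showing the audience a 42-second gap.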
Because that's a little bit more like something that might be worthy of a discussion point as you talk about the demo. It could be, yeah. Because what I was thinking is, maybe what you could do is put together a slide or two, with maybe one slide that just says what the demo is about. Do the demo, and then as a follow-up after the demo, talk about, in one or two slides — whatever you want — the implementation aspects like you're referring to here. Yeah. But I mean, if every integrator also has their own transport that they're picking — like if someone comes in bridging AMQP to Kafka, operates on Kafka, and then sends back out to the bus — that's interesting. Like, it's non-trivial to do that today. Yeah. I think if you were to put together a slide or two with some of your thoughts around what you do, that might cause other people who are doing the demo to think about things that they may want to add to the slide as well. I just noticed something. Yeah, uh-oh. Yes — which is a little sad, because RabbitMQ is doing AMQP, but it's not doing the right version of AMQP. Yeah. Do we know that? Yes, I told them. Okay. Oh, wait a minute. Sorry, 1.0. Did we change it to 1.0? Isn't that okay now? Yeah. Oh yeah, okay. If you're using 1.0, then that's good. Okay. Don't freak us out, Clemens. But we don't quote our strings, so we won't work with Clemens. Whoops. No, no, no — but that is the thing: our AMQP binding is not assuming 0.9, it's assuming 1.0. Okay. You should be good. There's this adapter for — Yes, I understand that, but that's not the thing that people use by default. So great that we do that. Yeah. Klaus got us set straight on that about a week or two ago. Fantastic. That's when I was away. Yeah. It just caught my eye. I'm like, oh God — okay, but okay. Yeah, it caused a little bit of pain for us, but we got through it. Good. Yeah. All right. Any other questions, comments?
Actually, so Klaus, are you going to do an implementation? Yeah, we'll try to. So we have our functions implementation and the AMQP trigger anyway. So that's what I will try to use. Cool. Yeah. Cause I would love to have used the AMQP Knative event source. Is that ready? And are you using that, Scott? I wrote my own. Okay. Cause I kind of- I would like to see one. Cause I kind of did my own thing under the covers as well, just cause it was easiest. So I have a repository that implemented an AMQP source and sink. So you can host it, and then you can bridge events onto the broker, operate on stuff, and then send it back out on the sink. The sink takes in events from inside the cluster and pushes them back out to a predetermined transport. Is this something you're going to contribute back to Knative at some point? Yeah. So I have a bigger plan using the Go SDK and all the transports. I wrote something I call a bridge, which bridges two transports together. And so it's two CloudEvents clients listening on two separate transports and then just shuffling data between them. And it might actually be a pretty interesting first-class Knative component that does those bridges. I would be very excited about that one. Sounds interesting. And then, you know, more and more transports becomes very important for the CloudEvents SDK for Golang, because now you get to support every permutation of every two transports, which is really cool. Sounds interesting. Cool. Okay. I can share a link to that code. It's open. It's not in Knative yet. Yeah, please do. Okay. Any other questions, comments, concerns? Comments. Comments. As the AMQP TC co-chair, I am really, really, really excited that all of a sudden, without me pushing all that hard, AMQP becomes popular here. I like that a lot. Who said it was popular? I wouldn't say popular.
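The "bridge" Scott describes, two clients bound to different transports shuffling events between them, can be sketched in shape like this. This mirrors only the description above; the real implementation lives in Scott's (not-yet-Knative) repository, and all names here are illustrative.

```python
from collections import deque

class Endpoint:
    """Stand-in for a CloudEvents client bound to one transport."""
    def __init__(self):
        self.incoming = deque()  # events arriving on this transport
        self.sent = []           # events this client has sent out

    def send(self, event: dict) -> None:
        self.sent.append(event)

def bridge_pump(a: Endpoint, b: Endpoint) -> None:
    """One pass of the bridge: drain each side's incoming events
    onto the other side, in arrival order."""
    while a.incoming:
        b.send(a.incoming.popleft())
    while b.incoming:
        a.send(b.incoming.popleft())
```

In a real bridge each `Endpoint` would be, say, an AMQP client on one side and an HTTP client on the other, and the pump would run continuously rather than as a single pass.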
We're using it because it's something that you're accommodating, which I really like. Thank you very much. Oh, who do we get to blame? I can't remember. Who was it that suggested RabbitMQ? Was it you, Jude, or Vlad? I can't remember who it was. I think it was me. Yeah. I think it was you. So we get to either blame you or thank you. One of the two. You should be a big fan of ActiveMQ Artemis, because that's actually a good broker. I've never used it, but I've used Kafka. What? It's actually, no. Artemis, yeah. Probably, yeah, probably ActiveMQ, yeah. Yeah, that's like, when you're looking for an open source broker to use with AMQP, ActiveMQ Artemis is the choice. It's really good. I think that's right. Cool. I used ActiveMQ, is that wrong? No, that's just the older version and the older code base. The new Artemis is based off of HornetQ, which was an acquisition by Red Hat. And that's just faster and more modern, and that's where most of the work goes. But yeah, ActiveMQ, from an interface perspective, it's the same thing. So one thing that I forgot we should probably talk about is, I know Scott and I are both implementing all the various roles, but in reality, for the demo itself, I, well, I guess it depends how many people we actually get signed up. I was assuming that you don't want everybody doing everything, otherwise it's gonna get very, very crowded. I guess we can wait. We don't have to decide right now, but at some point we may need to assign people different roles just so the screen isn't too busy. Because we were kind of assuming- Assign me something. Well- Buy a slot. Well, we can make, I was thinking either a supplier or a carrier. We can make Microsoft a carrier. But, well, okay. Why don't you start off doing a carrier? Then we'll hold off on the rest, because I think it depends on how many other people join. Okay.
But at least that will give you at least one to start with, because that's the bare minimum. Okay. Okay. Okay, any other questions or comments? All right, cool. Thank you guys. In that case, everybody's free to go except for Owen, unless anybody wants to hang on the call and talk to Owen about his questions. All right. All right, thanks guys. Talk to you next week. Bye. All right, Owen. Would you like me to stay, Doug? That's completely up to you. You can correct me where I get things wrong if you want to. Oh, okay. All right, so Owen. How much of the demo do you know, or should I start with you know nothing? Like I said, I was able to join this call once before. So I know a bit, but. Okay, well, I'll start from the very beginning. It's not huge. So basically, oh, there we go. Scott's doing something. So basically the point of the demo is simulating an airport environment where there's retailers, that's these guys down here, and they're gonna sell coffee to customers, and there's the wonderful cold start kicking them out. So we have retailers down here selling coffee. We have suppliers up here who are offering up coffee cups. So as the retailers run out, they will send out an event saying, hey, I need more coffee cups. Suppliers will say, okay, I have a coffee cup shipment ready to get picked up. The carriers, the little trucks over here, will then go to the various suppliers, carry it to the retailers, and then go back home. Okay, so that's the basic flow. The audience members are supposed to participate, because what we're gonna do is tell them to go to sourcedog.com/airport, and they should see on their phone this little thing down here that you see in the corner. So for example, they'll be able to pick a coffee shop, so IBM coffee, and that little guy will walk up to it, and they can hit this little jump button and he'll jump, and stuff like that.
Then they can order their coffee, so small in this case, and then they'll walk away when they have it. If the retailer runs out of coffee, he'll ask for more as I said, but then it'll also put up that little bubble, small appears here, medium appears here, large appears here, indicating that he's out of coffee cups. So that gives you an indication of something that's going on. But that's the basic flow from a visual perspective. Now, the way it works under the covers is, everything that goes on here is event-driven. So we have a single RabbitMQ deployment that's receiving and sending out events for everything. And what you see on the right-hand side here are basically the CloudEvents that flow back and forth through the system. So let's actually pick one that might be more interesting. So here, let's take this case. When somebody places an order, this is the CloudEvent that gets sent. In this particular case, the passengers typically send in the events. We know it's an order because it's order released, and it's gonna be going to the retailer called retailer.ibmr, which just happens to be the first retailer down here on the left-hand side. And he's gonna process it. When he's done, he'll send out an event saying the coffee's been delivered, and the controller, which is basically the graphical stuff here, watches for all the events and makes things move on the screen accordingly, okay? Got it. So the event, okay, the demo itself. So basically, if you look at the document itself, in case you don't have it, here's a link. I do. Oh, you do, okay, good. So basically, look at the event flow in this section. Just walk through it and you'll see all the various events that can flow through the system. And so what you'll see is, initially, all the various participants will register. So let's skip the reset here for a sec. Let's jump down here to register, right? So the first thing is, for example, a retailer registers.
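To make the walkthrough above concrete, here is what an order event might look like as a CloudEvents 1.0 JSON envelope, plus the kind of check a retailer does to decide whether the event is meant for it. The envelope attributes (`specversion`, `id`, `source`, `type`) are the real spec-defined ones; the demo-specific type name and data fields are reconstructed from this discussion and may not match the demo's actual events.

```python
import json, uuid

def make_order_event(retailer: str, size: str) -> str:
    """Build a CloudEvents 1.0 JSON envelope for a passenger order.
    The type name and data layout are hypothetical."""
    event = {
        "specversion": "1.0",
        "id": str(uuid.uuid4()),
        "source": "/demo/passenger",
        "type": "demo.order.released",   # hypothetical type name
        "data": {"retailer": retailer, "size": size},
    }
    return json.dumps(event)

def is_for_me(raw: str, my_system_name: str) -> bool:
    """A retailer looks for its own system name inside the event
    to know whether it should react."""
    event = json.loads(raw)
    return event["data"].get("retailer") == my_system_name
```

So retailer.ibmr would react to `make_order_event("retailer.ibmr", "small")` and ignore events addressed to any other retailer.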
You can see what the event looks like here. It's not too exciting. You have a system name and an organization. The system name is sort of like the machine-readable name. That's the name that's gonna appear in all the events, and that's the thing you're gonna look for to know whether this event is basically for you. This is the human-readable name, and that really only appears... it doesn't actually appear. Oh, I know, it appears here. But the real name for retailers and trucks, I don't think they ever appear anywhere. The logo does, though. So you get a little icon here and an icon here. Those will appear because of this logo URL right here. So once you as a retailer, supplier, or carrier register, what's then gonna happen is, at some point later on, the controller will reallocate who's doing what. That way everybody gets a fair shake in the demo itself. So when you register, you don't say, I do small cups for this particular retailer; the controller will tell you what you're supposed to do. And so it's very important that, as these events that we've talked about here flow around, you react to them appropriately and immediately, because as new participants come on board, we don't want two suppliers trying to supply cups for the same retailer. That's gonna cause the system to get into a really weird state. So what will happen is, in this particular case, for a supplier, you'll get a data section that says, for this particular retailer, here are the coffee cup sizes you support, or, yeah, that you'll send out boxes for. You will get the complete list of all retailers that you support. So this chunk right here could technically be duplicated or replicated based upon how many retailers you're gonna be supplying. So you can assume that anytime you get this event, it'll be the complete list.
It's not like you're gonna get three events and have to sort of merge them together, because otherwise you can't tell whether it's a merging together or a brand new start of a list. So we just said, nope, you're gonna get one event with everything you're supposed to be working on. So if you get a second event of that type, wipe out your memory of what's going on and start fresh. Makes sense? Okay, so here's the message for the supplier. Here's the message for a carrier, same type of thing. You'll get an array of to and from locations, and that's who you're gonna be supplying for. Let's see. When everything is done, the controller sends out a message saying that the passenger has placed an order, and this is actually destined for a particular retailer. So this is one of those cases where it's an event that is technically targeted. So it's not really eventing per se, it's more messaging, but such is life. We needed some way for the controller to say this passenger ordered something. But then from then on, I think most everything else is kind of eventing for the most part. And you can kind of see the flow here. So the passenger placed an order, the retailer delivered the coffee cup. So this one is actually technically for the controller more than anything else, so we can tell the little passenger guy to walk away. The retailer can also indicate his new inventory level. Technically, these are ignored except when it's zero, because when it's zero, that's when the controller will put up that little bubble up here to show, for example, that he's out of small coffee cups. Let's see if I can make him do it. Yeah, there we go. So when an event is sent with an inventory level of zero, that's how the controller knows that he's out of coffee, and that's when the bubble appears.
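The "complete list, not a delta" rule described above means every assignment event from the controller replaces all previous state rather than merging into it. A minimal sketch of that handling, with illustrative field names, looks like this:

```python
class Supplier:
    """Demo participant that tracks which retailers it supplies."""
    def __init__(self):
        self.assignments = {}  # retailer system name -> list of cup sizes

    def on_assignment(self, event_data: dict) -> None:
        # Wipe everything and rebuild from this one event: the
        # controller always sends the complete list, never a delta.
        self.assignments = {
            entry["retailer"]: entry["sizes"]
            for entry in event_data["retailers"]
        }
```

The key point is that `on_assignment` assigns a brand-new dict rather than calling anything like `update()`, so a second assignment event fully replaces the first.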
The big one that you wanna watch for, or the quick one you wanna do if you're gonna end up doing a retailer, is this one, order released. So that's your way of telling the supplier you're out of coffee, and in particular, you're out of small cups. And notice you don't say which supplier's gonna be supplying you. That's for the supplier to pick up the events and recognize that, oh, this retailer's out of small cups, that's what the controller told me to do, so I'm gonna react to it. So the way the supplier will react to it is, he's gonna send out a notification saying he has a box that needs to get picked up. So a potential action status means I need something to get picked up and shipped from the IBM-S supplier to the IBM-R retailer, and he's happily doing small coffee cups, even though I don't think that's really used any place. So the supplier sends out this one, and the transport, or the carrier, is gonna react to it. The carrier is gonna notice that it's the from and to locations that he supports, and he's gonna react to that. And he's gonna send out that notification saying, okay, the action status is now active. So he's basically saying, yep, I'm gonna pick up that box. That's the signal for the controller to change the screen so that he sends the little truck on its way to the supplier. So the truck will go from here up to the supplier, down to the retailer, and then back home when he's done. Okay, so that's the important message for the controller to know what's going on. So then eventually the controller is gonna tell the UI that his truck has actually moved to the retailer. Because technically, in the real world, right, the truck will tell the carrier when he's arrived. But because the UI is in control of things, we've made the controller do this. So the controller is basically telling the carrier, yep, your truck has arrived. So now the carrier can send a notification saying, action completed, okay.
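The shipment status progression walked through above goes potential (box waiting for pickup), then active (carrier accepted it), then completed (delivered). The status names come from this walkthrough; the exact spellings in the demo's events may differ. A sketch of a carrier enforcing that order:

```python
# Allowed transitions for a shipment's action status (names from the
# walkthrough above; treat them as illustrative).
VALID_NEXT = {
    "potential": "active",     # supplier announced a box; carrier accepts
    "active": "completed",     # controller said the truck arrived
}

def advance(status: str) -> str:
    """Move a shipment to its next state, rejecting out-of-order jumps."""
    if status not in VALID_NEXT:
        raise ValueError(f"cannot advance from {status!r}")
    return VALID_NEXT[status]
```

Rejecting out-of-order transitions matters here because, as noted earlier, a participant reacting late or twice can drive the shared demo state into a weird place.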
I don't know if anything really happens here, other than it's just sort of letting the system know that everything's there. Oh, I guess that's a notification for the retailer to know that the package has been delivered. So now the retailer can increase his inventory level to two. And so he sends that notification just so the world, whoever's watching, can know that he now has two cups and he's ready to go and can service more customers. The current convention we have right now, I think, is that everybody assumes that there's two cups per box, even though that's a very small number. If you increase it, you don't necessarily get trucks flying around the screen as often as we may want. Sure. But that's just obviously something we can easily change. Anyway, I ran a little long there, let me pause and see where you have questions. Okay, I think that all makes sense at a high level. Okay. So, as far as the different implementations all kind of working together, is that all just via RabbitMQ? Like everybody's just... Yeah. Yeah, basically everybody works and communicates through RabbitMQ. You determine what you do based upon the RabbitMQ message you get and based upon the triggers you see. So for example, you know, here's the, you know, that's not a good one. Yeah, so here as a retailer, these are the things you'd look for inside the actual event to know whether it's meant for you or not, and whether you need to react to it. And yeah, and then when you're done, if you need to notify somebody of something going on, yep, you send another RabbitMQ message back up. Everything goes through RabbitMQ. So you technically don't even need an external endpoint, as long as you can reach the RabbitMQ server. Got it. Okay, so if I'm implementing this, you know, as a serverless thing on my end, I need to be able to trigger my functions and stuff from RabbitMQ messages.
Yeah, you'll need something that pulls events off of the RabbitMQ queue, if that's the right word, and passes them on to your function, yes. Cool, that makes sense. And then as far as testing, is there any kind of recommended story for that, or just use the version that's up? No, other than just connect up to it and see what blows up. The one thing I forgot to mention is, if you get a reset message, which is not targeted to any one person, just an event of type reset, if you see that, completely reset your environment. You should also actually see a disconnect message, okay? But the point of the reset message is, people have been experimenting with stuff, and they may, for example, through their own testing, add other roles, right? Add other retailers, suppliers, or whatever. And there were times when we got into a really bad state because people sent messages when they shouldn't have, okay? And we needed a way to tell everybody, look, not blaming anybody, but something went wrong, reset everything. And so the best thing for you to do when you see the reset message or a disconnect message is basically just reset your environment, forget about everything you know, and re-register. Actually, I'm sorry, I take that back. You should not respond to a reset. I mean, let me think about that. The reason I'm pausing is I'm trying to remember, because I know when the controller sees a reset, he will send out a disconnect to everybody. So it may be better if you do both. If you see a reset or you see a disconnect, you may wanna just reset everything either way. It doesn't do any harm to do it twice, right? So basically, at that point, if you see a reset or a disconnect, go ahead and forget about what you were supposed to be doing from the controller's perspective, and then go ahead and re-register with the system, and you'll get automatically re-added, and job assignments will get redistributed accordingly.
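The reset/disconnect advice above, wipe local state and re-register on either event type, can be sketched like this. Since resetting twice is harmless, reacting to both types is safe. The event type names are illustrative, not the demo's actual identifiers.

```python
class Participant:
    """Demo participant that handles controller reset/disconnect events."""
    def __init__(self, name: str):
        self.name = name
        self.registered = False
        self.assignments = []  # whatever the controller last told us to do

    def register(self) -> None:
        # In the real demo this would send a register event to RabbitMQ;
        # here we just record it.
        self.registered = True

    def on_event(self, event: dict) -> None:
        # Treat reset and disconnect identically: forget everything the
        # controller told us, then rejoin and wait for new assignments.
        if event.get("type") in ("demo.reset", "demo.disconnect"):
            self.assignments = []
            self.registered = False
            self.register()
```

Because the handler is idempotent, receiving a reset followed by the controller's broadcast disconnect just resets twice, which does no harm.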
I'll have to go back and double-check on the code, but I'm pretty sure if you react to both of them, you're okay. And is registering idempotent, effectively? I believe so. I'll double-check, here, actually, I can test it right now. Hold on, let me go back to my screen. Let me add another retailer and see what happens when I try to add him twice. So there's another retailer I just added, and let me add him again with the exact same name. Okay, so nothing happened. Yeah, so you can see, here's the retailer that I added, R1, and then if I click the first one, you can see I added him again, same R1. So the controller does do some checking on that. He won't put duplicates in there. So yes, it is idempotent. Cool, okay, yeah, so it seems like there definitely should be no problem with watching for both reset and disconnect then. Yeah, that should be okay, I think you're good. Let me go ahead and kill him and watch everything adjust. There you go, cool. All right, let's see. Okay, next question. I think I have a handle on the basics at least. I'm sure I'll have more questions as I kind of get into the actual implementation. Yeah, are you in the Slack channel? Yeah, I just got in it, so I can ask questions there. Okay, yeah, obviously please do not hesitate, because the last thing I want to happen is you feel like you're rushing at the last minute to get this stuff done. Most of us are hanging out there, so if you have a question, just paste your question into the Slack and we'll get an answer to you as soon as we can. Great, thank you so much, and thanks for taking the time to kind of walk me through all of this. Sure, not a problem. Okay, Doug, I see you're still on the call. Is there anything you want to mention or anything I forgot or anything you want to bring up? Nope. Sorry, nothing I can think of. I think the demo, the dashboard, looks great. When you start getting traffic going through there, you see how it all orchestrates, it's pretty cool.
Yeah, have you seen it when Scott runs his test client and he bombards the system with customers? Yeah, no, I saw it at the beginning. Yeah, it was really cool. Yeah, it's really kind of scary, because we started doing some really weird stuff for a while there, but I think it's a good test of the system, especially when we start having the audience involved, because if we do get 100 people signed up for this thing, I'm glad that he built that test client. It's good testing for us. Yeah, I'm excited to see how it all ends up. I mean, I hope we have a few more participating than were on this call, so that would be good. I expect we'll get at least, I think Clemens will come through. I think Jude will have something. I think Klaus will have something. And I expect at least one or two other people to show up at the last minute in a panic. So I think we'll be good. I think we'll get good participation. All right, anything else you guys want to talk about? Nothing for me. All right, cool. And with that, we're done. Thank you so much. Okay, bye.