Parker, Joe, are you there? Jeff Sherman? All right, we'll catch back up with you guys later. Let's go ahead and get started. Let's see, SDK, no, okay. Community time. Are there any community-related topics people would like to bring up? Things that aren't on the normal agenda. All right, not hearing any, moving forward. All right, SDK work. Actually, we had a meeting right before this one and there are two things that popped up that I think are worthy of mentioning here. One is, we started the discussion of having some sort of SDK event thingy at KubeCon EU, whether it's a demo, some sort of interop event, not quite sure yet, it's still sort of in its infancy, but I just wanted to bring it to your guys' attention that we are gonna be sharing some discussions on the SDK Slack channel about what we wanna do there. So if you guys have an SDK or if you're involved in one of the SDKs, please keep an eye out for that, because we don't want someone to miss out on an opportunity to participate in that. And then related to that, there's a whole discussion, largely driven by the work that's going on in the Go SDK, around versioning, in particular, what version of the spec should people support going forward? So for example, the Go SDK is currently trying to support all three versions of the spec and also trying to keep up with master as it goes along. So we're gonna start having discussions on the SDK Slack channel about what the expectations from the other SDKs are as well. And obviously that's probably gonna be very much related to what we choose to do at KubeCon EU around this event. So just FYI for these discussions that are gonna happen on the SDK Slack channel. Mark or Scott, is there anything else you guys wanted to add to that that I may have forgotten? Okay. No, I think that's good. Okay. Not hearing any. Moving forward then, demo proposal, Scott, would you like to bring the team up to speed on what's going on there? Yeah, so the other Doug, I forget it's...
It starts with an M, that's all I remember. Yeah, so there's interest in this real world example. In general, it's some sort of pipeline where there's producers and then there's like some sort of inner routing piece and then there's consumers of events. And my original demo proposal was the simulation of an e-commerce pipeline. And then Doug brought up, there's this partnership that he's been working with through other working groups, doing an example of, what is it called, a smart city for an airport. So potentially the demo could migrate to be the same sort of situation where there's event producers and consumers, but the focus is around how an airport views itself and maybe can react to events that are happening. You could still get that same e-commerce pipeline because there's merchants and shipments and other people buying stuff in an airport. And then the thing could grow to simulate what an airport is eventing at any given time. And then I think the overall goal is that each consumer and producer can use the events to do something interesting. And the airport itself could actually view all of its components and then make some sort of determination about the health of the airport. And so that would be like the long-term: if you continue working on the demo, this is what it does. And so each vendor would potentially act as an entity inside the airport producing events and consuming things. So if there's, we think it's interesting and hopefully everyone else thinks it's interesting enough to participate. And Doug, I noticed you have a microphone out. Is there anything you'd like to add to that? No, I think Scott did a great job of explaining it. And it sounds like you actually may be able to get some real-life customers involved in some fashion, like Heathrow Airport and stuff too, which I'm really, really excited about. I thought that was really cool.
Yeah, so there's an international airport council that has evolved an information system that extends from flight systems down to transportation, baggage claim, retail components; it's all under the, they call it ACRIS, A-C-R-I-S, A is for airport. And a few, I don't know, a dozen prominent international airports have implemented that model, Heathrow being kind of the lead on that. And then in the U.S., I think San Francisco and Orlando are behind it. So it's gaining momentum. And so I see this as something that would be mutually beneficial to connect CloudEvents to ACRIS, to create awareness of both projects. Yep, sounds really cool. And thank you very much for looping us in with that. This should be really exciting. All right, Doug, go ahead. This is Austin here. What's the project called, ACRIS? It's A-C-R-I-S. Doug has a link to a deck that describes the ACRIS airport ecosystem. He could probably get it to you. Yeah, I'll make a note of this or I'll forget. Yeah, I just joined. I joined kind of like at the very tail end of that. But in October of last year, I did a presentation and partnered with Accenture on this. We did a full demonstration of an event-driven airport. Everything from what happens when a flight is delayed, orchestrating all actors, elements within the airport, across the airline to the airport itself in an event-driven way using our event gateway project. And we spent a lot of work, did a lot of research on this. And we had a whole story and we kind of started with just this linear sequence of events and who would respond and how they'd respond. So we have a whole bunch of materials from that, if it's of interest to anyone. Yeah, absolutely. Yeah, that sounds really good. All right, cool. Thank you guys very much. I'm really excited to see how this thing's gonna shape up. And I'm sure it's gonna, the time of the day is gonna come faster than we expect. So we gotta keep moving on it. All right, KubeCon EU.
And so I think we're gonna have a phone call. I don't know when the next phone call is. But anyway, I don't think there's really much change there other than I did find out that the serverless practitioner summit is still moving forward. I believe they're planning on having a CFP type of setup. So expect to see some notes about that relatively soon. So I think they're basically looking at it as basically a mini summit kind of a thing with the keynotes and then breakout sessions and stuff like that, and people can submit proposals for talks and stuff. Assuming they do that, we probably then need to figure out whether we should have our serverless working group meeting as part of that or still under the normal KubeCon thing. I think it might be a little premature still to try to make that kind of decision, I guess. I don't know exactly what the format is for this other summit. But just thought of bringing you up to speed that they are still moving forward with that. It's not, the silence is not needed. It's gone away. All right. Let's see. Okay. So I did ask for a 35-minute intro and deep dive, two separate ones there, for KubeCon China and asked for an 80-minute-long session for serverless to match up with what we're doing at KubeCon EU. Just to let you guys know that I did request that. I believe the call for papers is open. So if you guys want to submit papers just in general around any topic at all, whether it's serverless or not, I'm just reminding you that the CFP period is open now. And with that, I believe we can start talking about PRs. So let me just do a quick, let's check here about one thing. Okay. No more votes. Okay. So Rachel's PR. So last week we started a vote, let me get to it. Oh, did I get, I think I already got you down. Okay. And so last week we took a vote on these four choices: nothing, just a list of the external specs, include the specs, and then include the specs with the TCK. Now, hopefully I did all the voting right.
I had to convert the numbers you guys gave me into the format that the tool actually wanted, meaning four columns and with a number in the column representing your preferred choice. So for example, for Google, their first choice was number three, which is the specs, their second choice was specs with the TCK and then third choice was just the list. So look at this list. This is in the attendance doc, if you guys want to take a look at it and verify my stuff, but I think it's right. Later on, I will take this and run it through the official tool, but I think if you just look at it, you'll see number one does pop up in column number two most often. And I will double check that through the tool, but I think it's pretty obvious that number two, which is just the list, is gonna win. So that will be the step going forward. I think the next step here is to ask Rachel, who's not on the call yet, to update her PR based upon that. Does that sound right to everybody in terms of process? Okay, so I will make a note of that in the meeting minutes. Okay, moving forward then. Christoph, how would you like to handle your PR? Do you want to talk about your new proposal first? Do you want to talk about where we are on the old one? How do you want to work that? Yeah, let's talk about the new one first. Okay. So the old one we discussed a couple of times. So basically the main goal is that we have a size for an event that we know everyone should accept. That means if I, as an event producer, am creating an event that is below the size, I can be certain that everyone that follows me accepts it unless they have really big reasons not to, because they are some super constrained edge device. But like in the cloud, let's say everyone should accept it after me. So the proposal I gave last time was to just take the event, serialize it as JSON, even if you don't send it with JSON on the wire.
And that was criticized because, yeah, well, if you're not sending JSON over the wire, having to do one serialization in JSON is sort of pointless. But I'd like to point out that the good part is we are requiring everyone to support JSON anyway, so everybody should have this implemented. But after that critique that we maybe shouldn't force a serialization in JSON, I tried a different way, or here I describe a different way, how to measure the size of any event, independent of any encoding and formatting and so on. So this is where it starts; these are a few more rules. So there's a limit on the number of attributes, on the attribute name length, we basically already have that in the attribute spec itself. And then individual limits on binaries, on strings, which are basically those that are unbounded. And then the last bullet point is a, well, a size limitation for all attributes together, and then we can discuss what the individual size would be. But the point here is that we measure that independently of the encoding. So for example, an integer would always be four bytes, but if you take an integer and encode it in JSON, it will obviously not be four bytes because it's represented differently. And if you take a binary, it will also not be exactly that because you have to base64-encode it, so in JSON the actual value will be bigger. So we don't really know what the size on the wire will be, but still everyone would have to support this. So we can figure out, if we still want, for JSON, to be below 64 kilobytes, then we kind of have to do the math. I think 40 kilobytes should be fair enough. Yeah, so to summarize, in comparison to the proposal I made last time, you don't have to go through the JSON serialization. That's the good part. The other good part is that the limits are much more fine-grained and we also then have a limit on the number of attributes, for example.
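As a rough illustration of the encoding-independent measurement just described — the per-type sizes, attribute caps, and the 40-kilobyte budget here are assumptions pulled from the discussion, not normative spec values — a sketch might look like:

```python
import base64

# Illustrative limits taken from the discussion; none of these
# are normative spec values.
MAX_ATTRIBUTES = 20          # assumed cap on the number of attributes
MAX_NAME_LENGTH = 20         # assumed cap on attribute name length
MAX_TOTAL_BYTES = 40 * 1024  # assumed overall budget (40 KB)

def value_size(value) -> int:
    """Measure a value independent of any wire encoding."""
    if isinstance(value, bool):
        return 1
    if isinstance(value, int):
        return 4                           # an integer always counts as 4 bytes
    if isinstance(value, str):
        return len(value.encode("utf-8"))  # UTF-8 byte length
    if isinstance(value, (bytes, bytearray)):
        return len(value)                  # raw binary length, no base64
    raise TypeError(f"unsupported attribute type: {type(value)!r}")

def within_limits(event: dict) -> bool:
    """Check an event against the assumed rules above."""
    if len(event) > MAX_ATTRIBUTES:
        return False
    if any(len(name) > MAX_NAME_LENGTH for name in event):
        return False
    total = sum(len(name.encode("utf-8")) + value_size(value)
                for name, value in event.items())
    return total <= MAX_TOTAL_BYTES

# The base64 point from the discussion: the same binary value occupies
# more space once base64-encoded into JSON, so the measured size is
# deliberately not the size on the wire.
payload = bytes(3000)
assert len(base64.b64encode(payload)) == 4000  # grows by a factor of 4/3
```

The last two lines show why the measured size and the on-wire size diverge: a 3000-byte binary attribute becomes 4000 bytes once base64-encoded for a JSON rendering, which is exactly the math behind picking a 40 KB budget to stay under 64 KB of JSON.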
The downside is that everyone who wants to, well, follow these rules, they have to implement something new, whereas they should already have the JSON serialization implemented. Yep, I'm happy to take questions. Thank you. Any questions? So I admire your grit with this, for one. Is there an expectation then that the SDK would have to sort of check this stuff and report on it? I understand the intent behind it. I'm just trying to understand how you would operationalize it if you need to, or is it just purely guidance? Well, I think there will be, let's say there will be some event producers who will simply always be below these limits because they only have a few attributes and their data will not be so big. So for them, it's really a non-concern. And then there are others who may wanna push it. And I think for them, it is really good that we have guidance that explicitly states what is allowed and what is not allowed. And the same thing is true for middleware. Like as a middleware, I may, or basically every messaging middleware has a limit somewhere. So I think we should settle on some things so that every middleware knows what it has to support at a minimum. So I think if I as a middleware now would say, okay, JSON up to 256 kilobytes, I just accept that, then you're fine, because that will definitely hold anything according to these rules. So is there an implication here? So you've made a comment that binary should not exceed 1K. Yeah, so if I'm sending an event with a binary payload, binary data through HTTP, that would sort of mean in binary transmission mode, my payload could never be more than 1K. I excluded the data attribute from that rule. Ah, okay. That's me not reading. Okay, I'm with you then. Okay, thank you. So the idea with limiting the other attributes is that as a middleware, you wanna parse all of them, or at least potentially parse and look at them. So if we limit it, we do basically what all HTTP servers do: they limit the size of the headers.
So this is sort of similar, so that as a middleware, you kind of have a bound on how much stuff you have to process. And just to give more evidence, sort of, that I haven't read this entirely: where is the payload size encapsulated in this? The last bullet point includes the data attribute. So basically even if you have no data value, you will have a few other mandatory attributes. But if they're really small, then you can have a larger data attribute. But if you put more stuff into the other attributes, you will have less size for your data. And that's in its encoded form or its unencoded form? So, well, that's basically for unencoded. For the string, we have to use some encoding, so this is UTF-8. And for the binary, there is no encoding. Then if you go to JSON, you have to base64-encode the binary. That means in JSON, it will be larger than it is as measured here. Okay. Again, thank you for hanging in there on this one. No worries. Yes, you've got a lot of patience. All right, any other questions or comments for Christoph? Nothing, interesting. Okay, so let me ask one then. So this is not my area of expertise in the slightest. So let me ask this question, because I definitely understand the desire for some sort of minimum size. I can understand that from an overall perspective. But relative to spelling out individual things, like what the size of each individual property is, how often is that a concern in people's experience versus the entire size being a problem? So I think there were comments made before because theoretically you could jam your entire event into context attributes. You had to sort of take those into account. You couldn't just rely on the data construct. Yeah, but I guess I'm kind of wondering, right? Obviously people are concerned about the size issue for a reason. And I'm trying to figure out where those size limitations come into play in practice. Is it because the transport can only handle 64K?
Or is it because the processing engine of these payloads only supports things like attribute names that are only 20 characters max, right? I'm trying to figure out where people's experiences are in this space, because I just don't have it. That's why I'm curious. I believe it was an interop issue. So I don't know if someone from AWS is on the line, but if AWS can support 300K events and Azure can only do 64, then there's an immediate impedance mismatch there. So I think it was just to get at the lowest common denominator. Christoph, was that where this all originated from? Basically, yes. So I think for me the point is that I'm an event producer. I produce events and at some point, right, someone can decide to send them to an AWS service, an Azure service, whatever. And my events should go through all of these services. But for that, as an event producer, I kind of have to anticipate what the size limitations of each service will be. And the only place I can look for guidance on that is the spec, I think. So yes, that is exactly the concern: if someone is currently on an AWS service, which supports like 256K, and then they switch over to Azure, which only has 64K, and I'm sending larger events, then sort of we don't have interop, because I cannot simply switch out one for the other. Okay, Mark, were you gonna say something? Oh, I'll just say that I think the maximum size is the most important part of this. But again, we want to encourage people to put the true event data into the data portion, not into the envelope side. So adding some of these limits there as best practices, I think, will help enforce that. Okay, right, anybody else wanna comment on this one? So okay, so what do you guys wanna do relative to moving forward here? I guess the first step is I'd like to get a sense from the group, because I don't know how to interpret silence: is it silence because you guys don't know, don't have an opinion, or like this one as opposed to the other one?
I vote more for this one than the other one. Okay, thank you, Jim, for speaking up, I appreciate that. Anybody else care to voice an opinion? I think I have a slight preference for not having to serialize JSON to measure the size. Okay, thank you, Evan. Anybody else? Yeah, so I also don't like having to serialize the JSON, but I also don't quite like how complicated this got and how much it limits individual fields instead of the total event size. I still haven't quite thought through this, but my position is quite the same as Clemens's, which, it's a shame he isn't here, but the fact that you want to measure a normative version of the event, it would help producers, but I don't see how much it will actually help in practice, because if you have an event that is the correct size according to these rules, is there still, with encodings and different formats, the chance that it doesn't fit into the Azure, what is it, message queue? Event Grid. Event Grid, because you're not measuring the size on the wire, you're measuring an intermediary format. Right, that's about it. If the problem is events fitting onto specific transports, I don't understand how measuring a normative format that doesn't actually measure the end size helps. Okay, so the next step for me is, once we've settled on something, I would basically prepare a TCK or whatever we want to call it with a couple of events that really test out these limits. So then that would be like five or 10 events, and then these become a TCK, and then people can really test their consumers, that they actually accept those events that are at the limits. And now we can take those against Azure Event Grid, and then we see if it supports it or not, basically. And everybody else who implements something can do the same for their thing. Okay. Does that make sense? Yeah, that makes sense. I'm still 50-50 on the basic idea, but that makes sense here.
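The TCK idea just described — a handful of events sitting exactly at the limits, which consumers and services like Event Grid can be tested against — could be sketched roughly as follows; the limit value and the measurement rule are illustrative assumptions from the proposal, not spec values:

```python
MAX_TOTAL_BYTES = 40 * 1024  # assumed overall budget from the proposal

def measure(event: dict) -> int:
    """Assumed measurement: UTF-8 bytes for names and string values,
    raw bytes for binary values."""
    total = 0
    for name, value in event.items():
        total += len(name.encode("utf-8"))
        if isinstance(value, (bytes, bytearray)):
            total += len(value)
        else:
            total += len(value.encode("utf-8"))
    return total

def boundary_event() -> dict:
    """Build a TCK-style test event that sits exactly at the
    assumed total-size limit. Attribute values are made up."""
    event = {
        "id": "x" * 36,
        "source": "/tck/limit-test",
        "type": "com.example.limit.test",
    }
    used = measure(event)
    # Fill every remaining byte of the budget with binary data.
    event["data"] = bytes(MAX_TOTAL_BYTES - used - len("data"))
    return event

event = boundary_event()
assert measure(event) == MAX_TOTAL_BYTES  # exactly at the limit
```

A consumer claiming conformance would then be expected to accept an event like this one, and feeding it to a given service shows whether that service actually supports the agreed limit.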
Right, anybody else care to speak up? Hi, this is Vladimir. I have one question regarding the limit for the indexes of maps of 20 characters. I feel the number sounds reasonable, I think, for most applications, but I think we may find applications that would like to have an index that is dependent on the particular application's use, and this thing could be machine-generated and it could possibly be longer. I'm just concerned that we may get in the same situation as in the old days, where the limit for an identifier was eight, which sounded like plenty in the Fortran days, but as time progressed, everybody suffered from it. Yeah, we passed this one PR from Clemens that basically limited it, or there is a SHOULD that attribute names should be 20 characters long max. I wasn't a big fan of that, but I took this number into account here. Why do I want to limit the name length here? Because if they become much longer, then the whole computation for the overall size becomes more complicated, because if they can be like thousands of characters long, then I also have to measure the attribute name length. I can also do that if that's preferred. I see, thanks. And basically all these values are up for discussion if you think it should be something else. I made them more or less up, but they're up for discussion. Okay, anybody else wanna ask a question or voice an opinion? Okay, so in terms of process, I think because this one just came in maybe over the last day or so, I think we may need to let it sit there, let people think about it some more, comment on it, and then Christoph, I think on next week's call, it might be your, I don't know what they wanna call it, your choice or your responsibility, one of the two, to decide which one you'd like to put forward for the group to consider, if you wanna do either one; obviously you could choose to close both of them if you want.
But at some point I think, since they're both your PRs, you should probably decide the next step forward in terms of what you want the group to decide yes or no on. Does that sound fair? All right, then I'll ask everyone to leave comments, and if there basically are no comments, then I think we're good for a vote next time. Okay. And then if the group prefers this one, which is what I gathered today, then let's go with this one. One quick question, what about batch processing? What about it? Do batched events count towards the total, or do I get, like, with my batch of five, do I get five times the total limit? You get five times the total limit, because the batch just takes the individual events and batches them together. But it comes in as one single payload. And so, I think you run into the same problem: if that transport can't support some certain size, the thing won't fit anymore. Yeah, but batches are transport defined, are they not? So that transport probably wouldn't support batching anyway. Or it may; like, looking at the Azure documentation, their limit is per event, but you can submit up to 512 events at the same time. Okay, thank you. There you go. So that raises an interesting question in my mind anyway. Is it clear from the text here that all these rules apply to an individual event? I think it does, because it says right here accept events, as opposed to transport thingy, whatever you wanna call it, transport payload. Is it clear that this is just about individual events and that the answer to Scott's question is already in the text, or do we need to add additional text to make it perfectly clear? Okay, I'm not hearing anyone speak up, so I guess it's okay the way it is. Okay, so hopefully people, please leave comments on there. Not hearing anything else. This may be the one put forward next week for people to say yes or no on, so be prepared for that. If you don't like it, add comments.
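The batching answer above — the per-event limit applies to each event individually, while the batch size is a transport matter, with Azure's documented cap of up to 512 events per publish cited as an example — could be expressed as a sketch; both numbers here are assumptions for illustration:

```python
MAX_EVENT_BYTES = 64 * 1024  # assumed per-event limit
MAX_BATCH_EVENTS = 512       # assumed transport-defined batching cap

def validate_batch(serialized_events: list) -> bool:
    """Each event gets the full per-event budget; only the event
    count is bounded at the batch level, per the discussion."""
    if len(serialized_events) > MAX_BATCH_EVENTS:
        return False
    return all(len(event) <= MAX_EVENT_BYTES for event in serialized_events)

# Five maximal events are fine: the batch effectively gets
# "five times the total limit".
batch = [bytes(MAX_EVENT_BYTES)] * 5
assert validate_batch(batch)

# But a single oversized event fails, even inside a small batch.
assert not validate_batch([bytes(MAX_EVENT_BYTES + 1)])
```

The design choice this encodes is the one stated on the call: the spec rules would bound individual events, and whether a 5-event payload fits on a given wire is left to the transport binding.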
All right, thank you, Christoph, very much for your patience, as Jim said. All right, did you... No, we can, I didn't have time to work on it this week, so let's keep it going. Okay, excellent. Let's see, Clemens is not here. So I think we talked about this one last time, the data encoding thing, but I don't think we made any progress. I think he may need to go back and address comments on that. But since he's not here, we can't talk about it. However, his next two, I don't wanna talk about them per se since he's not here; however, I do wanna draw people's attention to them. The first one is just adding an architectural section to the primer. Because the primer is non-normative, it doesn't technically impact the spec, but it does give insight into what people are thinking about how they should use the spec or what our design decisions were. So please take a look at that when you get a chance. I wanna make sure that it accurately represents the consensus of the group. Likewise with the SDK object model PR that he opened up: this is going to be making changes to a document in our repo itself. There's an SDK.md file, I believe. He, in this document, put together some pretty strong recommendations for what SDKs should and should not do relative to how things get exposed to the user and stuff like that. And I really think the SDK authors need to take a look at that to make sure they're okay with it. On the surface, some of them sound reasonable, but if you think about it, it actually does put quite a big requirement on implementation details. So I think people need to take a very close look at that. So please look at those two when you get a chance. It has been out there for at least a week or so, so it's nothing that new. Are there any, do people want to talk about these two at all, even though Clemens is not on the call? Okay, not hearing it.
So let's see, I don't believe Alan is on the call, but I did wanna bring this one up for people's attention to make sure they look at it, because it's been at the bottom of the list for at least a week or two now. So Alan Conway, I believe, is the gentleman's name; he's trying to add some clarification text to the spec around uniqueness, in particular trying to put forward the idea of the combination of the source plus the ID needing to be unique from the producer's perspective. And I know that there have been some comments going back and forth, and there's still some open comments on this PR, but I wanted to get a general sense from this group, whether using source and the ID together is headed in the right direction or whether people have some concerns about that, because only a few people made comments on the PR, and I wanted to open it up to the broader audience. So let me just sort of pause there. Does source plus ID sound right in terms of uniqueness, or does that raise any concerns for people? What's the scope of the uniqueness we're talking about here? I believe it's unique within the scope of the producer's perspective, whatever that means to you. But as a consumer, I could still see duplicates if somehow two different producers decided to use the same source and event ID. That may technically be possible. I need to go back and double check what he wrote here, but that may technically be possible. Because I think it would be useful for it to be globally unique in some sense. I think that is what Alan wanted to achieve, to basically force or make the event producer responsible for choosing a unique source. So the event producer, basically it's on line 241, basically it says the producer, well, the producer must ensure that source and ID are unique for each distinct event. So basically he asks the producer to choose a source that is globally unique.
Personally, I think that is, well, impossible, but that is the proposal he's doing here. Does that answer your question, Evan? It does answer my question. I think there are ways, using either GUIDs or authorities, to make sure that it was globally unique, but it sounds like we're not trying to do that. Can you open up or add a comment to the issue? I can add a comment, yes. Please do, to poke on that. Anybody else have a comment on the direction proposed here? So I don't remember if my comment is on this PR or some issue or something, but I for one do not believe we can practically enforce global uniqueness in source plus ID, because we don't know who the producers are. There's no global registry for them. It puts a burden on them. If an open source project uses CloudEvents as their format, they will have to somehow make sure that the source is globally unique if they want to be conformant, if we require it. And that either means that they generate random strings or put the burden on their user to make it globally unique. And I don't think either scenario makes sense. I gave an example along these lines, wherever I put that comment, where you can see more, but that's my thinking here. So are you suggesting that we basically don't do this PR at all and leave it as it is? Well, I suggest that we can clarify it, but we don't require globally unique source plus ID combinations. And in terms of clarifying it, what kind of direction would you like us to head? Great question. I don't know. I would have, we would have to ask someone who has confusion about the uniqueness, because to me it wasn't confusing at any point. It's producer defined. If you have a deployment of an open source project creating CloudEvents, it's on you to make sure that your events are unique in the source plus event ID, or source plus ID, combination in your context.
If a cloud provider has a thousand deployments using that same open source project, it would make more sense for them to prepend the customer or workspace or some such ID to the source and event ID, rather than assuming that they are unique. There's no way they could actually rely on our spec saying that the events are globally unique in source plus ID. It's just not practically possible without some kind of global registry. Okay, thank you. Neil, did you want to say something? Neil, I've noticed you came off mute. Can you hear me? Yes, we can now. I guess my point was, this is a problem that we've been hit by again and again and again, and there's lots of different ways of solving it. And in the context of cloud, you've normally got some kind of security context, and the security context is something that you do have some kind of awareness over, and you do have an operational model for defining things within your context that are potentially, that should be globally unique to you relative to everyone else. If those higher level security contexts are also unique amongst themselves, then you naturally inherit globally unique behavior. But at the same time, I don't think this spec is far enough along to make any kind of grand claims about that, because until we get to security and context and things like that, then I think what we've got for now is something that could evolve naturally, and I think this problem will naturally be solved. So it sounds like you're sort of in the same camp as Tappini then? Yeah. Right. Anybody else wanna? My guess is we're never gonna have a flat hierarchy globally. That's just unmanageable, but there's always gonna be some kind of hierarchical context within which everyone is going to operate, in order just to make things scale. All right, thank you. Anybody else wanna speak up? I agree with what has been said, and I tried to write that as comments.
I think there is still a need to clarify it, but I guess I think one thing we can do is really specify and say that a consumer is allowed to ignore events that have the same source and ID, and maybe event type, but source and ID maybe is fine. So they're allowed to use that for de-duplication. So that as a clarification is good, and then maybe in the primer say also what you wrote down: hey, if you build a larger application, it's basically on you to make sure that the sources don't clash. But the language used here in the spec is too harsh, I think, because as the others said, it's not enforceable. That's an interesting approach. What do other people think about that? And let me rephrase it to make sure I understood what you were saying there. Don't necessarily add text to clarify or to be as prescriptive about the values of these fields, but rather focus on the receiver being able to do some de-dupe logic based upon source, ID, type or something like that, and then leave it at that, basically. Is that what you're basically implying? Yeah, so it is, you can do de-duplication on these fields. That's okay to do. You're still a valid consumer of events if you de-duplicate two events based on these but they were actually two different events. That is sort of not your fault, but at the same time it's not the event producer's fault either; it's not that the producer's code must ensure it. Basically that's an unfortunate circumstance in the overall setup of your application, and neither the code of the producer nor the code of the consumer is at fault. It's the fault of how the overall application has been set up. And so if you set up the application, don't do that, basically. Anybody want to comment on the proposal put out there? Sometimes you guys are too quiet. Go ahead. So I think that sounds good, except that it is actually, in a single deployment of a producer, or actually one producer, however you want to define that, they are unique in the context of the producer.
So when you say that the producer doesn't need to care about it, it actually does according to the spec now. The problem comes when you have multiple producers, or multiple deployments of the same producer, let's say. And that's where, if you combine events from two deployments or two different contexts, that's when you can have problems with the de-duping on the ID. And I think explaining it like that would be great, just to clarify. Yes. So Christoph, did you or did you not put a comment in there proposing that in the PR? I can't remember. I didn't put a comment to that extent. I can do that. Okay, that might be good to do, just to get that thought process out there for people to think about. From my perspective, since no one else is jumping up, I often tend to wonder whether these things, as some people have already said, kind of solve themselves. For example, if you are a quote-unquote real event producer and you know that people are going to use, for example, source plus ID as some sort of de-duping thing, you'd be pretty stupid to produce non-unique values for those, and you'd be pretty stupid to use that producer if it's gonna cause problems because they don't do that. So that's why I tend to sometimes think that you don't need specs to be too prescriptive here, because people will just do the right thing anyway. And by being too prescriptive, we actually may limit the usage of the spec, because there may be some situations where a person either can't or just doesn't want to be that unique about these things, and they're perfectly okay with that. So that's where I might tend to land on these things, because I think they sort of just work themselves out of necessity. Anyway, anybody else want to raise a comment on here? Otherwise we're gonna continue the discussion back in the PR itself. All right, moving forward then, that's it for the PRs.
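The de-duplication behavior discussed above can be sketched in a few lines. This is a minimal illustration, not code from any SDK: the consumer treats the (source, id) pair as the identity of an event and drops repeat deliveries, which is exactly the consumer-side allowance being proposed for the spec.

```python
# Hypothetical consumer-side de-duplication on (source, id), as discussed.
# Attribute names follow the CloudEvents spec; everything else is illustrative.
seen = set()

def consume(event: dict) -> bool:
    """Process an event once; return False if it was dropped as a duplicate."""
    key = (event["source"], event["id"])
    if key in seen:
        return False  # per the proposal, a consumer is allowed to ignore this
    seen.add(key)
    # ... application-specific handling would go here ...
    return True

# Two deliveries of the same event: only the first one is processed.
event = {"source": "/orders", "id": "1234", "type": "com.example.order.created"}
assert consume(event) is True
assert consume(event) is False
```

If two deployments of the same producer emit clashing (source, id) pairs, this consumer silently drops a real event, which is the failure mode the discussion attributes to the application's setup rather than to either party's code.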
What I want to do now is quickly talk about some of the security issues. I think one of the big items we have in this milestone is to address all known security issues. I believe that a fair number of them fall into the same category as the one we have highlighted here, which is doing things like encrypting the data, determining things like event confidentiality, and so on. So far, most of those have either gone uncommented, or someone like Clemens speaks up and says, don't go there, it's scary, and we end up saying we're gonna deal with it after 1.0, or we'll deal with it as a follow-on spec on top of the cloud event, or something like that. But I guess what I'm getting at here is I wanted to know whether you guys on the call are okay with that general direction, because I believe at least two of the PRs or issues out there had Clemens comment on them basically saying, let's not go there, and I haven't heard any pushback on that. But I wanted to bring it up here to, one, get you guys to look at those issues, but two, give you the opportunity to voice your opinion on this call, because I'm gonna assume at this point that silence means no one's really that interested in addressing it, and I don't wanna make an incorrect assumption. Anybody wanna speak up on these things? Yeah, I was gonna say, I think it's something we need to have an opinion on, or guidance on. So yeah, I'll have a look at those. Okay, please do, John. I guess I might have misspoken a little there. I think in at least one of the comments, Clemens did say to potentially add something to the primer, or maybe it was me who was suggesting it. I do think we need something in the primer to explain why we're doing nothing, if that is the choice we do make, just so people know we at least thought about it.
I think particularly for the immutability of the event context, having some guidance indicating whether it's mutable, something like what you commented about best practices, what you should and shouldn't change, or if you change X, also change Y, might be useful. Okay, can you do me a favor and comment to that effect in that particular issue? Thank you very much. That'll at least get the ball rolling, because given the way things have worked in this group so far, what ends up happening is someone puts an idea out there and then someone, thank God, volunteers and says, okay, I'll take a stab at what people have mentioned. So if you can at least get the ideas out there, someone else can take the ball and put together a PR, even if it's just some comments in the primer. Assuming you don't have time to do it yourself, Evan. If you want to, that'd be great though. I will at least put a comment there. Thank you very much. All right, anybody else want to comment on the general topic around security? Okay. Can you remember why I thought this one was interesting? Hold on a minute. Oh, Evan, this was yours. Do you want to talk to this one? For some reason, this one jumped out at me as something we should be talking about. Oh, I spent, I don't know, nine months or so in security, and so whenever I read a spec I start to think of all the interesting things you can do. So I looked at some of the interesting things you could put in our different fields, and it seemed like some of these are unexpected, and we should either have a test suite that tests that the unexpected things go through, or tighten up the types so that you don't get surprises. As a couple of examples: quotes and a smiley face, I think, are allowed as an ID. That seems like something that might throw off some JSON parsers or some libraries.
I think type is actually allowed to contain Unicode, so you could actually have an event type that is a smiley face or a wink or something like that, and again, that seems like something that people might be thrown off by actually seeing on the wire. Interesting. So what do you all think about that? Come on, guys. Okay, so from my perspective, I thought it was useful mainly because this could expose, as you're basically saying, problems in the spec from an interoperability perspective. I think the last thing we want to get into is a fight between two people where the producer says, hey, I'm producing valid spec-compliant stuff, and the receiver gets this smiley face in there and it's completely unexpected, and everybody else says, no, no, no, that's just too weird, we shouldn't allow that. So I was thinking the spec might need to be a little bit more precise on these things, as you're suggesting there, Evan. But in terms of next steps, is it a matter of just looking at each property to figure out what rules we may want to put in place, or is it more of a TCK kind of thing that we need to create, as you were suggesting? I'm happy to create a set of TCK cases, and if people say, no, those are ridiculous, then we go back and we tighten up the spec. Okay, that sounds like a good thing to me. Anybody else wanna jump in here? Okay, yeah, if you could do that, Evan, that'd be great, I'd appreciate that. All right, anybody else wanna comment on that one before I move on? Okay, we have only a couple minutes left. These ones aren't technically ready, however, since Neil, you're on the call, is there anything you wanna say about 218? Can you hear me? Yes, I can. Yeah, it's probably the longest PR I think I've seen. Sorry, let me rephrase that. I think it's probably the longest PR I've seen for some time.
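The kind of TCK cases Evan describes can be sketched quickly. These cases are hypothetical, not an official test suite, and the `specversion` value is only illustrative; the point is that values like quotes or an emoji in `id`, or Unicode in `type`, are legal JSON today and survive a round trip, so a conforming consumer has to cope with them.

```python
# Hypothetical TCK-style cases: attribute values that are spec-legal
# but that receivers may not expect to see on the wire.
import json

surprising_events = [
    # quotes and an emoji as the id
    {"specversion": "0.3", "id": '"\U0001F600"', "source": "/a", "type": "com.example.x"},
    # a Unicode wink as the type
    {"specversion": "0.3", "id": "1", "source": "/a", "type": "\U0001F609"},
]

for event in surprising_events:
    wire = json.dumps(event)          # serializes without error...
    assert json.loads(wire) == event  # ...and round-trips intact
```

If the group decides such values are "too weird", the fix is tightening the type definitions in the spec; if not, cases like these belong in a test suite so libraries don't choke on them.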
For me, my background is in events and streaming, and so when I think about events, I think about them in the context of a stream, and that stream is defined by the key. This all goes back to data modeling as far as I'm concerned, and I guess that's the lens that I see this through, because I work for Confluent, the company that's effectively behind Kafka. So it's hard for me to see how it could be the responsibility of the consumer to determine what the stream key is. When you think about a relational table, when you do your data modeling, you define what the key is for that row, and that for me is exactly what this scenario is doing. It's largely a data modeling exercise. And so the reason I wanted to bring this one up here, aside from Neil being on the call, is because, Neil, that one sentence that I've highlighted in there I thought was probably the key one for me, reading the latest set of comments. And I wanted to get a sense from the other people on the call. When it comes to a receiver being able to put events into particular buckets, as I guess Kafka does, who typically defines what bucket it goes into? Is it the receiver, as Clement is basically suggesting, or does the producer give a hint through some sort of key? My stab would be that the producer is the one that knows how to logically associate stuff to bucket them. That would be my off-the-cuff comment. Okay, thank you, Clement. Gem? I'll go with Clement's answer, okay. Okay, anybody else wanna speak up from their experience? This is Vladimir. Yes, I agree with Gem. It is the producer that would define where the things go. Okay. Okay, here's somebody. So yes, it's the producer that defines the key, but it's not necessarily this event's producer that defines the key, because it can go through multiple middleware hops before reaching its eventual consumer. The original producer has no way of knowing how some intermediary might want to partition or split the events.
And that's the point that Clement is making. Not that it's the consumer's responsibility, but that it's the responsibility of whatever is putting it into Kafka, for example. It's their responsibility to set the key there. And if it's the semantic key, as you're talking about event modeling, that's the source. We already have a field for that. It's not necessarily the source, because it might be a transaction ID coming from the same source. It might be a series of related events that define the stream, and they can come from the same source. Oh, sure. Multi-level. If you want to go multi-level there, that's fine. One of the comments on this is proposing that we pull it out of the data payload. I don't think that's very secure, providing anyone access to the data payload unless they have the correct credentials. The other aspect on top of that is allowing partitions and topics to be created. That's something that's done at design time. You can't just look into the data payload and say, okay, I'm going to create a topic with two partitions or 5,000 partitions or even 200,000 partitions, because those are all very workload- and use-case-specific. It's not something that you can automate. It's like a database table, where you define it, you model it, you shape it for the workload characteristics. It can never be done in an automated way, because the implications of that are massive. Sure, sure. It's not about creating the topics. Let's say there's a producer A that creates the event. It goes through cloud A to cloud B, where there's another producing component that acts as both a consumer and a producer. It takes the event and wants to pass it forward, but split in a different way, partitioned in a different way. Now you're saying that we would have to create a new event, basically, that has the event key set differently, instead of just passing on the event but partitioned differently. Hold on a second.
On the other hand, let me just get Clems... I'm not Clems, jeez. Jim, since he raised his hand, go ahead, Jim. This may have to be the last comment since we're running out of time. Oh, good. I think I'm echoing what Neil was saying. Maybe I should clarify my first comment. It's whoever puts it on the wire who gets to decide how to partition it. The original producer will do that at the time he originally sends it, and then some intermediary will do that at each hop along the way. And who knows, or dare I say who cares, how they decide to do that. I'm struggling now to understand how we got into this position in the first place and why this needed to be drawn out as some sort of spec item. So, Jim, can you do me a favor and add a comment to the PR to that effect? Sure. Can I just have one final comment? Yeah, absolutely, please, go ahead. I'd prefer to go with the simplest solution. I'm not in favor of introspection or adding more administration overhead. I mean, if this spec can evolve, which is what it should do through our learnings, then the simplest thing for now would be to put it in as an extension. And if we then discover that it just doesn't work, then we've learned, and we can make a decision based upon that, because at the moment there are a lot of different lenses that people are viewing this problem through. If we can do something simple that people understand, and it is an extension over the basic spec, then we know, because it's simple, that it is going to work. So it's not going to stop people from adopting CloudEvents, because we see a lot of our customers wanting to use CloudEvents, but at the same time I'm saying, well, let's just see how this evolves, because before we can use this properly within Kafka, we have a few things that we need to figure out. Right, and I know IBM actually wants to get the Kafka binding done too, so we're anxious as well.
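The extension approach being argued for here can be sketched briefly. This is an illustrative sketch only: the attribute name `partitionkey` is an assumption rather than settled spec text, and the CRC32 hash merely stands in for whatever partitioner a real Kafka client uses. The point is that whoever puts the event on the wire derives the partition from a key attribute if present, and each intermediary hop is free to re-key before forwarding.

```python
# Sketch: partitioning on a hypothetical `partitionkey` extension attribute,
# falling back to `source` when no key is set. Not real Kafka client code.
import zlib

def pick_partition(event: dict, num_partitions: int) -> int:
    key = event.get("partitionkey") or event["source"]
    return zlib.crc32(key.encode("utf-8")) % num_partitions

event = {"source": "/gates/7", "id": "42", "partitionkey": "txn-991"}
assert 0 <= pick_partition(event, 8) < 8

# An intermediary that wants a different split just swaps the key
# before forwarding, without touching the rest of the event.
rekeyed = dict(event, partitionkey="gate-7")
assert 0 <= pick_partition(rekeyed, 8) < 8
```

This keeps the decision with whoever puts the event on the wire at each hop, which is the position Jim and Neil converge on, while staying an extension over the base spec.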
So let me take this action item to go off and talk to Clemens, hopefully tomorrow, offline, and see what his objection is to making it a Kafka-transport-binding-specific extension for right now, to at least get us over the current bump in the road. Okay, because I think he was still pushing back on that. So I'll take that action item to talk to him and see if I can better understand it. All right, roll call. I apologize, we're way late. So Jim, are you still there? Yes, I am. Okay, Christian, you there? Right here, hello. Okay, Matthias, let's see, Matthias on the slide. Erwin? No? Haven't got you. Anybody else I missed? Roco? Joe Sherman, did you get me? Yes, thank you, Joe. Good, thanks. Okay, anybody else? Victor. Matthias is here. I'm sorry, who's that? Victor. Victor. Victor, got it. Okay, anybody else? All right, thank you guys very much. And I believe actually right now we're supposed to be having a phone call on something. What were we having a phone call on? The KubeCon EU deep dive. There you go, KubeCon planning. So if you're not interested in that, you may drop. Thank you guys for joining. And I apologize for running over. Thanks, guys. Bye, thank you. Next week. Thank you, bye. Yes, guys, hours and hours of Zoom. You can't escape it. I'm gonna step away for a minute. I'll be right back. Okay. So let's see. Scott will be back. Mr. Barker, you're there. First off, Joe, are you actually sticking around, or are you just...? No, I'm slowly getting off. Okay. So I'm ready. One, two, three, four, five, six, seven. Missing somebody. Oh, I'm missing... oh, Doug. Did I have a doc for this? I can't remember. Hold on a minute. So, okay. So this is the current layout that we have. Actually, I apologize. Bernd, you wanted to bring up a topic. Do you want to mention this one right now? Yeah, why not? I'm new to the call. Let's hear about your workflow topic. Now, the thing is that, okay, we are an open source workflow automation vendor.
So I'm quite active in the workflow automation space, and I'm currently giving a lot of talks about orchestration, choreography, event-driven things. And what's currently coming up quite often is that even in serverless worlds, but also in the whole cloud-based space, a lot of people are missing workflow functionality. All the top vendors are building something like AWS Step Functions or Azure Durable Functions or Google Cloud Composer and all these kinds of tools, but they're not represented at all in the cloud-native landscape. And when I discussed that, like at Serverless Days Hamburg, Christoph approached me and said, hey, probably for KubeCon there's an opportunity to work on the landscape anyway, and that might also be an opportunity to include that category in the landscape. And that's what I wanted to bring up in the call, actually, and to discuss. Yep. So we do have a topic on the agenda for the bigger serverless working group, where I believe Scott and Dan were gonna take the lead on seeing what we need to update in the serverless docs. And while that doesn't currently include the landscape, I think this would definitely fit in there very nicely. What I'd like to do is put your name there. I don't know if I can spell it right. Almost. Almost. Yeah. So what I'd like to do is, I assume it's gonna get covered under that particular topic and that you three will figure out how you wanna move forward there. Well, I don't think, correct me, Dan, or Scott, if you're back, Scott, whether I'm wrong here or not, but I don't think you guys actually started that work yet. Is that true, Scott and Dan? That's correct from my side. Yeah. Same from my side.
So what I was thinking, Bernd, since this is obviously a topic that you care strongly about, maybe it'd be worthwhile for you to take a look at the serverless doc that they put together and figure out what types of changes you'd like to see in there, or even go as far as to write an additional section if you think that'd be worthwhile, or whatever sort of editorial changes you'd like to see, basically. Take the pen and run with it. Can you give me a good pointer, like where to start? What exact artifact is that? Hold on. CNCF serverless. Yeah. Okay, so here's our white paper. Yeah. I'll put that in here. And then we have, so here's the white paper itself. Now, hold on a minute. Where was the landscape? That can move. Oh, there it is. So there's this. Now, those are the two documents that you were thinking about modifying. Yeah, and to understand the scope: is this just covering the serverless landscape, right? Not the whole cloud-native landscape, which is probably a different matter. Yeah, this is just serverless, correct? Yes. Yeah, okay. Okay. Yeah, and then the, oops. Can I just turn it? I don't know why this is flipping out here. The white paper as of right now is just a markdown document. Holy moly. It's just a series of markdown documents that eventually get pulled together into a PDF file. So I think if you were to just open up a pull request or whatever on this MD file right here, I think that will eventually make it into the real thing. Okay, I will have a look at it. Okay. Don't make a sound. Okay. All right, so then going back to the bigger issue, or bigger topic. We have this general layout here. Have you guys given any thought to this? Are you guys still okay with the general layout here? Otherwise, I think the next steps might be to just put together some outline PowerPoint slides kind of a thing. How do you guys want to move forward here?
Honestly, I haven't worked on my deployment pipeline. I honestly expected to start working on it a couple of weeks ago, but haven't had the time. But I don't think it'll take me more than one or two weeks. Okay. And I do have some design docs for it, so I might be able to share those. Now, am I correct in assuming, so we're talking about this section right here, right? Yeah. Okay. Now, am I correct in assuming that you'll have a little bit of a presentation as well as running code to go along with it? Is that true? Yeah. The basic thing is that EKS now supports deployments using Lambda, because to get access to an EKS cluster, you can use an AWS IAM role. And I did a proof of concept with Lambda that deployed using that IAM role. The thing is, to do a deployment, you do need to transmit what app you'd like to deploy, what version, what options and stuff like that, and that's where CloudEvents comes in. I don't know what code I could run live, though. I will definitely jump through the code and showcase how CloudEvents are used. Okay. So your thing is more just a presentation and not running code? Code would be run, but not written live. Oh, no, no, I'll run it, yeah. It will be a dummy deploy or something like that. Okay. Yeah. Okay. Okay. So tell you what, since it sounds like you guys haven't done a whole lot yet relative to actually putting together stuff for the presentations, why don't I reach out to Chris and check to see if we can get the template that they want us to use for this stuff. And if that's available, I'll just create some placeholder files, and then as you guys get time, you can start adding more to it. At least that way it feels like we're making some sort of forward progress, and you feel like there's a document up there you can make slow incremental progress on, one slide at a time. Hey Doug, is there a serverless working group email list? Yes, there is.
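The deployment request described above could look something like this as a CloudEvent. All the field values and the event type name are made up for illustration, and the `specversion` is just whichever version the demo ends up targeting: the envelope says a deployment is requested, and the data carries the app, the version, and the options.

```python
# Illustrative only: a hypothetical CloudEvent asking for an app deployment.
import json
import uuid

deploy_event = {
    "specversion": "0.3",                    # whichever spec version is in use
    "type": "com.example.deploy.requested",  # hypothetical event type
    "source": "/ci/pipeline",
    "id": str(uuid.uuid4()),
    "datacontenttype": "application/json",
    "data": {
        "app": "shopping-cart",              # what to deploy
        "version": "1.4.2",                  # which version
        "options": {"replicas": 3},          # deploy-time options
    },
}

# This is roughly what the Lambda would receive and act on via its IAM role.
wire = json.dumps(deploy_event)
assert json.loads(wire)["data"]["app"] == "shopping-cart"
```

The Lambda side would then parse the envelope, check the type, and use the data to drive the deployment against the EKS cluster.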
Because I assume that the subgroup that started CloudEvents is not the same as the original serverless working group. No, it is there. They are technically separate groups with separate mailing lists; it's just the same group of people as of right now. Right. So maybe we should also engage that group and say we're gonna try to update the doc, and maybe those individuals are also interested in participating. You can do that, if I can find the darn mailing list. I know it's out there somewhere. I'll find it. Okay, so tell you what, let me do this. Where's my cursor? I'll take the action item for that. And are you talking about just further updates of the docs, or were you thinking about getting their input on everything? Yeah, it looks like there were 26 people that contributed to the white paper. Potentially those people are also interested in updating it. Okay. That was my thought. Okay. That's fine. I'm not sure I got it. Okay. I can do that. All right. In that case, is there anything else you guys want to talk about on this topic? Otherwise we're gonna end early. I just want a little bit of clarification on, well, basically the third thing, the serverless work group session. So my name is under that, but as of now it's kind of unclear to me what the serverless summit is exactly going to be, and whether it would make sense to put the stuff I want to talk about in as part of the serverless work group, or if that would rather be a separate talk, or how that would be handled best. Yeah. I don't know, to be honest, because I don't yet understand the serverless summit in terms of its structure and topics, because my initial reaction was that it's a direct overlap, or at least some of it will be a direct overlap, with what we want to do here. What I'd almost rather do is see if we can get Chris Aniszczyk to agree that maybe what we should do is take this session and turn it into a session at the serverless summit. But I guess, well, okay.
So I guess that doesn't really answer your question about whether you should talk about your stuff under here regardless of which summit it's in, or whether you should do your own CFP for the serverless summit, right? Well, is the serverless summit a CNCF thing? Yes, I believe so. Is it the same time period as KubeCon EU? It's gonna be... I think it's the Monday before, and then KubeCon starts on Tuesday or something. Okay, great. Yeah, so it's right before it. Yeah, I think it is right before it. So... What I could also do, for the thing I would talk about, is make a pull request on the white paper, and then maybe we can have a session where multiple people talk about what has changed in the landscape, and I would have one part there. So everybody talks about what they have contributed, basically. That would be one idea, so that it's sort of clear that it comes out of the serverless workgroup, if that is what we want to do. On the other hand, we could also say, hey, that speaker happens to be part of the serverless workgroup, but that's not really relevant, they're just giving a talk. So I'm sorry, I think I need food. I wanna make sure I see what you're saying there. So these are the two topics that you were talking about, right? Well, I think what I wanna talk about is that there are vendors that bring out their own function-as-a-service to use with their service. So for example, PayPal's Braintree will use function-as-a-service, Auth0 has their own function-as-a-service, Twilio has their own, Adobe has their own, and so on. So we basically have function-as-a-service providers that are not a general-purpose cloud but are really specific to being used with another service. That is very interesting, because we haven't had that before in computing, that your code is then really distributed across cloud vendors, and then you run into a lot of fun things that maybe no one has thought through.
So that is what I would talk about. Is that something that you think the serverless working group should take action on? Personally, I do, because I think if you don't have a standard or anything, then basically if I use five services and each service comes with its own function-as-a-service, I have five different interfaces for what a function is, how I deploy it, how I get metrics or logs out of there. So it basically becomes a mess, I guess. I haven't been in that situation, but it's what I imagine it to be. Versus, if there were clear standards around what it is, or if everybody would agree to run, I don't know, Knative. Let's say that everybody would agree on Knative. Then I could deploy it on each cloud vendor individually, but I'd have a standardized interface to do so. And when you think about talking about this, how long do you think your talk would be? That can depend. I mean, I can do live demos and talk in detail about all these products, and then I can stretch it out. But at the minimum, I could also do a lightning talk, or talk really quickly: here's what they're doing, a few examples, and then talk about the problems I see, and then I'm done in 10 minutes or so. Okay, well, the reason I was asking how long the topic is, is because the first thing that runs through my mind is that this sounds like a big enough topic to warrant its own session at the serverless summit. However, because you seem to think that it would directly impact the serverless working group in terms of what we look at for possible standardization going forward,
what I was wondering was whether, aside from the session at the serverless summit, you could condense it down into a very brief five-minute lightning talk, like you said, for the serverless working group, as a way to jumpstart the conversation with the community: do you guys agree with Christoph that these are some areas where we should look at possible standardization? And then you could use the five minutes to talk very quickly about the use cases or the scenarios that you've run into. Does any of that make sense? That does make sense, yes. Okay. Because I do think it's a very interesting topic, and I wouldn't want you to not submit it to the serverless summit, and it does sound like it's worthy enough to get its own time slot, basically. But I do want to use the information in our birds-of-a-feather session, or in the interaction with the community. So I don't want to do both. Okay, that would also make sense. Okay, anything else you guys want to bring up here? Okay, I'm going to assume then that the lack of discussion is just because everybody's busy with everything else, and that it will ramp up as we get closer to the event, because it's still a couple of months out. In terms of next steps though, I know, Bernd, you're going to look at making a PR, and actually, Christoph, you mentioned possibly making a PR to the white paper as well. Obviously that'd be very welcome. But in terms of next steps, do you guys want me to schedule another call now for a well-known time, or do you want to wait until we get closer, and then when panic sets in, we'll set up a call? How do you guys want to work this? I think setting a call for one or two weeks out would be good, just as a reminder for me to shame myself that I still haven't started. I like the way you think, because I'm the same way. I need to have many forcing functions. Me too, but I would probably go for two weeks, not next week. Yeah, I was thinking two weeks as well.
Okay, so what do you guys think? I'll set up a call for two weeks out. I assume right after the regular phone call is good for everybody, right? Yep, yep. Okay, okay, hold on a minute. Let me make a note to myself. Okay, cool. All right, anything else you guys want to talk about? All right, in that case, we are done. Yes, Scott, panic. All right, in that case, we're done. Thank you guys very much, and we'll talk next time. Bye. Bye-bye, thank you. Bye-bye, bye guys. Bye-bye. Oh, Bernd? Bernd? No, okay, he's not there. All right.