All right, Tom Abbott, are you there? I am. Are you new? I apologize if you're not. I am. Definitely new. I'm from Okta. Okta? O-K-T-A. O-K-T-A, thank you. You're going to be a regular on here, I hope? Yeah. Excellent. Cool. Well, welcome. Thank you. Varun, are you there? Hey there, Varun here, morning. Good morning. All right, who from the Pivotal side do we have today? Maybe you guys can add your names to the agenda. Thomas, are you there? Thomas, you on mute? Hey, we're here. Rachel's here as well. Oh, Rachel. Excellent, thank you. Rachel, gotcha. Thomas, we're waiting for you. I was just noticing, you made a comment on an issue related to the distributed tracing stuff. Does that satisfy your AI, or does your AI specifically mean a PR? So the original, I think I had one AI that I basically addressed on the issue. I'm filing this to report my findings. So it was my AI, and it can be auto-closed at a later time. OK, I'll mark it as done then. The interesting thing, though, is I found out that there is a separate directory under the OpenTracing spec that has something that might come back again as a potential avenue for correlation ID. So we can talk about that a little later if we'd like. OK. Austin, are you there? Hi, Doug. Hi, everyone. Jim Curtis, are you there? Yep. Excellent, thank you. Let's see. Who's at 925-699-0277? It's John Mitchell. I'm stuck in the car. I gotcha. OK, I gotcha on the agenda. Thank you. Thank you. Yep. Klaus, are you there? Yes, I'm here. Hello. All right. All right, what about Chris? Borchers? Yep, I'm here. All right, thank you. It's a little bit of a game here. I see the list changing, but I'm going to have to figure out where the new name appears, and the list tends to jump around. I feel like your auto-correct does not like my name starting with a K. Well, you never know whether it's the auto-correct or just my bad typing. Either one's a very good suspect. I assume that's the Pivotal guys, right? Mark Fisher? Yes, hi. Hello. 
What about Juergen? Yes. And Thomas? Yes. And Scott? Yes. OK, sounds like the same voice, but I won't say anything about that. They're all here. They really are. I believe you, OK. All right, what about Michael? Michael Payne? No, it's me. All right, gotcha. William, are you there? William, are you there? Yeah, I'm here. All right, what about Kathy? Yeah, I'm here. Excellent, thank you. What about Cooper Marcus? I'm here. Hi. Are you new? I apologize if not. Yes, no, I'm new. My name's Cooper. I work on ecosystem and product at Kong; we're an open source API gateway and Kubernetes ingress controller. And I'm here to learn a bit more about the serverless working group and how we might extend Kong to support serverless and Kubernetes. Cool, and is it K-O-N-G? That's correct, yes. Excellent, thank you. OK, cool, thank you. You're welcome. David Lyle? Yes, I'm here. Excellent. Let's see, who else am I missing? Clemens, are you there? Do I have to be? OK, Clemens, yeah. Yes, you have to. Even though it's a public holiday, so technically I'm not here. Yes. All right, we'll give it another 30 seconds or so, till 12:04 my time, then we'll get started. Is there anybody else I'm missing on the agenda? Doug, this is Arun, I'm here as well. Arun, great, thank you. I've got your name, right? All right, anybody else I'm missing? Thank you, everybody. OK, why don't I go ahead and get started? All right, so Arun, did you want to talk about your AI? I will, yeah, totally. Thank you for the reminder, actually, first of all. I was just in the process of sending an email, and I guess I've got to figure out which email ID I'm subscribed with, because when I tried to join using argu.amazon.com it says that it is already part of the working group. But when I tried to send an email from argu.amazon.com it says the message was rejected. So I guess I need to figure out the exact logistics on the email ID. 
I don't want to send it from my personal Gmail ID. So once that is sorted, I have the email drafted and ready, so I'm happy to send that out. So that's part of it. Just to let you know, I ran into a similar issue yesterday, and what I had to end up doing was rejoining the group and giving them a password and stuff like that. So look for an email from me from last night with the URL to join. Yeah, and that's where I went. I literally went over there. So when I go to the URL and say OK, join this working group, it says this email address is already registered; to use it, log in. I guess I've got to figure out how to log in, because I don't remember joining it from my Amazon ID. Yeah, you've got to go through the login process, and I can't remember for sure whether I had to say that I lost my password or not, but I went through the entire login re-registration process and then it started to work. Yeah, yeah, yeah. So I think that's what I'll do. So hopefully during the call today, I should be able to go through that process and send the email right away. And in the email, I have mentioned what kind of information I'm looking for. Essentially, what I'm looking at is we want to see the customer names, who would like us to support this as part of AWS Lambda. Why does this matter? What problem is it solving? If they have any timelines on the implementation of this, and more importantly, how would they like this format to be supported? Is it natively supported? Is it as a Lambda function? Is it consuming? Is it generating? Is it both? Is it in a digital format? So that's the kind of detail that I'm looking for. Yeah, okay, sounds great. Thank you. And one more on a related topic. I've been working with the Lambda team quite regularly, giving them updates from the serverless working group. 
Hopefully starting next working group call, maybe Ajay Nair, who is one of the PMs in the Lambda team, should be able to start attending these working group calls regularly. I've been constantly raising the priority of this working group to the Lambda team. We should have direct representation from the Lambda team itself. That sounds great. Yeah, I didn't know Vijay moved to that. I worked with him in Amazon WorkDocs, so it'll be good to have him on. It's not Vijay, it's Ajay Nair. Ah, okay, not Vijay Nair. All right, still good to have somebody on. All right, that sounds wonderful. Thank you. All right, then moving forward, let's talk about the face-to-face vote. I'm somewhat shocked; there was a slight winner. Oh, it went the other way. Interesting. A whole bunch of people just voted. Completely changed the results. Okay, so as of right now, and we did say the vote was going to close at this call, so as of right now, June 15th is the date for the face-to-face, which completely messes up my other document that I created. So hold on a sec. So what I'd like to do now is, I will update this document here. What I started to do is create a document for the face-to-face. I'm going to change this. Okay. Yeah, I know Clemens will be very happy. Okay, so what I'd like to do though is get everybody who's planning on attending to add their name to this list as soon as possible. Because one of the things, actually I think it was Clemens, I think you brought up the question of how many do we need to actually have quorum? Because obviously if we don't have quorum, we may not want to hold the meeting. We still could, it just won't necessarily be a binding meeting with votes or anything like that. So if people are planning on attending, please add your name to the list as soon as possible. And I'm not quite sure what the right number is to mandate there, or to say that we have quorum. Hopefully when we get there, we'll know it. 
But in my mind, I'm thinking if we can get at least eight or so of the voting companies to say yes, they're going to show up, that might be the right number. But let me just pause there. Are there any questions or comments on that? Suggestions for alternative ways of determining whether we have quorum, anything? Sorry, I missed the meeting last week. Where are we planning to do this? Good point. As of right now, it will be San Francisco. Okay. And we don't know the exact hosting spot, although I think, Austin, you volunteered the location, is that correct? Yeah, and if that goes through, Google's happy to host it. Okay. Yeah, if needed, Oracle might also be able to do it if it's in the Bay Area. Yeah, I figure almost any of us could probably host it. Austin jumped in there first, so. Okay. Cool. Any other questions or comments on the face-to-face then? All right, excellent. Thank you guys very much. And thank you all for voting. All right, so next work stream item. As of the last time I checked, which was around 30 minutes ago, we had a clear winner, which was this workflow functions composition one, which got the most votes. We did have one person, I think it might have been Chad, who voted three times and put an asterisk next to that, but even if you try to figure out which way he was really gonna go, it doesn't change the vote. It'd still be a very clear winner there. So the question that I have is two things. One is, can we get someone to answer my question here to give us a little more clarity on exactly what we're gonna be producing here, right? Is it a specification? Is it a white paper? Just something that we can then take forward when we go to the TOC to propose this next work item, because we need to get their approval since this is falling under the serverless working group activity, not under CloudEvents, which is a separate effort. So we need to get the TOC's approval for this other work item. 
So I'd like to get a little more clarity on what we're actually gonna be producing there, if someone can help answer that question. Okay, I can help answer that. Okay, that'd be great, thank you. The other aspect of this is timing. Do we want to set a milestone for when we're gonna begin this work? So for example, do we wanna start immediately once we get approval from the TOC? Or do we wanna wait until the CloudEvents spec has reached some milestone, for example 1.0, before we start our work? And the reason I'm asking is because if we start before we reach some milestone like 1.0, we then run the risk of dividing our time, and that may impact our forward progress. So let me pause there and see if there are any comments, or just comments in general about that. I think there is value in having some parallelism, because it may impact CloudEvents, adding more metadata, et cetera. Yeah, I think so. Also at the beginning, there are quite a few questions, like what's the functional scope, like the question you asked about specification versus white paper; those need to be sorted out. And when we sort that out and start going deep, we might find out, yeah, we need to add some more metadata attributes to the CloudEvent. Any other comments? Just as a me-too almost, I would really feel happier with our spec in general if the group had more experience building applications out of this, and I think this is a great practice that could even be considered blocking for 1.0. Okay, anything else? Any other comments? Okay, so in terms of being parallel then, does that mean start immediately once we get approval from the TOC, or is there a pre-1.0 milestone we'd like to reach? I'm interpreting the previous comments as probably starting as soon as possible, but I don't wanna assume. So I would think that we can start after the TOC approval. Okay, any other comments on that? All right. Okay, so we'll head down that path then unless I hear any objections. All right, cool. 
Any other comments then related to the work stream discussion overall? All right, great. Thank you very much. In that case, Kathy, would you like to give a summary of how the correlation ID discussion went the other day? Okay, so I think the conclusion is we will go for kind of a property-bag format. So the original sender will specify the key-value pairs in a property bag, and that property bag has the sender's scope. And then, you know, if that event goes through some intermediate routers or gateways, those intermediate entities can add additional property bags on top of that. I think we have not reached a conclusion on whether the intermediate gateways or entities could modify the original key-value pairs; I mean, we haven't made a decision on whether those modifications should be put on top of the original property bag, or should just modify the original key-value pair in the original property bag. Yeah, that's pretty much it. Okay. Should the group expect a PR to come soon relative to this discussion, or are there more discussions that need to happen? Kathy? Oh, okay. So you're asking me, okay. Yeah. I think we can have a PR for that, and then the whole team can see the specific details, yeah. Okay. And will you be writing that PR? I'm just trying to figure out who the AI should be assigned to. I can do that, or Clemens, would you like to write that? Is Clemens in the meeting? Yes, he is, but he didn't pay attention for two minutes. So what work did you want to assign me? I said, do you want to write this? This is what I caught. She wants to know if you want to write the PR based upon the correlation ID discussion from yesterday. I can write it, that's fine. Yes, yes, that would be nice if you would write it, because I have a ton of stuff to write. Okay. Okay, great. Thank you, Kathy. 
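To make the property-bag idea concrete, here is a minimal sketch in Python of what the discussion describes: the sender sets an initial scoped bag, and intermediaries append their own bags rather than modifying the original. All attribute and field names here (correlationcontext, scope) are hypothetical, since the actual PR had not been written at the time of this call.

```python
# Sketch of the "property bag" correlation idea discussed above.
# Attribute and field names are illustrative, not from any spec.

def new_event(source, event_id, sender_bag):
    """Sender creates a CloudEvents-style envelope with an initial
    correlation bag scoped to the original sender."""
    return {
        "source": source,
        "id": event_id,
        # A list of bags: intermediaries append, they do not modify.
        "correlationcontext": [sender_bag],
    }

def forward(event, intermediary_bag):
    """An intermediate router/gateway adds its own bag on top of the
    original instead of mutating the sender's key-value pairs."""
    forwarded = dict(event)
    forwarded["correlationcontext"] = event["correlationcontext"] + [intermediary_bag]
    return forwarded

evt = new_event("orders/api", "e-1", {"scope": "sender", "order-id": "A42"})
evt = forward(evt, {"scope": "gateway-1", "hop": "1"})
# The original sender's bag is still the first, untouched entry.
assert evt["correlationcontext"][0] == {"scope": "sender", "order-id": "A42"}
assert len(evt["correlationcontext"]) == 2
```

The open question from the call, whether intermediaries may instead modify the sender's original key-value pairs, would change `forward` to mutate the first bag rather than append.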
If I might also suggest, one name I might recommend is Correlation Context, just to align with the distributed tracing work. It seems like we could even align directly if we use the same word. You have that, Kathy? Yes. Did you hear what Thomas suggested? Oh, okay, sorry. Okay, what's that question again? Sorry. The agenda says correlation ID discussion summary, and I apologize, I missed this, I was on a plane yesterday. There exists a header I just discovered in the distributed tracing spec called Correlation Context. It might fit all of our needs, and it might be nice to use that same name and specify that in HTTP it has the same value, so that we align with more specs. Okay, I'll take a look at that. And then I'll write the PR and then you can give comments or suggestions, anyone? Sounds great. Okay. All right, great. Any other points of discussion around the correlation ID offline discussion? All right, cool, thank you. All right, let's move on then to PR review. Hey, Doug, I've got one other question for the group. So just to move back one agenda item, it seems like there's a lot of interest in that function and event workflow specification. I'm just curious, maybe we could just chat for a couple minutes as to what people have in mind there. I know you asked for people to chime in with issues, but I wonder if we could just kind of do a quick poll and see what people are thinking. Because there's a lot of interest there, and I'm just wondering what these people have in mind. Sure, who wants to go first? So maybe Doug, could you pull up the workflow, I mean that proposal, so people can read it. There you go. Yeah. I guess for me, a couple of things that come to mind are maybe something of a specification or kind of definitions around input and output: what's expected as input into one function and as output of a given function as you go to chain them together. Yeah, I think that will be part of this specification. 
So from my point of view, I think this workflow specification should specify what combination of events triggers what functions. For example, is it one event triggering that function, or two events together triggering that function, or either of two events, or one of three events that could trigger that function? And then also, do the events just trigger one function or trigger multiple functions? If they trigger multiple functions, are those multiple functions executed in parallel or in sequence? So that's one aspect. And there could be some cases in the workflow, right, for example at the second step, where some additional functions might not need to be triggered by any event; when the workflow reaches that stage, it just starts executing those functions. And another aspect, as was brought up, is how the information should be passed between the functions, or how the information should be passed from the events to the function. How that information should be filtered and combined and then passed to the function. And the same applies to how the information from one function's execution result should be filtered and then combined, if there are multiple functions, with the results of the other functions, and then passed to the next function or next sequence of functions. So I think there are two parts: one is you specify the workflow, what events trigger what functions, and how the functions are executed, in sequence, in parallel, or branching; for branching you could have some switch state. And the other aspect is how the information is passed from events to functions and between the functions. So Kathy, do you see us actually producing a specification that says this is what, well, let me phrase it differently. 
I understand what you said there, but what's not clear to me is: are we producing some sort of technical specification that says here's how an application developer specifies the list of functions that get invoked in what order, or are we just writing a white paper that says these are the types of broad functional features that a platform should offer? Okay, good question. Okay, I personally think we should have both. One is, you know, what kind of functionality, the scope of functionality, will be covered by the workflow. And the other is, if we can have some specification which will be uniform across any serverless platform, that would be good for the user. If I'm a user, right, I just write one workflow specification and it could run on Google Cloud or Amazon Cloud, Microsoft, Huawei Cloud; that would be great as a customer. Okay, there are two aspects here. One is the definition of the workflow, like in Step Functions, the JSON that describes the state transitions. The other one is, for example, the messages that traverse it, the CloudEvents, with sort of a correlation ID, workflow ID, other things that need to be passed from one workflow step to another. So those are two separate things. Do you think we need to do both, or one or the other? I think the correlation ID will be implicit; it will be part of the workflow specification, because for any workflow, right, if it scales out, we must solve the problem of how to send those events to the appropriate workflow instances. We need a correlation mechanism to do that, because if that workflow involves three events, right, and there are many instances of each of those events, then how do you know which event to send to which workflow instance? 
So that's a must-solve problem if you want to work on the workflow. Right, but you want to specify essentially both, you're saying. One thing that seems interesting: we decided not to define the sort of spec of a function, but we will define the spec of a workflow. So yeah. Yeah, so when we define the spec of the workflow, the user needs to specify, say, which key-value pairs, or which combination of key-value pairs of that event, can be used by the serverless platform, or any entity, to correlate this event with another event and then send it to the right workflow instance, right? Because the serverless platform, or any entity handling those events and triggering those functions, hosting or instantiating whatever container or VM runs those functions, does not know that; it's specific to each application. And the developer of that serverless application knows best. So when the developer specifies the workflow, he needs to specify which key-value pair of that event can be used to correlate with the key-value pair of another event from another source. By key-value pair, do you mean a key-value pair in the payload or in the CloudEvents envelope? In the CloudEvents envelope. Yeah, because we already discussed that the payload could be encrypted, right? Also, we do not want the serverless platform to go deep into the payload. Okay, got it, thanks. I guess as a user, I'm not quite sure how this helps me, because does this eliminate the need for me to migrate all of my functions from cloud to cloud, or open source project to cloud? 
Is this doing something at that level for me? Because I'm mostly concerned about whether, if I build a bunch of functions for one cloud and I then transfer them to an open source project, I can do that without changing all my code and having to go through all the audit and all the build stuff and get it redeployed in a new location; that's basically a new application. Yeah, what I was saying is that essentially, let's assume you're gonna standardize the workflow description, but there's no way to standardize the function description. So it seems, evolution-wise, that first we need to standardize something like a function YAML spec, and then decide how those are chained together in a workflow, because there's not much value in describing the relations between functions in a specification language without being able to define a specification for a function that will be cross-platform. Well, I think you've defined an HTTP transport which is pretty generic and could be used across things that aren't even functions. So in that case, it feels like we do have some building block for workflows. Yeah, I mean, I would love to actually take some limited set of features that we believe are important for workflows and define their exact meaning. We believe filtering is important: we must be able to filter at least on these fields. We believe that joining is important: a system should consider how it does windowing, and what do you do when half of the join is dropped? There are a lot of things where I think we can come up with a spec that multiple pieces of software can interop with, which means that you can have both open source and even proprietary solutions, so that customers can feel confident that the semantic meaning of their product doesn't change if they go from cloud to cloud. Okay, I didn't mean to hijack the whole conversation. But it was just useful to get a poll of what people are thinking on the subject, because there's a lot of excitement. 
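Two of the semantics raised in this exchange, filtering on envelope fields only and correlating events to a specific workflow instance, can be sketched roughly as below. Every name here (the orderid attribute, the router class) is illustrative, invented for the sketch, not from any proposed spec.

```python
# Illustrative sketch of filtering and instance correlation as
# discussed above; all names are made up for the example.

def make_filter(**required):
    """Match only on CloudEvents envelope attributes, never on the
    (possibly encrypted) payload."""
    return lambda event: all(event.get(k) == v for k, v in required.items())

class WorkflowRouter:
    """Route each accepted event to the right workflow *instance*,
    using a developer-declared correlation attribute."""
    def __init__(self, accept, correlation_key):
        self.accept = accept
        self.correlation_key = correlation_key
        self.instances = {}  # correlation value -> events joined so far

    def route(self, event):
        if not self.accept(event):
            return None  # filtered out before any workflow sees it
        key = event[self.correlation_key]
        self.instances.setdefault(key, []).append(event)
        return key

router = WorkflowRouter(make_filter(source="orders"), "orderid")
router.route({"source": "orders", "type": "payment.received", "orderid": "A42"})
router.route({"source": "orders", "type": "stock.reserved", "orderid": "A42"})
router.route({"source": "billing", "type": "noise", "orderid": "X"})  # dropped
# Both matching A42 events join the same instance; the rest is ignored.
assert len(router.instances["A42"]) == 2
assert "X" not in router.instances
```

A real spec would also have to pin down the join-window and partial-join semantics mentioned above, which this sketch deliberately leaves out.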
Just wanted to better understand if we were all thinking along the same lines. And overall, I think this is an exciting initiative. I don't know if this is coming up with our own serverless app model, our own workflow spec, our own open API for event-driven workflows kind of spec. But I believe that there are a lot of user problems we could solve here. And this is kind of what serverless is all about, defining these business logic workflows. So I think this is definitely a good thing for the working group to tackle. And it's helpful to hear everyone's opinions on this. And hopefully we can get together and get some proposals out there. Yeah, and I think once Kathy puts together a crisper definition of what we're actually producing. And Kathy, if you can also include in that the notion of what's in scope, what's out of scope, stuff like that in the proposal, if you could put all that into a Google doc so that people can review it and work on it and tweak it as necessary. I have a feeling it's gonna take several iterations for us to get to something the entire group agrees with. Does that make sense? Yeah, that makes sense. So I think I'm going to first put the scope of functionality there. Once we agree on that, then we can work on the specifications. Yeah. And we can include something so that as a user, I can understand why I should find this valuable. Because what I have seen in other enterprises is that a lot of what we're doing is writing a function, and we're not really stringing a lot of things together yet. We're not at that level. And for a lot of companies, I don't see as much value in this. I'd like to know where that value prop is. Again, you're probably about right. It might be for those that have built large portions of their application functionality into functions. 
And then one of the benefits is having that peace of mind that, should they need to migrate whole components of their application, which would be functions, out of one system, they could be imported into the next one that supports ingesting those workflow definitions. Then it sounds like it might be good to have a target audience for what this type of issue is solving, to differentiate who might find value in it. Does that capture the concept you're talking about? It's reasonable that some companies would find value, but that'd be a different persona than my company, for example. And to Dennis's point, that might even reinforce the notion that we may do well to do something of a white paper, and then from there gauge interest as to whether or not a full-blown workflow definition spec should be sought after. Yeah, I think that makes a lot of sense. So I will give some workflow examples, so that people can see the usage scenarios, the application scenarios. Because I think when we really go down the serverless path, we're going to find that many serverless applications are actually not just a simple event triggering a simple function. They will involve multiple events and multiple functions. Yeah, that'd be helpful. Thank you. Okay. All right, great. And just to chime in on the user demand real quick: our company's project, the Serverless Framework, I'd argue kind of already has a very lightweight version of this. It's the serverless.yaml configuration file. It allows you to model out your application as functions and events. And it's not a workflow solution yet, but it's a start at that. And I will say that that project has had a lot of success. That whole idea of modeling out your app as a series of functions and events, I think, has made serverless application development accessible to a lot of people. So I think that there's a lot of value here for users. 
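For readers unfamiliar with it, the serverless.yaml file mentioned here typically has roughly the following shape: the application is modeled as a set of functions, each listing the events that trigger it. All values below are illustrative, not from any real project.

```yaml
# Illustrative serverless.yaml fragment: the app is modeled as
# functions plus the events that trigger them (values made up).
service: orders

provider:
  name: aws
  runtime: nodejs8.x

functions:
  createOrder:
    handler: handler.createOrder
    events:
      - http:
          path: orders
          method: post
```

As the speaker notes, this declares functions and their triggers but does not yet express ordering or branching between functions, which is what a workflow spec would add.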
And we've seen a lot of demand for wanting to do more on that front. And I think Step Functions from AWS is a good example of this as well. Yep. All right, cool. Thank you. With that, I think we should probably move on. I think we're going to go back and forth a lot once Kathy clarifies, or puts on paper, what we're going to be doing here. So all right, so cool. Thank you guys. Moving forward then to PR reviews. So the first one on the list, I actually don't want to review today. It's the NATS transport binding. I just wanted to bring this up because the NATS team could not make the call today and it was on the agenda. So I just want to bring this to people's attention: please review it when you get a chance. It's a fairly lightweight document, but we'd like to see if we could try to get that one reviewed and approved next week, unless there are some large issues with it, but it seems fairly straightforward. So just a reminder to please review that one when you get a chance. All right, next up, Clemens, your MQTT one. And just to remind people, we talked about this one last week, giving people one more week to review it since it seemed like the discussions had died down. Clemens, is there anything you want to mention on this one? I don't think it's changed this last week. I think the only open point, and that's something that I still want to go and correct, I just haven't had enough time yet, is that there's a change I think we should make across the specs for the properties. And I don't even know what that is now. Is that a change that you want to get into the PR before we merge it? No, that's a correction PR I want to go and do across a few documents. Okay. So I think this is good as is. And there is a, let me scroll a little bit further, maybe I'll spot it. Because I don't have that open right now. I'll find it. And it's a minor thing. 
Okay. Well, I have heard from other people that there are some other minor changes that people would like to get in there, and they're okay with doing them as follow-on PRs, like we talked about last week, so. Yeah. So it would be great if you could get that in, because it's a fairly meaty one, and then go and just start iterating over it. Right. All right. So with that, are there any discussion points around this people want to bring up? So which directory will this go into? The top-level one, as a sibling to the other files. Oh, okay. I have a question, Doug. Has anybody tested this yet? Nobody had tested the MQTT spec before it went in. I think that doesn't mean we shouldn't test our work. I wonder if there are any customers or any people that use MQTT who have validated that this rings true to them? The IBM people that I have talked to, or that have come to the calls, certainly think that it's useful. So Alex, I think that's a great question, and I think that might be a broader question for before we reach 1.0: which of our specifications do we feel comfortable taking to 1.0? Because I probably should not assume that all of them go to 1.0 at the same time. Some may require more reviews, or some may require implementations, before they go 1.0. I think that's an excellent question, but I don't think holding this one up to a higher bar than we did for the other ones would really be appropriate at this time, though. I don't think it's about, so maybe I'm not explaining it. I just mean having somebody use it, try to use it. I don't know if anybody's tried to use it as it's laid out right now. Since it's gonna be a spec, it might give some good feedback. Well, wasn't that part of the discussion that we had, that when we get to say a 0.9, we need to let it bake for some period of time, have people implement it and use CloudEvents across it, before we can officially call it a 1.0. 
So if there's no implementation of MQTT that people have agreed on, we could hold up being able to certify there's a 1.0 specification. The argument you're making is there can't be a draft unless there's an implementation, which is kind of difficult if there's no draft. It's labeled a working draft, though, right? That's exactly what it is. So what we're doing is we're just creating a clear hypothesis that hopefully people can go test. This is the draft, okay? Yeah, I missed the draft point. Yeah, everything we have right now is a draft. But Alex, you're making exactly the right point, which is we need people to actually... Yeah, I missed the point that it was a draft. Yep, yeah. It's just moving it out of a PR that no one's gonna find to something on our main page, so people can actually find it and then start implementing it. All right. With that, are there any other questions about it? I think if I can find the time, I'd be interested in trying this out. Yep, sounds great. All right. Any objections then to approving this one going forward, with the assumption that follow-on PRs are always welcome? All right, that's been approved then. Thank you guys very much. Hold on. Thank you very much. Nice work, Clemens. Yep. And everyone who collaborated on that, this is exciting. Yep. All right, next PR. I think it was two weeks ago we first talked about an update to the roadmap, and Kathy, I believe, wanted two full weeks to review that. So we're past the two-week milestone now. I've addressed any open questions and comments in there. Actually, no, there aren't any open ones; I addressed them all. One just didn't vanish because I didn't actually change the line of text. But are there any questions or comments on this? And keep in mind, as with any other document, we can always change the roadmap itself. This is just to provide us very high-level guideposts for our next set of work. Okay, no questions? 
All right, any objection then to adopting that? All right, cool. Justin, I don't believe Justin is on the call, but I will hold on a minute. I think this was just modifying what was some of Clemens' text, if I heard correctly. Let's just double check here. On this topic, Clemens, did John McCabe from OpenFaaS speak to you about the Azure Event Grid? It seemed like we were missing the content type of application/json. I can't recall right now. It seems like this might be the related issue here. I think what we observed was that with Azure Event Grid, we weren't getting content type application/json on some of the messages, either the handshake or the cloud event. Oh, yeah, the content type is, the content type that's defined here is actually for the data. And we didn't put the, yeah, we don't put that in there. In the map. The HTTP header. Well, the HTTP header is for, we use the HTTP header, and the HTTP header is for the overall payload. And then the content type inside of the payload is for the data element. And we omit that. We omit that because it's only significant if the data payload differs. If it's not, if it's inline JSON, then you don't need to go and provide it. That's interesting. So the assumption is, for the work group, everybody agrees that the payload is always JSON, and you only think it's something else if there's a header. This is how we implement this right now. I'm just telling you, right? So that might even be wrong. I'm not saying that's right, I'm just telling you what it is. But that's one particular implementation choice, right? Yes, so the implementation choice is that if the payload is already JSON, we don't declare the data payload any further, because then you don't need to decode it. You don't need to have a hint. You need to have a hint if the data field is a string that contains base64, then you kind of need to have a hint. Otherwise you don't.
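The implementation choice described here could be sketched roughly as follows. This is only an illustration of the rule discussed on the call, not the spec itself; the field names `eventType` and `contentType` follow an early draft and the helper name is made up:

```python
import base64
import json

def encode_event(data, content_type=None):
    """Build a JSON-encoded event envelope around `data` (illustrative sketch).

    If the data is itself JSON-representable, it travels inline and shares the
    envelope's encoding, so no inner content-type hint is added. If the data is
    binary, it is carried as a base64 string and the hint becomes necessary.
    """
    event = {"eventType": "com.example.test", "data": data}
    if isinstance(data, (bytes, bytearray)):
        # Cross-encoded payload: base64 it and record what it actually is.
        event["data"] = base64.b64encode(data).decode("ascii")
        event["contentType"] = content_type or "application/octet-stream"
    return json.dumps(event)

inline = encode_event({"temperature": 21})          # no contentType needed
binary = encode_event(b"\x00\x01\x02", "application/octet-stream")
```

The decoder then only needs the hint in the second case, which matches the point that the hint is significant only when the data's encoding differs from the envelope's.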
So I think though this PR is mainly just trying to add some text around the fact that not everybody follows the plus-JSON type of syntax, and this is allowing for others to be there. So Clemens, do you want to talk to this one at all, or do you think it's pretty self-explanatory? Clemens? Is this the issue that OpenWhisk had that you mentioned on Twitter about the plus-JSON media type? Yeah, yeah, yeah. Okay. If you know it's JSON, you must trust it. So I have a question. Here the data means the payload, right? No, I think it's fine to do that. Yeah, I like it. In which, is that in the JSON mapping spec? Yes, yes. Yeah, I think I agree with that. So does the data mean the payload here, or? Yeah, the content type is always about the data payload. So the point that's being made here is that if it's known to be a JSON type, but it doesn't follow, it's not either application/json and it doesn't use the plus-JSON suffix, but it's one of these other types, then you should still, if it's known to be JSON, you should still treat it as JSON. That's the point. And I agree with that point. Any other questions or comments on that? As I said, I don't think it really changes much. It just makes it clear for some people that don't follow the specifications, the RFCs that we list in here. I might make it clear that we're just not mentioning JSON because you don't have to specify it when you're not cross-encoding. So for example, if we defined an HTTP XML envelope, then you obviously would need to state it when we flip to JSON. It's not that JSON is special, it's that having the same encoding for envelope and data is special. That's correct. So it's a default. That seems like a separate issue though, isn't it? Isn't that almost saying if content type is missing? Hold on a minute, is content type required or optional? I think it's optional.
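The "treat it as JSON even without the +json suffix" point could look something like this in code. The extra allow-list entry is an illustrative assumption about the kind of non-conforming media types the PR is accommodating:

```python
def is_json_media_type(media_type):
    """Sketch of the rule discussed: application/json and any type using the
    +json structured-suffix convention are JSON, but some producers use other
    names for JSON content, so a receiver may still want to treat those as
    JSON when they are known to carry it."""
    # Strip parameters like "; charset=utf-8" and normalize case.
    mt = media_type.split(";")[0].strip().lower()
    if mt == "application/json" or mt.endswith("+json"):
        return True
    # Known-JSON types that don't follow the RFC conventions (assumed list).
    known_json = {"text/json"}
    return mt in known_json
```

So a receiver consulting this check would still JSON-decode a payload labeled with one of the non-standard types, which is the behavior the PR's text allows for.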
Let's just double check, because if it's optional, then I agree, we probably need to clarify that, but if it's required, then I don't think what you guys are saying is necessarily accurate. Yeah, it's optional. It's optional. Okay, so then adding clarifying text that says when it's not there, it's assumed to be the same content type as the envelope itself, that sounds like a separate issue. That would be ideal, if we run into that when trying to integrate with this. Yeah. So would someone like to take the action item to open up a PR to clarify that? I mean, I opened my mouth, so I can do it. Excellent, gotcha. Okay, thank you Thomas. Okay, so then back to this PR itself. Does the text in here look sufficient or look appropriate to people? Okay, any other questions or comments then on this one? Any objection to adopting it? All right, not hearing anything. Drew, thank you guys very much. Oops, there we go. All right, Thomas, are you ready to talk about your source label PR? Sure, I can try to page fault that into memory. I think it sounds like we're in many ways converging. Like, my intention was for these labels to be the same as what I think we're calling correlation context. I was just in some sense strongly hinting that since a lot of CNCF software uses the word label, that that was my default stance. But I think in terms of how we wanna use it, it sounds like we're consolidating. Okay, so does that mean that once Kathy's PR for her other bag-of-stuff thing lands, it will subsume this PR? Or Kathy, would you want me to just change the word labels to correlation context? That's fine, I think, okay, I think correlation context, that's good for me, yeah. Hang on a second, one of those sounds like plain English that's super easy to understand and the other one sounds like jargon. Right, why are we diverging from labels? Because, well, there is an existing header that has meaning for this, called correlation context.
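The clarification Thomas took the action item for ("when the attribute is absent, assume the envelope's content type") is a one-liner to express. This is a sketch of the proposed defaulting rule, not spec text; the `contentType` field name follows the draft under discussion:

```python
def effective_data_content_type(envelope_content_type, event):
    """Proposed defaulting rule from the call: if the event carries no
    contentType attribute for its data, the data is assumed to share the
    encoding of the envelope (e.g. the HTTP Content-Type header)."""
    return event.get("contentType", envelope_content_type)
```

For example, a JSON envelope with no inner hint yields `application/json` for the data, while an explicit inner `contentType` always wins.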
And my first stance is, I would suggest this should be projected in HTTP as the correlation context header. So just to clarify again, at the face to face, we talked about two types of labels: one which are source-tagged and couldn't be modified, the other ones that are sort of, let's call it transport- or routing-tagged and could be added along the way. So are we going to specify both? And I assume that this one is the first. Honestly, unless someone twists my arm, I would just say that we use what correlation context has been converging on, which is that each hop can modify this. I'm a little confused, because on yesterday's phone call there was a lot of discussion about different types of attributes, labels, correlation IDs, whatever you wanna call them. And we weren't necessarily going to define the exact meanings of these various things. All we knew was there was gonna be a bag to put stuff. And sometimes it may have been for correlation, sometimes it may be for source identification. We weren't necessarily gonna get into defining what it meant. We're just gonna put a bag someplace. Calling something correlation ID or correlation-whatever sounds an awful lot like we're putting semantics around these things and not just creating a bag. So I'm a little confused. The correlation ID, I see request ID and correlation context as the two bits used in this space by people, is that right? It's trace state. The meaning of this thing is very wide. It's like ASP.NET. Oh, in distributed tracing, it's trace parent and correlation context. I think probably, you know, Thomas, how about this: I'm going to write the PR to reflect what we discussed. I think there are two parts. One is, you know, we're going to put in the key-value pairs that the sender could specify. We can call it property bag or we can call it correlation context. I guess that keyword, either one, I'm fine.
And then on top of that, you know, the intermediate gateway or routers can also put an additional property bag or context bag on top of the original sender's property bag. Are they using the same namespace or a second namespace? You mean the namespace? Like, do the original sender and middleware have different property bags that they fill? Yeah, so we discussed, if the middleware has an additional key-value pair they would like, you know, they would like to put that on, add on top of that. But if it's existing, you know, like if that key-value pair is already in the original sender's context bag or property bag, then just modify it there. That part, we have not reached quite a consensus, but I'm going to just put it out like that and then people can comment. This is written in Slack, I just wrote a wall of text today. Doug, if you could switch to that, we could at least have something. Like, what we talked about at the meeting yesterday wasn't written anywhere, and I wrote that example and it might help. It's in Slack in the cloud events channel. Yeah, I'm not sure I can share my Slack right now, unfortunately. This is another reason why we should try to test some of this work, because we're trying to make assumptions about how the labels are going to be used. We could at least specify maybe a few examples, even if they're not fully tested in implementation. Well, it also sounds to me as though Thomas and Kathy are at least on the same page relative to merging their work together, and it's more a question of the shape of the bag, or whether it's one bag or two bags, and that kind of stuff. So I'm wondering whether it makes sense to wait for Kathy's PR. Then we can start hashing over that. I also have another central concern about the labels, having gotten slightly stung with trying to put characters into Kubernetes labels. They have quite a strict thing about what you can add there; you can't have spaces.
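The single-namespace behavior Kathy describes (middleware adds its pairs on top of the sender's bag, overwriting any keys that already exist) could be sketched like this. To be clear, the call notes this has not reached consensus, so this is just one candidate shape:

```python
def merge_context(sender_bag, middleware_bag):
    """One candidate semantics from the discussion: a single namespace where
    each intermediary layers its key-value pairs over the sender's bag.
    Keys already present in the sender's bag are modified in place rather
    than duplicated in a second bag."""
    merged = dict(sender_bag)      # start from the original sender's pairs
    merged.update(middleware_bag)  # intermediary pairs win on conflict
    return merged
```

The alternative on the table, two separate bags with distinct namespaces, would instead keep `sender_bag` and `middleware_bag` side by side and leave conflict resolution to the consumer.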
I guess we should, if we do have these in Kubernetes, they need to be in annotations instead; they've got a freer specification. What does that regex allow for? Does it allow spaces? I tried to copy the Kubernetes label regex, actually, because I expect... Because the example given by, oh, sorry. The assumption was that Kubernetes is growing; CNCF and Kubernetes already have a precedent for routing using label selectors. And so that's what I was planning to end up building, actually, was a label-selector sort of thing that would use this field, and be native to Kubernetes. Yeah, I see something in the labels provided by, I think, Vlad in that Slack channel. I didn't know that channel existed, actually. And he's got some arrays within the labels. And there's also spaces within the values. I don't think we could represent that in Kubernetes labels. Yeah, but I don't necessarily think you should look at that as, let's say, a concrete proposal quite yet. I'd rather wait for Kathy's PR and then we can go through those lower-level details, because I don't think the exact character set is critical at this point. That's not the highest sort of bit, to put it that way. I'd like to know if someone can ping me when there's a PR. I think what I want to try again to understand is, is there a consensus that there's gonna be one set of labels, just one for source and routing, or is there no consensus on that? There's no consensus yet. So how about we do this? Because in the last meeting, we did not really get the chance to discuss everything and reach full consensus. How about I do the poll for another meeting? I think that's what we said at the end of the meeting. And then, you know, we join that meeting and then we reach full consensus. And then I'm going to write the PR. How about that? I would agree. And Kathy, I would suggest that you try to get the meeting as early as possible next week, because you may need more than one meeting. Yeah, okay, yeah.
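For reference on the "strict thing" being discussed: Kubernetes label values are limited to 63 characters, must begin and end with an alphanumeric character, allow only `-`, `_`, and `.` in between, and notably permit no spaces or arrays, which is exactly why the Slack example couldn't be represented. A quick check mirroring the Kubernetes validation regex:

```python
import re

# Mirrors the Kubernetes label-value validation rule: empty, or up to 63
# characters, alphanumeric at both ends, with -, _, . allowed in between.
K8S_LABEL_VALUE = re.compile(r"^(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?$")

def is_valid_label_value(value):
    """True if `value` would be accepted as a Kubernetes label value."""
    return len(value) <= 63 and bool(K8S_LABEL_VALUE.match(value))
```

Annotations, by contrast, allow arbitrary string values, which is why they come up as the freer alternative.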
So Thomas, are you going to join this time? I hope you can join because you have this PR. What time is the meeting? I'm going to do a doodle poll, so give several time slots and then, you know, select the time slot that most people can make. Okay, I will fill out my availability. Okay, yeah. So I'm gonna send the poll link to the work group email. Okay. Thank you. All right, cool. So watch for another doodle poll to come out. Thank you guys very much. We have five minutes left. Tell you what, I don't think we can just deep dive into anything, but let me do this. Clemens has a PR out there for the AMQP transport binding, a type mapping thing. I think the PR has been out there for a while. I don't believe there have been any major controversial points raised. Clemens, is there any reason for us not to shoot for review and approval next week, or are there still outstanding large items on this one? I don't see anything in here yet. There are no large items in this. I did have a question there, which wasn't addressed, I think, around the similarities between the HTTP and AMQP. Where is it? Oh, there we go. So can you address these comments, Clemens, at some point before next week? Yes, yes, yes. So I thought that was clear. The AMQP binding exists solely so that you can, so the AMQP type mapping is actually not as complete as the JSON type mapping, because the AMQP is kind of tied to the protocol. But I made the type mapping such that you can go and map the attributes of cloud events into the header section, or actually the property section, of the AMQP message, so that you can do the binary mode, specifically for that purpose. What I didn't do is I didn't specify out the entire type mapping as I do with JSON, like with the self-contained format. While you can do this with the AMQP protocol, you can express message bodies in the AMQP type encoding.
It's something that we in practice in the community don't really encourage, because the AMQP encoders are typically tied into the messaging stacks, which means if you want to go and route the messages further, then you have to go and unpack them. That's not true for JSON; you can go and take a JSON message and forward it on and then, down the line, somewhere decode it. And that's not really possible with the reality of the AMQP stacks. That's why I kind of made the AMQP data format only for the properties. My point was a different one, though. It was that, like, in HTTP binary mode you have like CE-underscore-something, and in AMQP it was sort of something else. And then... I'm going to make those... That's the thing that I meant earlier. I want to make those things the same and just make the prefixes the same. Sure, yeah. So that wasn't clear for me, if you could just clarify. Yeah, I'm going to... That's something that I still want to go and do, to make that the same. I just don't have the clean solution to this yet, but I'll pick a common prefix or a model. I had a discussion with Doug about this before the meeting, about something that's kind of similar. So we'll have to find a good way to do this, but I don't have a good idea just yet. Yeah, we just don't want to have transport-specific logic for... Yes, I agree. Yep, okay. Excellent, but is that a change that you're going to make to this PR, or is that a follow-on PR? I get the sense it's a follow-on PR. That's a follow-on PR to all the mappings. That's what I thought. Okay. And it's probably going to follow, since the HTTP mapping is already kind of done, it's probably going to follow that way. Okay, can you do me a favor though? Just put a comment, or even a response to Arun's comment here, saying what you're planning on doing, meaning the follow-on PR, to address not just AMQP but all of the transports. Yeah, I will do that. Excellent, thank you.
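The binary-mode mapping Clemens describes, event attributes projected into the AMQP application-properties section under a common prefix while the data travels as the message body, could be sketched like this. The `cloudEvents:` prefix here is an assumption; the call explicitly notes that the common prefix across transports was still to be decided:

```python
PREFIX = "cloudEvents:"  # assumed prefix; the actual choice was still open

def to_application_properties(event):
    """Sketch of AMQP binary mode: every attribute except `data` becomes a
    prefixed application property, and the data becomes the message body.
    An intermediary can then route on the properties without unpacking
    the body."""
    props = {PREFIX + key: value for key, value in event.items() if key != "data"}
    body = event.get("data")
    return props, body
```

This also illustrates why only the properties needed a type mapping: the body stays in whatever encoding the data already has, so the AMQP binding never has to fully re-encode it the way the self-contained JSON format does.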
All right, so please, everybody, take a chance to review this PR with the hopes of merging in this, again, draft specification, so that we can get it in there for people to review and start implementing. Hopefully we'll get that done next week. And with that, I think we're pretty much done. Let me just quickly go through the agenda. I mean, not the agenda, the list of attendees. Joe Sherman, you there? Yes, I am. All right, Stanley, you there? Yep. Glenn Block. Yep. Evan, I don't know your last name, unfortunately. Anderson? Anderson, thank you. And what about David Baldwin? Here. Thank you. Is there anybody I missed on the list of attendees? Mark, I know you're there, I gotcha. And Alex, anybody else I missed? I think you've got me on there a bit higher up, yeah. Yeah, I gotcha, yep. All right, anybody else that did not get on the attendee list? All right, great, thank you. Final reminder, please add your name to the serverless working group Google Doc for the face to face, so we know who's gonna be coming and whether we're gonna have quorum or not. As of now we've got what, five people? 15th of June, this is the last day of DockerCon. Do you know what time you're looking at for that at the moment? I'm assuming it's an all-day face to face, to be honest. An all-day face to face conflicts with that event. Yeah, but DockerCon is just a repeat that day. I thought it might be an hour or two hours or something like that. So let's, okay, we got a minute or so, let's talk about this. What were people thinking? I was assuming all day, but maybe I'm wrong. What were other people thinking? Yeah, I think we need all day, I think so. Yeah, I think it's too much. There will be people that have commitments at DockerCon that can't be there all day.
Right, so they may either not show up at all or make part of it. Put your name down in the list of planning-to-attend people, but then also note whether you can make it for all day or just part of the day; that'll help us decide whether we have quorum. Yeah, maybe we should have said that after we have an agenda, right? We don't have an agenda yet. Not yet, that's true. Other than open PRs and open issues, yeah. I'm gonna fly in just for that, so if it's all day, I would prefer if we would maximize the time. Yep. Hey Doug, it's Michael Payne here. I'm not a voting member, but I'd like to attend. Is that okay? Can I put my name down? Of course, of course, of course, of course. This is for everybody. Where's the doc? It's listed in the agenda. I'll make it more clear. I'll put it someplace else, but right here there's a signup doc. I'll add it to the top of the agenda doc someplace. All right. All right, so cool. Thank you guys very much. We'll talk again next week. Bye-bye. Thank you everyone. Bye. Bye-bye everyone. Thanks Doug. Yep. Bye.