Hey, glad you're there. Unmute if you are. Hey Doug. Good morning. How's it going? Fine. I got a new audio setup and I'm trying to see how this performs, and I'm clicking the wrong buttons. That's okay. Do-do-do. I can't remember. Where are you located? Bucharest, Romania. Oh, that's right. Yeah. Hey David. Hey, how's it going? How's it going? Well, that is good. How's your day going, Doug? No, it's okay. Looking forward to the weekend though. You're not in California with the fires, are you? No, I'm on the East Coast, North Carolina. Yeah, actually it's cool and awful out here, so it's nice. As somebody that's been living with the AC turned on to max, I am jealous. Every now and then, over the last week or so, it's been really hot and humid, to the point where all the windows in my house start fogging up, which is a really weird experience. Hey, Tommy. A little bit hot. Small group today so far. Hey Christian. Morning, Doug. Hey, Eric. Hi, Doug. How's it going? Pretty decent morning. How are you? Pretty good. Good. Actual sunlight. You must be in California. Yes. Yes. It felt like, I don't know, it felt like it was sundown the entirety of yesterday. Are you in Northern California or Southern California? Northern. Northern, okay. Yeah. Interesting. Hey, Timmer. Hey, Doug. How are you? Good. Hello, Slinky. Hello, and Mr. Mark. Hi, Doug. Hello, Klaus. Hi, Doug. Hello, Anish. Hey, Doug. Hello, and Scott Thomas. Hello. Did I spell your name right? Perfect. There we go. Yeah, it's a miracle. Brian, are you there? Hello. Hello, Kristoff. Hi. Hello, and, well, are you there? Yes, hello. Hello, and Lance. Hello. Alright, I think I have everybody so far. I was actually hoping Clemens would show up, because apparently he was on the Protobuf call and I was hoping to get an update from him. Let me ping him. Daniel, are you there? Hello. Hello. I'm Ed, I'm here. Hello. Alright, one more minute until we get started.
I actually have a very short agenda today. Alright, let's go ahead and get this thing started. Alright, community time. Anything from the community people want to bring up? I actually have a question. This might have come up earlier already and I might have missed it. What about WebSocket support? Any thoughts from anyone on the call, or from you, Doug? I'm trying to remember if this has come up before. I want to say it has, but for the life of me I can't recall what we said about it. Does anybody remember, or does anybody have any comments? Yes. The last time we talked about it, there was an agreement to work on it. It was me, Scott, and Lance, and who else? If you scroll down, you should see it in the previous meeting minutes. Oh, that's right. We did it, didn't we? Where is it? I think it was two weeks ago, maybe. Oh, okay. I might have missed that. Now that you mention it, I do remember something about that. Oh, there we go. This one, this one. Perfect. Is there any update on that in terms of status, or is it still just sort of in the backlog? I think it's still in the backlog. Okay. Whoops. Alright, cool. Thank you, Thomas, for the reminder. Actually, I should add that as a reminder. I didn't know it was a reminder, though. No, no, this is good. We should add that to the proposal for the reminders. Scott, Clemens, and Lance. Cool. I like reminders. Alright, anything else from community time? Anish. Can I get a reference to this backlog? I don't know. I would like to have a look at that if you don't mind. All we have is, let's see, what day was that. This was on the, come on. On the 13th of last month. We just have this in the notes. We should make an issue. That would be nice. Slinky, you want to make an issue so everybody can comment on it? Sure. Thank you, sir. I appreciate that. Cool. Thank you. Alright, anything else? Alright, moving forward. Not too surprising, no one's volunteered yet for KubeCon North America.
Clemens, since I see you joined now, what do you think about both of us signing up? And we just send in the same video we did for the previous ones. I think that's a brilliant idea. I like it. Okay. But that means that we're both then signed up to answer questions live. You okay with that? What day is that? Oh, November 17 to 20. Yeah. Yeah, that seems like... I'm on vacation on those days. That's important. Yeah. Okay. So if it doesn't work for either one of you and you need somebody to fill in, you can add my name. This is David. I'm going to need some catch-up beforehand though, so you can leave me as tentative, or if you need me to back up, that's fine. Okay, cool. Thank you. I kind of assumed if for some reason there was a conflict, I'm sure somebody else could fill in. Right. I remember you guys were talking about, from Europe, that you had nobody on. Yeah. Yes. It became very easy for us. Thanks. All right. Cool. Thank you. Let me make a note here. So I will send in the thing now. Okay. Timer, since you're on, did you get an invite or a questionnaire that you're supposed to fill out since you're a sandbox project, or do you need to steal the serverless working group's slot? No, I did not. But I don't know, I don't want to steal anything. Well, to be honest, we get two, right? One for CloudEvents, one for serverless. Obviously for the CloudEvents one we can talk about everything we're doing here; we already had the video for that. Unless someone can think of a specific topic to cover in the serverless working group slot, I'm inclined to let you guys steal it for your workflow stuff. Sure. Yeah, that would be great. Yeah. If that's okay with you guys, of course. Yeah. Does anybody else have a topic that they think should be brought up for the serverless working group session? Okay. So when we plan on you stealing that time, we'll talk offline about getting an abstract or something put together.
I'm assuming we can just copy what we did from the previous one. But yeah, we'll talk offline about that. Sounds good. Thank you. Perfect. All right. I think that's good enough. All right, SDK working group. I think we did have a call last week, but can anybody remember anything worth mentioning for the rest of the group? Okay. I can't remember. Yeah, actually I guess I should click on the meeting minutes and see. Oh, yeah. It was just the announcement from Slinky, right? So that was it. CloudEvents SDK 2.0, master's on 2.0 out there. All right. We do have a call technically right after this one to talk about discovery, and then we'll see what happens. All right. Timer, anything you want to bring up from the workflow stuff? It's been a quiet week. Well, I think on Monday in our meeting we just voted on a logo proposed by CNCF, and now there's going to be enough work to update everything to use it. And that's it. Thank you. All right. Any questions for Timer? All right. Before we jump into the PRs and issues, any topics people want to bring up that I should have added to the agenda and forgot? Okay. In that case, let's jump into these two PRs. Now, technically these were opened up four days ago, so we can't technically vote on them, but I'm inclined to let people have more time to review them anyway; I think they need a little more thinking. But I did want to at least introduce them here on this call. This one is about trying to fill out some more details around our REST API. Most of it is around making it clear... rather than talking about a generic API section, I specifically made it all about REST, basically, in HTTP. I figured we can add other protocols later on, but I wanted to get the REST one out there. And whether we pull this out into a separate spec or keep it here, we can discuss later.
We can discuss later. But I wanted to fill out specific details. So for example, make it clear that this is doing a get on the services. Talked about. Where is it? In particular for the get for the API. I want to then talk about all the specific return codes that we specifically call out. Now that doesn't mean that people can't. Use other ACE response codes if they're appropriate. But these are the ones that the spec. Sort of mandates. Based on particular situations. Okay. I left it open to do because I figured that's actually a little bit bigger is how do we want to handle errors, right? We wanted to find a standardized response. Jason for errors and stuff like that. We probably do. But I didn't want to overload this PR too much. So I thought that as a to do for later on. The biggest thing that I want people to think about as they read this is. Especially on the puts. Where is it? Yeah, I guess. Okay. So for here. I did have the notion of doing a create. Both to the services end point. And to an endpoint that actually has. Where is it? Where is it? Actually has an ID in it. Okay. This one is, is. Oh, that's a. Delete. I apologize. Scroll too much. Okay. Or you can do a put to a specific. ID. This one is more focused on the import case. Where you don't want to create a new ID. On a fly. Okay. However, in both cases. I do talk about returning a two or two accepted, meaning. The, the endpoint wants to do an asynchronous update or create. And they can't return something quickly enough before you can start getting. So I had this whole big. Rambling stuff in here about how to handle asynchronous updates in particular. It gets really interesting when you're worrying about cases where it fails. Right. Because you have to have a way for them for the client to be able to query some endpoint to get the failure status. So take a look at that in there. Because I do have a whole. Bunch of semantics around how to process those. 
I don't want to bore you guys on the call here right now, but please take a look at that and see what you think. I tried not to make it too funky. Let me go and pause there for a sec, since Scott, you raised your hand. Yeah, I'm a bit confused. I'll definitely go and review the PR, but why would we put CRUD on the discovery API? That's an excellent question. I assumed that when we start talking about supporting the aggregation scenarios, we'd want to have some standard way to share the metadata between the discovery endpoints. I was assuming that discovery aggregation would be poll only. No push. Interesting. Yeah. I was assuming potentially both. Because if you only support poll, then that resource PR that you made to get the resource version makes a lot of sense. For simplicity, I wonder if we go first version with no CRUD, just poll. I thought about that. Think about it. What do other people think? No comments at all. I don't know; it could mean you love Scott's idea. It could mean you hate it. I don't know. I'm generally sympathetic to Scott's ideas. But what do we lose? I don't know. I just have this weird feeling that there are going to be cases where people say, you know what, I need to upload something into this. I don't have another endpoint for you to pull; I just need to upload some data. I need to update some services, you know, on some endpoint, or new services. So what if there is nothing for you to pull? I wonder if it's more of a case where we could support an endpoint that says, yo, if you're an aggregator, here is my endpoint to pull from. Sure. But what if, okay, what if he's behind a firewall? Or what about, say, an aggregation case? What if it's just, I just need to populate the discovery endpoint, and I would like to do it in an interoperable way so that I can populate lots of discovery endpoints. Like an IoT case or something. Yeah.
Does that have to be part of this API? Because populating the discovery API is not necessarily the discovery API, or even what clients are concerned about. True. I wouldn't have an objection to splitting it out to a separate spec. That is true. My first hot take here is, if we support CRUD operations on the discovery endpoint, we also need to support some sort of identity, to understand if the entity pushing discovery data into some discovery endpoint is actually an authority on what it's pushing. Well, the entire discussion about security, whether you do push or pull, needs to be talked about at some point. Or at least in the poll case you understand who you're trying to connect to; you don't understand who's connecting to you if you're accepting pushes. True. Yeah. And to be honest, my assumption was that there would need to be some sort of security mechanism in place. And whether we talk about that, or say, hey, it's up to you to decide because there are lots of different mechanisms out there, that's up for you to decide. I just kind of figured we'd be talking about that later, to be honest. In the IoT case, would it be bad if we gave a discovery API and each IoT thing had to implement their own push protocol? Each their own protocol is not what I'm hearing here. So it would be bad? Okay. Yes, if that's what that means, yes. So Klaus, your hand is up. Yes. So I could imagine that not in all cases would you like to have these CRUD parts implemented, because in some cases you might have a very special way: you have already existing content, and you just translate it and make it discoverable the standard way. The way you get to this content might be proprietary, for example. Yeah. So actually, I guess what I should do is, somewhere up here... yeah, this sentence right here, I think, needs to be changed, given what you guys are saying, right? Because not everybody may support all the CRUD API.
So that sentence probably needs to go. I also have to review it in detail. Yeah. I would really distinguish between the... I mean, you can map them all into one namespace, I would say, but I think of those as distinct interfaces. There's a management interface, and then there's a discovery interface. And I'm not sure that they are sitting in the same path segments. Just to poke on that a little, just from the simplicity point of view, why would you not? Because I believe that those are also different security realms. But can't you secure PUT and POST versus GET differently? Yes, you can. But it's simpler to secure things at the interface level rather than at the method level. Okay. We can talk about that. I don't have a huge objection; I was just going to poke on it a little. Okay. All right. Well, as I said, I don't think it's going to be a dramatic change. It's just that there are some more headlines, and probably it's not all under services. Right. Like, it might be, you know, slash management slash services or something like that. Yeah, that's not a big deal. That's fine. We can talk about that. Okay. So as I said, I didn't want to try to force this through. I think people need to look at this fairly carefully, so please do get a chance to review it. As I said, I think the async stuff is necessary, even though it does feel a little bit complicated, but I tried to keep it as simple as possible. But please, you know, look at that kind of carefully. I was just looking... there is another trick, and I don't remember talking about this trick in this forum; I forget what the context was.
And that is, if you give a 302 or 303 instead of the 202, we can return a Retry-After and basically tell the client when to come back. So basically, someone issues a request to you, and then you basically say, yeah, you have come to the right place, 302, but I just can't do that yet. So you give them the Location header, and with the Retry-After you tell them, come back in five seconds. And then they do get to the Location header, right? Yes, they come through the Location header, and many HTTP clients actually resolve this under the covers, so they will go and sit there and wait, and they will re-request, and then they can pick up the result automatically without you having to program for it. Interesting. So it sounds like it's almost the exact same thing as what I do here, except with a 302 instead of a 202. Yeah, and it's either a 302 or 303, I forget which one it is, and I'm bad at reading along right now. Okay. I'll take a look at that. Oh, 302 or 307. Oh yeah, 307, it's 307. So you can do a 307 with a Location header and a Retry-After, and that mechanism is supposed to be picked up by the client, and the client then does the request again to that location, but after waiting the Retry-After period. And that can then be used for asynchronous processing. Of course, this kind of semi breaks down if you were doing a giant submission of a document, but for simple requests, that's something that is doable. Oh, that's interesting. So with the 302, it doesn't do a GET to that URL; it does another PUT or POST? Yes. And there's another detail there, in that you can basically preempt the client from sending the entire request to you by coming back with... this is the 100 Continue thing that is also in HTTP.
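To make the redirect trick concrete, here is a rough sketch of the client-side behavior being described, using a hypothetical `send` transport function: on a 307 the client waits out the Retry-After header, then re-issues the same request (307 Temporary Redirect, unlike 303, preserves the method and body) against the Location URL. The response shape is a placeholder, not a real HTTP library API.

```python
import time

def request_with_redirect(send, method, url, body=None, max_hops=5, sleep=time.sleep):
    """Sketch of the 307 + Location + Retry-After pattern discussed above.

    send: hypothetical transport callable returning {"code": int, "headers": {...}}.
    """
    for _ in range(max_hops):
        resp = send(method, url, body)
        if resp["code"] != 307:
            return resp                                      # terminal response, done
        sleep(int(resp["headers"].get("Retry-After", "0")))  # wait as told by the server
        url = resp["headers"]["Location"]                    # retry the SAME method at the new URL
    raise RuntimeError("too many redirects")
```

As noted on the call, some real HTTP clients resolve this under the covers, so whether you need this loop at all depends on how compliant the client library is; that is exactly the "give it to an intern to validate" question.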
So there are all kinds of nuances, which the browsers use, where you can trick the client into some behavior that then in the end becomes asynchronous. So that's extra trickery; if we want to do trickery, we can use the HTTP trickery. I like the fact that you keep using the word trickery, because that's what it feels like to me, and I'm trying to figure out whether that's really, really cool or really, really scary. Yeah, it is, but there are built-in capabilities that HTTP clients must implement by the rules of HTTP. So that's where you're kind of starting to tread into territory where you are now stepping into those mechanisms. Yeah. Okay. Something for me to think about. Thank you. This would also be something, the 307 trick with Retry-After: if anybody still has some interns, this would be a good thing to go and validate. Okay, cool. I know that it works with some clients, but I don't know what the overall compliance of clients is. Like, I don't know whether Go's client has that, or, I mean, that's Google, so it needs to be right, but who knows. Yeah. Okay. Well, cool. Thank you, Clemens. Anybody else have any high-level comments? Otherwise, I'll let you folks read this offline. Okay. And thank you for the comments you guys gave already. Appreciate that. Next one, the PR. Okay. So this one was based upon last week's discussion, where we were talking about needing a version attribute. And to be clear, this version attribute is the version of the discovery endpoint metadata for this particular service. It is not related to the version of the service itself. Okay. And as I was writing this up, I believe originally I wrote the version field, and I called it disc version, or discovery version, for just a couple of reasons.
One is, I didn't just call it version because I didn't want people to get confused about what it's the version of, right? So for example, we have spec version down here, meaning it's the CloudEvents version. This version here makes it perfectly clear, at least to me, that this is the discovery version. If you don't like the word disc, we can call it discovery if you want. But anyway, the point here is it starts at one, and on each update it gets incremented by one. So it just continually goes up; relatively straightforward. Now, after I wrote that though, I don't know why, but I started thinking about it and realizing, you know, it might be really nice to know when the resource itself was actually updated. And so I started contemplating whether I should open a PR to have a timestamp, you know, an updated timestamp in there. And that's when I realized, well, wait a minute, we can use the updated timestamp to convey the exact same information. As long as the timestamp continually goes up each time, and it gets updated every single time the resource is changed, then the exact value of the discovery version technically doesn't matter. All that matters is that it changes, and you can do a simple string compare or some sort of compare to say, okay, one's bigger than the other; therefore the bigger one is always newer, right? So that's what I thought: maybe we can do double duty and have just one field called updated, and you can use the timestamp for that comparison check as well as to get other information, meaning when it was actually updated, in case that's useful to you. So I put both in this PR. Obviously I don't think we should accept both, although I guess we could, but I was wondering if people had a preference for one way or the other, or just wanted to sort of open it up there for discussion. I've seen clients come with really weird clock problems.
Like clocks that are a month off. So I don't remember if I put this in there or not, but I could have sworn that I put something in that said the time always has to go up. Oh, there's a typo there. And I thought about it this morning, that I need to add some prose here that says, exactly because of the clock-skew problem, the server side is required to guarantee that every single time it gets updated, the time value always goes up. Because it is technically possible that if you had this thing, you know, sharded out across different back-end servers, one server is really, really slow or really, really behind for some reason, and that's the one who's going to do an update. So it's possible it could try to update with an older time by taking its current time. But I think we need to add logic that says, no, when you do the update, you need to make sure that it's always incrementing, even if you have to lie about your current time. Right. So I think we need to add that logic in there. And so in cases where you're doing some sort of import action... No, actually on an import action, you should take whatever value is given to you. Period. So I think where you're getting to is that this is not a version; this is an epoch. Yes. Well, it's an incremental version that has no semantic versioning, except that you can compare it: the more recent one is a bigger number. How you come to that number is up to you. Yeah. I think that's actually true for both of them, and from that perspective, yes. Yeah, we use the epoch term for this in a few places, where it's really an incrementing number. And once you increment the number, then it's effectively the next version. And by how much you increment the number is not important; it just needs to be higher.
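The clock-skew rule being proposed can be captured in one line: a lagging server must never write a smaller value, even if its wall clock is behind the stored one. A minimal sketch, with `next_epoch` as a hypothetical helper name, not anything from the PR:

```python
import time

def next_epoch(current_epoch, now=None):
    """Compute the next stored value for a normal update.

    Taking max(current + 1, now) guarantees the value strictly increases
    even when this server's clock is behind the stored value
    (the "lie about your current time" case from the discussion).
    """
    now = int(time.time()) if now is None else now
    return max(current_epoch + 1, now)
```

When clocks are healthy the wall clock dominates and the value stays a meaningful timestamp; when a server is behind, the `current_epoch + 1` branch kicks in and monotonicity is preserved at the cost of timestamp accuracy.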
Now, when you say that, Clemens, are you talking about the discovery version, or the updated field, or both? I think you're proposing to merge the two. Well, yeah, I'm leaning more towards getting rid of discovery version and just having updated, and having it serve both purposes, but I don't know whether that's being too tricky. It seems weird to have updated, which is usually an informational field, now play double duty. It just sits there innocently, being called updated, and then being overloaded with semantics, where the unsuspecting might not even care to read into the spec. That seems a little dangerous. And that's only because updated is so obvious. It seems to be so obvious what that is that nobody might care to go and read the particular line and then find out, oh, this is actually a version. Right. If you call this epoch, then it's weird enough for people to understand: okay, so this is where I need to go and put the value for when this was updated, but it also has some extra meaning. So this is where the term epoch helps. We specifically used the term epoch because we wanted a term that is... I want to pick a term that's a little weird, so that someone would actually be forced to go into the docs to figure it out. What do other people think? Is the intention that this discovery API... sorry, is this update time based on when this particular API updated this record, or when the producer, the originator of the record, changed it? Say that one more time, sorry. Right. So let's take the aggregation case. There's A, B, and C aggregating, and we're looking at C, and A changes it. Does that field get updated at A and B and C? Because the intention was that if A adds a record and then sets the epoch version or whatever, that should be the same version that propagates throughout the chain.
But you might want to know when C actually updated the record based on A's values. So I could see a case where updated would be the time that this particular endpoint saw this particular update, and the epoch would be when this record was generated. The original intention was that the updated value would be more like epoch, in the sense that if you're doing an import, you import the updated timestamp as well. It is not when this particular instance of this particular discovery endpoint saw an update. I would consider these records immutable, and it's actually created: this particular thing, at this particular epoch, has been created at this particular time. So it sounds like you're saying we actually may want to do both: have some sort of updated field that is specific to this particular discovery endpoint, and have some other field, whether it's disc version or epoch, to represent the version number. Yeah, version number thingy. I care less about the update time, to be honest. Okay. So let me ask you this, Scott: that might make sense, but if we just went with epoch, and did not call it updated, is it clear to you that it's meant to be static, or non-changing; it's meant to be sort of an imported value? Or even if it's called epoch, do you think it should be specific to this particular discovery endpoint? When I see epoch in an API, I consider that kind of the birth date of this record. So Clemens, since you're the one that mentioned it, is epoch meant to be the creation time, or to be also used as the updated time? It is the update time. It is effectively a version counter. Yeah. I meant created in the sense of immutable records. So, you know, each update is a new creation, a new version of that thing. Yeah. It's a new epoch. It's a new epoch of validity for this.
The fact that it's updated doesn't really mean anything to me, because I might have missed updates in the middle. So I don't actually care about updates at all. I just care about what's the current truth that I've seen. Okay. So you're looking at the current epoch of this record; that's the notion. Yeah. But I think what's interesting though, Scott, is it seems to me, whether you actually are updating a record or you're creating a new record that's another version of this discovery service, that almost feels like an implementation choice. Right. I think what you're wanting, Doug, is some sort of top-level "the last time I was synchronized with all of the downstream producers is this time," to see if the API is actually stale and whether you can trust the epoch. I'm not sure that's what I'm looking for. To be honest, all I was looking for was, I think, the same thing you were, which was a number or some value that we can do a simple compare against to see which one's bigger or newer. That's it. Okay. Right. Yeah. So the fact that I overloaded updated, I thought it was an interesting trick, but as Clemens pointed out, it might be a little too tricky and could lead to confusion. So I'm okay with dropping updated. And my mind is now in the phase of, okay, do we want it to be just a number, like just this version, or do we want it to be more like a timestamp, like epoch? But I think it's the same value; I think the value serves the same purpose in both cases. Yeah. I mean, something like a rev might work too. Yes. Rev the discovery version. Either one. Yeah. That's just, you know, pick the right word. Right. So. Correct. Clemens, epoch is a timestamp, right? Yes. Okay. So do people have a preference, whether we go with some sort of numerical counter? To be correct, it can be anything that has a greater, equal, less; from a string comparison or numeric.
It doesn't even matter. I mean, we just need to have a clear rule, like it needs to be bigger than. Interesting. So I pasted the links to the AMQP event streaming spec that we currently have in our committee, and we use this for negotiating partition ownership on event stream engines. And there it's an integer that just increments. So, okay. What do people think? Do you want a timestamp, or do you want just a monotonically increasing integer? We can debate the name later. I think maybe we just set the rule loose and say this must be ever increasing, and if you choose to use an epoch value or you choose to use an increment, that's fine; it adheres to the spec. If we did that though, would we at least need to scope it down to integer versus string, or leave it loose? I feel like there'd be some people that are going to want to nail it down. I think it needs to be an integer. It has to be a fast-compare thing. Okay. So anybody disagree with it being an integer? Let me figure out whether it's a time integer versus a numeric integer. It could indeed be a Unix epoch, and so you have both. Both. You mean the number of seconds since 1970. Yeah. Yeah. Okay. Yeah. Right. Right. Turns out it uses the same term. Right. Okay. Not entirely bad. Okay. So, okay. I think the proposal then in front of us is, not necessarily the name yet, but it'll be an integer, and we're not going to require it go up by one each time; just as long as the new version is greater than the old version. Is that right so far? Yeah. So it doesn't have to go up by one. My thinking is that it's similar to the resource version in Kubernetes, and that's atomic across all resources in etcd. So it goes up by one, but it's a global counter, and every resource might jump up by hundreds based on how many resources are inside the cluster. Right. Yep. Okay. Any objection to heading that direction?
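Under the rule just proposed (an ever-increasing integer, with no requirement that it go up by exactly one), "newer" reduces to a plain numeric comparison. A tiny illustrative sketch, with `epoch` used as the candidate field name, which the group has not settled on:

```python
def newest(records):
    """Given several copies of the same record (e.g. collected from
    different aggregators), keep the one with the largest counter.
    The field name "epoch" is the name under discussion, not final."""
    return max(records, key=lambda r: r["epoch"])
```

This is why the gap between successive values is irrelevant: only the ordering matters, exactly as with the Kubernetes resourceVersion analogy, where a resource's version may jump by hundreds between writes.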
This isn't a formal vote. I just want to make sure no one can think of some major objection to heading this direction and making it an integer, and that it just goes up each time; it's up to you to decide how much. Okay. And what about the name? Do people like epoch? Do people want some variation of version or revision or something like that? Any preference? I'm pretty keen on epoch. What was that, Clemens? I said, wow, I'm excited. Okay. So if it's an integer, then should it really be an epoch, which is a time? But the Unix epoch is an integer. Yeah, but I think Mark's question is, if you call it epoch, you assume that you can turn it back into a time based on some known start time. Right. I'm not sure that's true. Like, for us in the AMQP example that I just posted, that's not the case. But did they do it wrong? I wrote it, so it has to be right. Okay. Okay. So you just admitted your biases. Yeah, but some other people in some other forum already got to object to it. Yeah. It can always be converted to a time; we just don't know the period and we don't know the start time. We updated it back in 1971. I think the comment that, you know, you can arbitrarily increment it as you will doesn't give you any boundaries there. No, it doesn't. But it could also not be by the wall clock. Yes. Yes. This should be a long. Okay. I will do some double checking on whether we're allowed to do this Clemens hack of making it just any random number if we call it epoch. But aside from that, assuming Clemens isn't lying to us, anybody have any objection to epoch? Thank you. It's okay. How about serial? Anybody want to comment on serial? I don't know what serial means, though. Just from DNS; I think they have the ever-increasing serial. It's an arbitrary number that should go up. What do people think? Come on, someone speak up. Scott. Yeah. I'm thinking about it.
I don't have any strong objections to serial, but I also wouldn't think of it as this version, version-in-history kind of thing. Yeah. It makes me think you're collecting something. I mean, this has a very particular function. Yeah. We just want to sort all records that match the ID and pick the top one. Yeah. I'm leaning towards epoch just because it makes us sound so smart. It's just a fancy word. The other ones just have other possible meanings. And that is, to reiterate what I said: since we are binding particular semantics to this, any term which you look at and think, ah, it's obvious what that is, like serial, depending on where you come from you will have some idea of what you think that is, and that's not necessarily true. I don't think that's a problem with epoch. Okay. So I think this is the current proposal, at least that's what I'm leaning towards. And since it's my PR, I can do whatever I want, and then you guys get to vote it up or down. Anish, your hand's up. Just a quick question. So, whatever the type of this value is, who increments it? Is it the client who updates the API, or is it the server who handles the update or create operation? So I do talk in here about two different cases. One is the generic case of someone just doing an update, and there the server side would be doing the increment of this value. The exception to that is if you're doing some sort of import type of operation. In that particular case, the client who's doing the importing would be passing in the value, and the server would just pick it up and not change it. I do talk about those two different cases in there someplace. Okay. That means we do need the concept of a kind of locking mechanism on this value.
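The "sort all records that match the ID and pick the top one" operation mentioned above reduces to a single max over the epoch integers. A tiny illustration follows; the record field names are hypothetical, only the ever-increasing epoch attribute is from the discussion.

```python
def latest(records):
    """Given records that all share the same ID, return the one with
    the highest 'epoch' value, i.e. the newest under the
    ever-increasing rule. Field names are illustrative only."""
    return max(records, key=lambda r: r["epoch"])


records = [
    {"id": "svc-a", "epoch": 1593500000, "url": "https://old.example"},
    {"id": "svc-a", "epoch": 1593586400, "url": "https://new.example"},
]
# latest(records) is the second record, since 1593586400 > 1593500000
```

Note this works identically whether the epochs are Unix timestamps or plain counters, which is exactly why the group can leave the increment scheme loose.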
Potentially, but I think that's an implementation detail. I think client is a funny choice of word there. It's more of an exporter. The client would be consuming the discovery API to understand what producers are producing, and there's some other worker thing that's helping shuffle some source of truth into a discovery endpoint for aggregation. Yeah. I'll try to go through and see if I can use a different word besides client, because you're right, it could be misleading. So let me just remind myself. Okay. All right. Well, thank you everybody for the conversation and the thoughts. I will update the PR, but please look at the general text too. I don't think the spirit of what I wrote needs changes based upon today's conversation, so please look over the rest of it. I'll try to update it at some point today to match what we just talked about. All right. Clemens, Jim said he wasn't going to make it today, but he said you might have an update on what's been going on with the protobuf stuff. Yeah, I do, but I need to make that super short because I need to rush out. So yeah, we had a meeting yesterday, if I recall right. Days have no meaning. And this was largely rehashing the discussion we had around: do we need a bag or don't we need the bag? And do we need namespacing or not? In the discussion, I largely went back to the history of the discussions we had, because some folks in that conversation have not had the context on why we landed on the flat structure rather than having the extensions bag, and why we then ended up having flat values rather than allowing bags in attributes, all related to the various mapping problems we had with headers, et cetera.
So I explained the historical context on this, and ultimately the result was that folks were okay with the PR as it stands right now. The objections were basically the same arguments we all went through before: what is the collision risk with extensions? Would there ever be a case where an extension is promoted into the main spec? And I pointed to the extensions that we already have in our repo. If it turns out that everybody is using, say, the sequence number everywhere, then it's possible that it might end up being promoted into the main spec. And if so, then we don't want to break everybody's code: you can continue to use the existing extension, and you don't have to go and rewrite your code to use the newly christened official embedded extension. So we keep compatibility. So, in fact, all the arguments that we went through before, we went through again, because they are somewhat difficult to fit into the proto structure. We also went through the special dual nature of the JSON data field: if the outer event is JSON, and the data inside of it is also JSON, then it can really be a JSON object, versus the other common cases where there's a binary payload, and that's clear. I also explained how we did the trick with data_base64, how we effectively use the _base64 suffix as an indicator that this is binary. And that duality is really reflected in the protobuf proposal by having a binary field, and then having a string field that takes the same role, effectively a single JSON string that can also be used for a text-based content type.
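The data / data_base64 duality described above can be sketched roughly as follows. This is a simplified illustration of the JSON-format rule, not actual SDK code: JSON-friendly payloads go in `data`, raw bytes are base64-encoded into `data_base64`.

```python
import base64


def encode_payload(event: dict, payload) -> dict:
    """Place the payload in 'data' when it is JSON-friendly, or in
    'data_base64' when it is raw bytes (simplified illustration of
    the CloudEvents JSON-format rule)."""
    out = dict(event)
    if isinstance(payload, (bytes, bytearray)):
        out["data_base64"] = base64.b64encode(bytes(payload)).decode("ascii")
    else:
        # May itself be a JSON object, array, string, or number.
        out["data"] = payload
    return out


def decode_payload(event: dict):
    """Recover the payload: the presence of 'data_base64' signals
    that the original payload was binary."""
    if "data_base64" in event:
        return base64.b64decode(event["data_base64"])
    return event.get("data")
```

The presence of the `data_base64` member is itself the binary indicator, which is the "trick" being described: no separate flag attribute is needed.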
So they have this duality of binary and string, and then they also allow, similar to JSON, a bunch of key-value pairs. So that really parallels, from that perspective, what the JSON spec does. So that's a parallel. So ultimately, I think what we went through was some unease from the folks who raised the objections, but they now better understand where we were coming from and why those decisions were made. And we ended up in a place where the proposal as it stands has no further objections. So you actually think they're going to say the PR is ready to go as is? That was my understanding coming out of the discussion, yes. Cool. Okay. Any questions for Clemens? Okay. I'd be interested in hearing whether Evan agrees with that, because I know he didn't make the call. Okay. All right. In that case, you're free to go. Thank you. I appreciate it. Thank you very much. See you next week. All right. With that, we're at the end of the agenda. Are there any other topics people would like to bring up? Okay. In that case, before we jump over to the interop call: Uncle, are you there? Yes. Yes. I'm here. No, I think they're gone. Okay. Did I miss anybody for the attendee list? Okay. In that case, if you're not interested in the discovery and interop part of the call, you are free to drop, and thank you all for joining. We'll talk next week. We'll just give everybody a minute to bail if they want to. This actually may be a really quick call. I know Scott took off, probably went over to the Knative steering committee call. And I don't know if anybody's done any work on this one yet. Just 30 seconds or so. All right. Yeah. Let's ask the high-order question here.
Has anybody actually had any time to do any real coding on this, and wants to share any information? I feel bad too, because I've been meaning to do it. It seems to me that until people actually sit down and start to implement this stuff, we probably don't have a whole lot to talk about. But, to be honest, that is one of the reasons I put together my PR about the REST APIs: I figured that was a precursor to us being able to code this thing up, to get agreement on what the return codes are going to be and stuff like that. So I kind of indirectly worked on this a little. But if no one has anything specific to talk about, we don't necessarily need to hang on the call. Are there any topics people want to bring up, or is it just a matter of us all finding the time to start working on this? Yeah, I need to talk about something. Should we start discussing the scope for the subscription API? Because that's something of particular interest to us, and I would like to at least start in that area. I'm trying to think. For the subscription API, in order for you guys to head down the path of doing something in that space, do you plan on using the discovery spec first? Or are you just interested in what the subscribe looks like? I mean, we would rather fit it in eventually as part of the ecosystem, but we would definitely like to have some scope on the subscription manager beforehand. Okay. Honestly, I don't think anybody's taken the first step of writing a proposal. If you're interested, you could put together a proposal for what the subscription API looks like. Actually, hold on a second, let me check something here. Because what's interesting about that is whatever the subscription API folks had for REST in here, hold on a sec. It's also an OpenAPI spec.
But is the OpenAPI spec sufficient to describe how to do a subscribe? I think this is a bit more descriptive than that. I mean, we can definitely go with OpenAPI. It's the same question as why we chose the OpenAPI spec for the discovery aspect; I believe that justification holds true for the subscription spec as well. I don't know, that's just an opinion. Well, do you want to take a first pass at just writing up what an HTTP subscribe would look like? Mm-hmm. Sure. Yeah, I think that'd be a great first step, because it seems like, if nothing else, that would be useful input into what's going to go into the subscription API for HTTP, right? Mm-hmm. Yeah. I mean, does anybody else think it'd be premature to be looking at that? Klaus? I mean, it's fine. It's like the discovery API: someone has to try it out. And if you want to take a first pass, I'd go for it. Yeah, okay. I just wanted to think about it from the scope perspective: what are we trying to solve with the subscription specification, and what are the key responsibilities of the subscription manager as a concept? If we have some sort of scope definition on these things, then it would be much easier to create a proposal. Yeah. I'm trying to see whether, to be honest, it's been so long since I read this document, whether we've mentioned that in here. I think that's the question. So from the subscription manager point of view, how are we going to start implementing this? Even if we start implementing this, is there a mechanism so we can dynamically pick these implementations of XYZ subscription managers? For example, if we switch between different messaging systems like Kafka and AMQP, do we create relevant subscription managers for every messaging implementation?
How does it work? I would like to outline these kinds of details before we start implementing. It's just an API definition. I mean, what you do behind the scenes is up to you. I guess we have defined the protocol-specific additional parameters for the different protocols, HTTP, AMQP, and so on. And how does this API actually compare to the subscription API of Knative Eventing? Knative Eventing also has its own implementation of a subscription API. Is it eventually going to conform to this? Where do you see the Knative subscription API fitting? The messaging API group has a subscription API, right? That's very specific to Knative channels, but it's still a subscription API. Yeah, I think it would be closer to the triggers in Knative, the trigger and broker. Knative is only HTTP, whereas here we have all the protocol specifics. It's just a bit confusing because they also have a subscription API that's part of the messaging API group, and broker and trigger are part of the eventing API group, and now we have this generic subscription API definition. That's why I wanted a bit of an overview of the scope of this. Yes, you're right. I mean, the subscription in Knative is just at the messaging level, so it's not even assuming CloudEvents, as far as I know. Yep, exactly. Here, obviously, these are CloudEvents subscriptions. So I think there's also a section describing the possible filters. So far we just have a very basic filter dialect, so I would assume there's some additional work needed there. That's also something I wanted to do more work on, whenever I find the time for it. So far we just have exact-match and prefix and suffix filters on the CloudEvents attributes. Doug, what about you? How do you see this taking shape? I'm still trying to wrap my head around the subscription API in Knative, because, and I'm probably thinking about this wrong, I don't think of it as a subscription API.
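A rough sketch of the basic filter dialects just mentioned, exact match plus prefix and suffix matching on CloudEvents attributes. The function shape here is invented for illustration; only the three dialect names come from the discussion.

```python
def matches(event: dict, dialect: str, attr: str, value: str) -> bool:
    """Evaluate a single filter against a CloudEvents attribute map.
    Supported dialects: exact, prefix, suffix (string comparison on
    one named attribute)."""
    actual = event.get(attr)
    if not isinstance(actual, str):
        return False
    if dialect == "exact":
        return actual == value
    if dialect == "prefix":
        return actual.startswith(value)
    if dialect == "suffix":
        return actual.endswith(value)
    raise ValueError(f"unknown filter dialect: {dialect}")


event = {"type": "com.example.order.created", "source": "/orders"}
# matches(event, "prefix", "type", "com.example.") -> True
# matches(event, "suffix", "type", ".deleted")     -> False
```

The "additional work" mentioned would presumably be richer dialects (boolean combinations, expression languages) layered on top of something like this.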
Yeah, it's basically a subscription to the channel there. It pops up; you also have a subscription term. Exactly. Yeah, but even then, like I said, I'm probably thinking of it wrong, right? To me, it's not, quote, an API. You create a resource in Kubernetes. It's an API. It is. I know. I know. It's an API. Well, what's interesting to me is, yes, Kubernetes has an API, but you're not really doing a subscription per se, you're creating a resource. And I understand I'm probably thinking about it wrong, because semantically, yes, you are creating a subscription. It's just very difficult for me to think of the Kubernetes API as a subscription API. But isn't it the same here? I mean, you also create something that has an ID, a subscription ID. I know. I know. It's a mental block I have. I agree, I'm thinking about this wrong. I just couldn't get past it for some reason. I mean, this is exactly why I would like to start at least an early discussion of what the scope of this should be. Because once we have a clear picture of the scope of the subscription API and the relevant managers, then we can start thinking about concrete implementations. So I think an important distinction in this document is the push-style and pull-style part that's described, I don't know, somewhere here or a bit above, I'm not sure: quite a few protocols like MQTT, AMQP, and so on have some native subscription support already. But that covers pull style; even for AMQP, you could still have push-style subscriptions, where you have to provide the address to push events to when you subscribe. So the subscription manager is always needed if you have an out-of-band subscription API. Let me ask you guys a slightly different question. That's maybe why I'm struggling with thinking of this kind of stuff as a subscription API.
Are you guys assuming that when we actually formalize the API for subscription, it's going to look more like an RPC-ish kind of thing, or more like Kubernetes, where you're creating an object? I mean, that's completely an implementation detail now, right? Because how we implement this API depends on what platform we choose. The subscription needs some kind of lifecycle. Whether it's an RPC-style or a REST-style API, you will still have something like a create and a delete. Yeah, or I guess it's subscribe and unsubscribe. Yeah, but again, it's probably just the way I choose to think of it. Well, never mind. I just need to get over it. It's very difficult for me to look at the REST style as, I can't quite put into words what I'm thinking. It seems weird to me to force the user, meaning the client, to think of it as "I'm creating a resource someplace." I understand technically that's exactly what's happening. Within every implementation it's going to create some kind of resource, whether it's a formal resource or just an entry in a database. It's a resource, and I understand that's what's going to happen. But the mental model from the user's point of view: is it "I'm creating a resource," or is it "I'm asking for a subscription," or "I'm asking for events"? Right. I don't know why, but in my head there's a slightly different way to think about it from the user's perspective. And I don't know whether it makes a difference, or, if it does, whether we've decided which way we're going to have the user think about these things. Okay. I think I get what you're trying to say.
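To make the two mental models being contrasted a bit more tangible, here is a hypothetical Python sketch. All class and method names are invented, and the point is only that the same create and delete lifecycle can be surfaced either as resource manipulation or as subscribe and unsubscribe verbs that hide the server's storage model.

```python
import uuid


class ResourceStyleClient:
    """Kubernetes-flavored: the caller explicitly creates and deletes
    subscription *objects*; the stored state is part of the surface."""

    def __init__(self):
        self._store = {}

    def create(self, subscription: dict) -> dict:
        sub = dict(subscription, id=str(uuid.uuid4()))
        self._store[sub["id"]] = sub
        return sub

    def delete(self, sub_id: str) -> None:
        self._store.pop(sub_id, None)


class RpcStyleClient:
    """RPC-flavored: the caller expresses intent ('send me events');
    how the server stores that intent stays behind the verb."""

    def __init__(self):
        self._backend = ResourceStyleClient()  # one possible backing store

    def subscribe(self, sink: str, filters: list) -> str:
        sub = self._backend.create({"sink": sink, "filters": filters})
        return sub["id"]  # an opaque handle, not a resource document

    def unsubscribe(self, handle: str) -> None:
        self._backend.delete(handle)
```

Note that the RPC facade here is implemented on top of the resource store, which is the point raised later in the call: the wire shape and the server's data model don't have to be the same thing.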
So you basically imply a central subscription broker that the workload can query or call to express its intent to subscribe to a topic, and it gives you the relevant details about how the messaging system will interact with it, rather than creating a resource. So it's mainly like making a request to a broker. Let's not call it any specific messaging broker, just a subscription broker. So the workload says, okay, I'm interested in XYZ events, and it calls the broker using this specification, and the broker deals with orchestrating the messaging system for that workload. Is that what we intend out of this? Yeah. And I think another piece of it is, and this is the one thing that's always really bothered me about Kubernetes: Kubernetes exposes the underlying data model to the user, and I'm not sure I want to do that in this spec. Right. I think we clearly need to define the shape of whatever thing gets sent back and forth between the client and the server. I agree, we obviously need to define that; otherwise you have zero interoperability. However, I don't want to necessarily force all implementations to then store what's sent on the wire in their back-end server. They could choose to if they wanted, but they should also be able to translate it into their own data model. Right. And that's something I don't like. I need to think all this through, and I'm not sure whether it influences our decisions here or not. Because if you think of this as "oh, I'm creating a resource," I think people are going to assume you're defining the data model for the server. Whereas in my mind, RPC is more like: here's what I want; how you do it on the back end, I don't give a crap. Here's my data, I'm just going to send over a chunk of JSON.
And therefore we don't have to fight about the shape of what's on the wire, because at some point, let me put it this way, at some point I think Scott is going to ask for a status and a spec section on these resources. I don't know. But subscribing to something means you create some state and later have to refer to that state, if you query it again or want to delete that subscription. Yes, yes, and everything you said there I agree with, Klaus. But does that mean we actually have to have a spec and a status section, simply because some people are going to want to implement this on Kubernetes? I personally wouldn't want that, unless for some reason spec and status make 100% perfect sense to the end user. To me, as an end user, when I look at a subscription API, I would want to be able to pass in, you know, what I'm showing here on the screen, right? I'm going to pass in some information about the subscription: what is the filter criteria, what is the bucket I want if it's cloud object storage. I'm going to pass in this information, and when I turn around and do a get on something to find out the status of my subscription, personally I don't want to see spec versus status. I don't give a crap, but that's something we can still decide; I mean, we don't have to adhere to those Kubernetes conventions. Yeah. Yeah. I agree, we don't have to. But the thing that worries me is, when we start talking about creating objects, I think a lot of people's minds are going to jump to "oh, we need to decide what the server-side implementation is going to look like." Yeah. And I even started thinking in that direction for a moment. So, for example, if we define, let's say, that the workload says "I want to interact with the Kafka protocol," does it then have to create a client for Kafka or AMQP or any other protocol?
Because in my perception, how I see the end result is that the messaging system should, or there should be a mechanism whereby, the messages are getting dispatched to the client, rather than the client pulling them. I mean, of course, there are two mechanisms in this case. That's up to you to choose with the appropriate protocol. If you choose a messaging protocol, then whether subscribing means pulling depends, but it's usually the way to go. Well, I don't think we're going to resolve this on this call right now, but I think the short answer, Anish, is: yes, if you want to write a proposal, I think everybody would love that, whichever way you choose to go. Cool. I mean, I can just write down some of the things that are in my head, so let's see if we can achieve that. Yeah. To me, forward progress is even just somebody putting out some really weird idea and everybody shooting it down. At least that gets everybody talking, so we know what people want to do. Cool. Yeah. I think I can start on that. That's a good vacation project for me. Okay. Cool. Thank you. Anything else you guys want to talk about relative to the interop? So I think for the interop it would be good if someone also tried to implement the subscription API. I think that was the intent, right? Because we put it on the list last time, I think. So far all the names are behind the discovery part. I thought someone showed up who also said, yeah, I will do the subscription. I don't know if anybody actually volunteered for it. I think it was just sort of implied that it would be included as part of our work. I mean, count me in. I did reach out to Scott, but I think he was pretty swamped last week, so let's see if I can catch hold of him next week. Okay. That'd be good. Okay. Thank you. Anything else? If not, I'm going to jump over to the Knative steering committee call and see how they're doing. Oh, okay. All right. That's going to be a fun call. All right. Okay. I guess that's it then.
Have a good day, everybody. Bye. Bye.