Hey, Rachel. Hey, Rachel, are you there? Yes, I am here, sorry. Hey, Nathaga, I'm just curious, when I share my screen like this, what do you actually see on your side? Do you see just the window, or is there lots of white space around it? I can never tell how people view it. I see. So it's not as big as a normal full screen. It's about 60% of the screen. There is white space, and I also see the participants window. The which window? The participants window from Zoom. Oh, is that me, is that why? I was going to say, that's weird that you see that one, OK. That might be, yeah, that was me. OK, so is it OK the way it's being displayed right now, or do I need to change something? I think it's fine. I also like it, sometimes when there are comments, I like to be able to see the comments, so this one's good. Yeah, OK, OK, I just wanted to make sure, because a lot of times when I'm watching other people share their screen, especially if they have 4K monitors, I end up seeing a very small little window of the actual content, and then there's a whole bunch of white space around it, and I can't figure out when that happens, yeah. That makes sense. OK, cool, thank you. Hi, Joe. Hi, everyone. Hey, good morning. Hey, good morning. Good morning. Hey, Mark. Hey, Mark, how's it going? Awesome. Nice and dreary if you're in Boston. Are you actually getting snow, or is it just rain? It's just rain. OK, let's see. David, are you there? Yes, I am, thank you. All right, cool. Chris, are you there? Chris Porters? Yes, I am. Cool, whoops, misspelled your name a little bit. Jim Curtis. Hello. Hello, Louie. Louie, are you there? Yeah, I'm here. Hello. Thank you, hello. Thank you to everybody so far. Clemens, you made it off your flight. Yes, I literally just came into the house, and I'm now ready. Excellent. Yes. Yeah, the drive from the airport was more adventurous than the flight, per se. I guess that's good. I'd rather be more excited in a car than in a plane.
Yeah, I live kind of in one of the more densely populated areas in Germany, and this was right into the rush hour, so. OK. William, are you there? Yep, I'm here. Excellent. And there was someone else out there. There it is, Ryan, are you there? Yes, I'm here. Excellent, thank you. Steve, are you there? I'm here. Hello, thank you. Hello. Yes, I have a cold. Shh, quiet, quiet. I'm on the phone, quiet. Then there are many times. Unfortunately, the dog doesn't tend to understand nearly as well as the kids, so. Dan Barker. Yep, I'm here. Excellent, thank you. I'm joined. I know someone's joined, because I see the list moving. Barbara, are you there? Barbara? What about Thomas? Sorry. Barbara, OK. Thomas, are you there? I'm here. Excellent. Alex, are you there? Alex? Yep, I'm here, thanks. OK, cool, thank you. And there was someone else who popped up on the list. Who was it? Vion. Yep, I'm here. Got it. OK, I think I have everybody so far. Give people another couple of minutes. We may have a smaller crowd than normal today, depending on whether people are at the CF summit or not. So hi, Klaus. Klaus, are you there? Yeah, I'm here. Excellent, thank you. Hey, Rob, are you there? So Rob, if you can hear me when your microphone does get connected, just speak up a little. We've lost Rob. OK, let's give another 30 seconds or so, then we'll get started. Hey, Doug, this is Rob. Hey, hey, Rob. Glad you could make it. OK, is there anybody on the agenda that I missed? I think I have everybody. So, why don't we go ahead and get started? Let's go back over here to this page. All right, just a reminder, KubeCon coming up. Nothing much there to say other than what we say every week. We've got two sessions there. Let's talk about the planning for the interop event around, I'm sorry, planning for the KubeCon event, especially around the interop demo. Mark, you want to bring people up to speed on what happened on Monday's call? Sure thing.
We had another subteam breakoff to talk about the interop. There is, I'd say, a smaller team than the previous time. And we mainly focused on what would be possible for the KubeCon timeframe, and focused more around possibly what Austin's proposal is around. He just submitted 166, which discusses some of the items that he's going to have in his talk, along with a proposed topic for the demo. So is Austin on? Of course, I don't think he can make it today. Oh, OK. OK. I'd say that we're still grappling with what is achievable by KubeCon, what would make sense and allow for people to understand the interoperability concepts. I believe that Clemens had an action item to be able to provide a function to hook up to Event Grid. Yeah, that's true, a function. Yeah, that's true. I just have had no time, I was traveling today. So I will have that tomorrow. OK. And in which case, anything that we can do to have producers of cloud events, I think that we have plenty of consumers with Serverless.com's Event Gateway, Dispatch, the work with Huawei, et cetera, that we can consume them. But it's really producing, and ensuring that we know what the interoperability is between them, that we can actually decode, et cetera. So I don't know that I can talk to Austin's proposal, the demo proposal for e-commerce. We didn't talk about this in the call. To the point that Doug made in the current meeting minutes, he would like for us to meet again 7 AM on Monday. And perhaps we can get enough people to be able to talk through this proposal. Right. So at this point in time, I'd like not to actually discuss Austin's proposal, because I think most people haven't had a chance to read it, or his issue. I haven't had a chance to really read it yet. But are people OK with scheduling another KubeCon prep event for Monday 7 AM Pacific? OK, I'm not hearing any objections. We'll go with that. I did add this topic, though, to the agenda later on.
If we do want to, if we have time, and if people want to do a deeper dive into it. But I figured people want to talk about PRs more right now in this call, if everybody's OK with the agenda. OK, are there any questions for Mark relative to what we discussed on Monday's call, his summary, or Austin's issue, just quickly? I will say that we did have meeting minutes that's in the same doc, if anyone cares for more details around what we met on. Yep, thank you. All right, not hearing any. We'll keep moving forward then. Thank you, Mark, very much. So let's get into the PRs. Now this one, while technically we didn't tag it as 0.1, it is more of a syntactical thing, cleanup thing. If we can resolve it quickly, that's great. If it's going to lead to a bigger discussion, I'm going to defer it. But this one, I noticed that we had a TBD in the spec itself, that's right here. I thought we could remove that now, because I think this is already being covered by, in particular, Clemens's PR around how we're going to serialize events for HTTP and JSON. And he talks about what to do with the extensions there, in particular, adding the CE-X prefix. So I figured we don't need this TBD anymore. In serialization, we'll figure out how to do that. So I just propose to remove that bit of text. Are there any questions on that? Merge it. OK, any objections? All right, not hearing any. Cool, thank you guys very much. Now let's get to some more fun stuff. OK, yes. Clemens, you're next, hold on a minute. Zoom is, it's not behaving for me. Here we go, hold on a minute. There we go, HTTP transport is the first one. So there was plenty of time, I hope, for people to find this and look at it, because it's going to be very difficult to explain that whole thing in all details on this call. And it should also not be the goal. So what this does is, effectively, creates an HTTP transport binding. I just want to explain the architecture behind this and thinking behind this.
These are two documents. This is the HTTP transport binding. What this does is it takes a cloud event and binds it to HTTP. It's not specific about whether it's a POST, a PUT, or a GET. And it doesn't talk about status codes at all. All it does is a binding of a cloud event to the HTTP message. With that is also a JSON mapping that represents our cloud event as a JSON object. Based on, I think, your own input on the matter, I made two different mappings. One maps the event into one single JSON object. The other one maps the metadata into the HTTP headers and then keeps the event payload independent. So what we're looking at here is the JSON mapping. So that's the self-contained JSON cloud event. And here, this also illustrates how the content type functions, because that was an objection, I think, from Thomas. It's like, we don't need the content type in here. This actually shows why we need the content type, because the content type here actually declares what the content type of the data is. So I understand what it allows. I disagree that it's useful. OK. So we have a lot of cases where we need to carry an event that is raised by some existing application that encodes its event data in some existing format, because that's something that's being distributed to a target system that understands that. So we need to be able to express that in some way. So having an XML payload is legitimate, right? Absolutely. And I didn't mean, I shouldn't have interrupted. Let me give you the chance to actually explain the PR as a whole, and then we can banter back and forth afterwards. All right. So this is the goal of these two documents. The one is the JSON mapping. The other one is just a projection into the HTTP message. Then there is a subsequent PR that then defines the Webhook protocol. So this lays the groundwork for it and the Webhook protocol. And I don't think we're going to go and get to that. Well, there it is, yes.
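The self-contained ("structured") JSON mapping being discussed could be sketched roughly like this. The member names below are illustrative, not quoted from the PR; the point is that the content type member describes the data member, so a consumer knows the payload is XML even though the envelope itself is JSON:

```python
import json

# Hypothetical self-contained CloudEvent rendered as a single JSON object.
# "contentType" declares what the "data" member holds; here the payload is
# XML produced by some existing application, carried inside a JSON envelope.
event = {
    "eventType": "com.example.order.created",   # illustrative attribute names
    "source": "https://example.com/orders",
    "eventID": "A001",
    "contentType": "application/xml",
    "data": "<order><id>A001</id></order>",
}

wire = json.dumps(event)            # the whole event travels as one JSON body
received = json.loads(wire)

# The receiver inspects contentType to know how to interpret the data member.
assert received["contentType"] == "application/xml"
```

In the header-based ("binary") variant, the same metadata members would instead travel as HTTP headers while the XML body stays untouched.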
That is actually now specific to how the event is being delivered. So it defines you have a POST, and how you deliver notifications. It talks about authorization. It talks about an abuse protection feature that is an amalgamation of several abuse protection features that exist today in such platforms. And this thing gets really specific about POST and about various status codes that are being returned, et cetera. So those two things compose. The reason why I'm keeping those two separate is that the Webhook specification, I believe, will be very useful to the web community overall, even if they're not using our cloud events format. Because Webhook today is Wild West. And we have a chance here, because we have a lot of companies in this forum who have weight, I think we can go and have a canonical definition of what Webhook is. And that's the goal of this. This Webhook specification is something that gets born here. I would love for us, as many as we can, to go and take that actually to IETF and make that a proper RFC. And the nice thing about this design is that it composes nicely, obviously, with the HTTP mapping. So the Webhook spec and our HTTP transport mapping spec could be one spec, theoretically. I just want to keep them separate, because I believe that the Webhook spec, per se, is universally useful. So that's why I've designed it that way. The JSON document is separate because there will be the need for other type system mappings. For that reason, I have already filed, and we're going to talk about that in several weeks, I think, the AMQP transport mapping. The AMQP transport mapping requires, if we want to go and put Cloud Events metadata into AMQP properties, we need to have a type system mapping from Cloud Events to the AMQP type system. And so I basically filed this to show that there's a JSON event format and there's an AMQP event format, and to basically show how those compose.
So back to the current PR that we're talking about, this is really the groundwork for all of this. The JSON event format is required for everybody to implement. And the HTTP message format is effectively foundational for being able to send an HTTP request in whatever form you want. So if you want to go and map an HTTP request, but what you choose to do is implement a system that delivers events by soliciting them using a GET, that's fine. Specifically, you're compliant with that spec if the way you deliver events is having someone pick them up using a GET, because this defines a mapping that works for a response, and you can also go and deliver events with a PUT if you want to. So I don't want to go and constrain any of that. But then once you layer the Webhook spec on top of that, that's exactly when you snap to a common interop model. So I want to make it as flexible as possible for people who want to, you know, do extraordinary things with HTTP using cloud events. And then I want to have one spec, which is very specific, for how we push events across platforms. So that's the rationale for this whole thing. OK, so now might be good to dive into specific questions that people had about, in particular, the HTTP transport and JSON mapping, right? Yeah. And there have been some comments, I think. OK, which one would you like to start with first? Actually, let me ask a question. High-level first. Can we first focus on the ones that people consider to be showstoppers? In other words, if you've made a comment in there, but it's just a minor thing and it's not necessarily a blocker for getting to 0.1, and we can perhaps fix it in a follow-up PR, let's try to skip those right now and focus just on the ones that people consider to be blockers for merging this PR and getting to 0.1. So which one would you like to focus on first? I have a procedural comment before we get to this. OK, go for it.
So the procedural comment is, all of that work has been around for, so this has been around for what, 14 days. The subsequent specifications have also been out for a week. I find it deeply problematic when we get comments the day or the night before this call. I find this a little disrespectful, the first-ever voice of commentary happening shortly before the call. We have a deadline that we agreed to for substantial feedback, because it needs to be addressed. That's Tuesday, end of day. And so I would appreciate if people would stick to that. So that's the preface, and now we can get into the details. OK, so let's focus on the specific questions or concerns. Would you like to mention something, or Thomas, do you want to jump to yours? How do you guys want to work this? Yeah, people need to, I mean, I've addressed everything that I could that I've seen until I boarded the plane today. So Thomas, let's go to the one that you were talking about earlier. Where was that? I can start with the last controversial one. I wanted to just expand the rules on where we use percent encoding, because none of the examples actually percent encode slashes. What line number should I scroll to? Let me double check that. Oh, here it is, I think. Let's go with US-ASCII. I think it's at line 201. Yeah. So I just wanted to expand the rule so that non-printable ASCII characters, non-ASCII characters, and percent signs themselves must be percent encoded first. That's the only way to make sure that this thing is an actual reversible encoding. Yeah, I need to do a better job here and point to 3986, probably, because I actually don't want to define the percent encoding here. I think percent encoding per se in 3986 is actually specifying that rule. Because what I want to avoid is us requiring a special implementation. So I want to point to prior art, and the prior art should do the work.
I want to leverage, you should be able to call the function that's called percent encoding that's actually implementing the standard, and then you should be able to use that. So I'm just not pointing to it, right? And I think, so I was quoting effectively that rule more or less from the URI spec. And so I need to do a bit of a better job in pointing to it. And that's something that I already noted. So Thomas, are you okay if you just tweak this to do more of a pointer to that spec rather than trying to repeat it? Sure. My requirement would be that a spec compliant decoder must be able to correctly handle some of the edge cases I gave. Because unfortunately, I end up having to deal with strings that have some magically different slashes and %2Fs. And I need to make sure that they actually will decode correctly into the original string. I'm going to take care to point to the right place in the specification that refers to the implementations that you will be using. Because none of us is writing new percent encoder code. We're all using stuff that's in the frameworks. I just need to go and find and point to the right place that all the frameworks point to. So let me just fix that. Okay. Okay. There's two more, I think there's two more comments also from Doug that I haven't addressed yet that I also need to still work in. Or no, that was the other spec. Sorry. Go ahead. No, here. Yes, here. The next one is from Doug. Go ahead. Yeah, go ahead. And this goes back to the fact that I fully accept that I have been speaking past people a number of times with URI constraints requirements. My goal would be that there is some way to know a context or interpretation of the URI. When I misspoke last week about URIs allowing no scheme, I was talking about URI references. I was thrown off by these examples. I feel like slash-leading URI references, without an authority, without a scheme, are okay.
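Thomas's reversibility requirement can be sketched with a stock percent encoder, which is Clemens's point about leaning on existing RFC 3986 implementations rather than writing new ones. Assuming Python's `urllib.parse` here; the key is that slashes and literal percent signs are themselves encoded, so distinct inputs stay distinct on the wire:

```python
from urllib.parse import quote, unquote

def encode_value(s: str) -> str:
    # safe="" means even "/" gets percent-encoded; quote() also encodes
    # "%" itself, which is what makes the encoding reversible (RFC 3986).
    return quote(s, safe="")

# The edge case: "a/b" and "a%2Fb" are different strings and must stay
# distinguishable after encoding, or decoding becomes ambiguous.
assert encode_value("a/b") == "a%2Fb"
assert encode_value("a%2Fb") == "a%252Fb"

# Round-trip: every input decodes back to exactly the original string.
for original in ["a/b", "a%2Fb", "100% sure"]:
    assert unquote(encode_value(original)) == original
```

This is exactly the "call the framework's percent encoding function" approach; the spec only needs to point at RFC 3986 rather than restate the rules.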
That might need to be explicit, and maybe we need to backport that to the original spec and call it a URI reference, not a URI. Yeah, see, most places where people talk about URIs, practically, relative URIs, like specifically in these documents, relative URIs are also permitted. So I felt like I was treated like I was crazy for referencing a relative URI reference. Well, let me ask this. So the spec right now just says URI, and I believe what Clemens has here in the mapping is a correct interpretation of that, or a correct example of that. It sounds like perhaps, Thomas, what you want to do is open up a separate issue to go back to the spec and say, should we constrain it to be smaller than just a URI? Click on the link for URI. It requires a scheme. If you click on the link for URI reference, it's a subtype of that which allows a schemeless version. Oh, I see. He is correct. And the problem with that is that a URI reference assumes that the scheme is relative to the context in which it's contained, like a hyperlink in an HTML document. The problem is, if this event is without context, it has no scheme to be relative to. I can always make one up, right? Yeah, I am personally fine. My use cases are easier off if relative URI references are allowed, because it's how we reference things in Google, but I feel like I've been gaslit a number of times in these meetings, and I want to at least ground in technically correct descriptions. Right, so it seems to me at this point, since this is not about changing the spec at this point in time, so Clemens, it sounds like perhaps you should fix this example to have the full URI. Yeah, I can certainly do that. I will prefix that with a scheme and just make that clearer. That's all it requires, a made-up prefix, absolutely. Yeah, I'm going to make this a URI. That's the critical thing, because if I now take a transport URI then all hell breaks loose.
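The URI versus URI-reference distinction being debated can be made concrete with a standard URL parser (Python's `urllib.parse` here, as an illustration): a full URI carries a scheme, while a scheme-less string is only a URI reference and needs a base context to resolve against, which is exactly the problem for a context-free event.

```python
from urllib.parse import urlsplit, urljoin

# A full URI: has a scheme, meaningful on its own.
full = urlsplit("https://example.com/devices/1")
assert full.scheme == "https"

# A URI reference without a scheme or authority: parses fine,
# but carries no scheme, so it is only meaningful relative to a base.
relative = urlsplit("/devices/1")
assert relative.scheme == "" and relative.netloc == ""

# Resolving the reference against a base context recovers a full URI,
# which is what "I can always make one up" amounts to.
resolved = urljoin("https://example.com/", "/devices/1")
assert resolved == "https://example.com/devices/1"
```

Requiring the spec's examples to carry a scheme sidesteps the ambiguity; allowing references would push the "what base?" question onto consumers.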
I don't want to imply this, because this is the jail that the XML people walked into when they did the namespace thing, when they started using HTTP URIs for everything and then everybody believed that it was a different thing. I am totally on board with a subsequent PR even just saying we will amend the spec to allow URI references, and then simplifying these examples again. That was my ask too. I think URI references, from my perspective, URI references will be fine, and I think having these be full URIs will be more powerful, and you always have that option, because what we value is really the structure of the URI, and having a base specification for what that structure needs to be. And if you don't really care about the host name, and if you don't care about them being transport, then it's not clear that you really need to have them be fully qualified. Would someone like to take the action item to open up a follow-on PR? I can do that. Okay, thank you, Thomas. And to be fair, this is where I kind of wish that we would require that a relative URI actually has an authority. I don't know how much that resonates with people, or whether we wouldn't want that. No, we don't want that. What the CADF spec does, it's all about being able to make sure that the person who created the scheme is able to make it unique and prove it. Either a scheme or an authority would be. Well, so perhaps we could save that for a different discussion and focus more on this PR right now. So what's the next issue in here you want to scroll to?
I think we can finally get to, so I absolutely understand that we can create a demo sample that has mixed content type encoding. I think that the idea of the binary encoding versus the structured encoding is very elegant, and I think that when you use the binary encoding, you're free to use any content type you want. You don't have to invent anything new, use the HTTP content type header. For structured encoding, I would personally prefer we just say, hey, you use the same encoding for your structure and your data, and so if you want an XML event, then you either use the binary encoding or you help push through a way that the entire envelope can be represented in XML. So where should I scroll to, Thomas, if you're looking at the same thing? I don't think this is the right section. So I can give you feedback on that one. The structured encoding has, and I'm actually mentioning that in the documents where I'm describing them, the binary encoding is really there for efficiency, and it's really meant for cases where you care about encoding a binary as a binary, and that's the high order bit, and then you want to go and put the rest into the transport frame. And that's for cases, and we had these IoT use cases mentioned, where you really care about footprint, and you only want to have a single implementation, and you don't want to go and also employ a JSON encoder and the whole business necessarily for everything, and where the goal is really to push some existing arcane, weird event data that comes out of an existing device and just push it over to the other side, but in a standardized way, using some common infrastructure. The reason for the structured encoding is that it's actually routable, so you can go and push that, and this is practiced in our systems. So an event shows up in Event Grid, it flows through Event Grid, it actually gets handed off to an event hub, from an event hub it goes through a complex event processing pipeline, potentially, and then gets archived. So there's
like four different hops, and the nice thing about JSON is that it's completely routable, because everything you have about that event is in there. But in that model, the event payload might quite well be XML, and that's perfectly legit. The real event is XML, and in our cloud events format, the JSON is really just the envelope. And I think there's just legitimate use cases for it, and for us this routing use case is motivating that: the event data, the core event data that's produced by your application, is in a text format or a binary format that you have, and if it's in a binary format that's not representable as text, you put it in base64, and if it's a text format, you put it in as text, and then you route that entire block of JSON, that entire JSON object, through all those various steps. And if you were using this binary format for that, obviously then you need to carry all these transport headers out of band. So the binary format is really just meant for single hop, and the structured format is really meant for multiple-hop routing of that exact same event. Thomas, is that helping?
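The single-hop versus multi-hop contrast being drawn could be sketched as follows. Attribute and header names are illustrative assumptions, not quoted from the PR; the point is that structured mode keeps metadata and payload together in one routable JSON object, while binary mode splits metadata into transport headers:

```python
import base64
import json

# A hypothetical payload from an existing device: binary, not valid text.
payload = bytes([0x00, 0xFF, 0x10, 0x42])

# Structured mode: one self-contained JSON object. Binary data is base64
# encoded so the whole event can hop through Event Grid, an event hub,
# a processing pipeline, and an archive without losing its metadata.
structured = json.dumps({
    "eventType": "com.example.device.reading",   # illustrative names
    "source": "https://example.com/devices/1",
    "contentType": "application/octet-stream",
    "data": base64.b64encode(payload).decode("ascii"),
})

# Binary mode: metadata rides as transport headers, payload stays raw.
# Efficient for a single IoT-style hop, but the headers must be carried
# out of band if the event is forwarded across other transports.
binary_headers = {
    "CE-EventType": "com.example.device.reading",  # illustrative header names
    "CE-Source": "https://example.com/devices/1",
    "Content-Type": "application/octet-stream",
}
binary_body = payload

# A later hop can recover the original bytes from the structured form.
received = json.loads(structured)
assert base64.b64decode(received["data"]) == payload
```

Either rendering carries the same abstract event; only the projection onto the wire differs.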
I don't know. I guess in some sense I recognize that we may go through some untrusted, or not untrusted, but non-compliant proxy. I'm trying to figure out how much I buy the idea that this format, that so nicely separates data and context, that doesn't invent new double encoding formats, why it is necessary, yet not good for everything. I honestly expected that this spec was going to end up with one encoding format that looked a lot like the binary encoding format. Yeah, but there are those two cases. If I would do this kind of just as a product spec, the binary model would probably not exist, because I need to have the data routable. So that's the thing I'm trying to grasp, is why is the data less routable? Because if you have HTTP headers, you have an HTTP header that comes in on one side, and you have a payload, and now you need to go and route that off to, well, some other place that's using AMQP, that's using MQTT. Then all of a sudden, now you're doing transport protocol mapping, transport protocol header mapping, and the reality of that is that you will have a gateway that does your external protocol handling, and now you're kind of dragging transport context through your entire implementation so you can go and spit out something transport compliant with headers on the other side. So that's not very pragmatic. So I'm trying to weigh the relative cost of making sure that HTTP headers are forwarded versus worrying about the future where there's more than one. So let me tell you, this is not a theory, we're actually having that in production, just with a different format, with our own format, and this is exactly what we're doing. Events come in, they get pushed through Event Grid, they land in an event hub, they then land sometimes in Avro containers, get picked up by Hadoop, and then get processed. I mean, this is something that just happens. Those events even go to disk and then get re-hydrated from disk. So I need to have them together in
one place, and the payload inside of them may be XML. I get that you need to have them all in one place. My fear is that we're eventually going to ossify that JSON is the best transport, because the elegance of the HTTP header solution is that regardless of what encoding anyone wants to choose for the overall envelope, they are going to have one way that they know how to access and route based on a particular feature if they do HTTP. But if you do something else, then, if you need to have routing and you go to a different protocol, then you don't have that, then you have an issue, because now you don't have the data together. We suspect that there will be a way of showing this context or metadata in any transport. Yeah, but the point is, in the reality of an implementation, if you don't keep the data together, then you have simply proprietary-framework-over-here context and proprietary-framework-over-there context that you now need to go and map to each other, instead of just putting it all into a single JSON object and then you're happy with it. And it's not precluded that you carry arbitrary data, you just need to base64 encode it. That's the only cost, because content type basically says what the data field contains, and if it's not a text format, then it is binary, and then it's base64 encoded, so you can tell. I don't see how we're going to stop speaking past each other, so I'm not sure what the, all right, so our position is that we believe we need to carry arbitrary payloads as strings and as binary, and to be able to carry them also in a JSON envelope, and that's the one that we have here. So let me jump in. I want to make sure I understand something, because I think something you said there at the end, Clemens, is the thing that I think resonates with me, but I want to make sure I have it right. And that's, if the data, it's talking about the JSON encoding version, if the data itself is encoded in some way, let's say base64 encoded, as the receiver of that I don't know how to
decode that into something else, like XML or something. I just don't know how to decode it, or what to try to decode it into, without the additional content type property sitting right next to it. That is correct, that is absolutely correct. Does that make any sense, Thomas? Or if you don't think the content type property is necessary to do the decoding, how would I as a receiver know how to decode that binary data without knowing what to decode it into? So I'm saying that the binary spec doesn't introduce this problem. Correct, yes. Well, but what it does is it takes the exact content type field. The content type field for the binary spec originates in our cloud events specification. It is the same field that I'm mapping into the JSON, it is the same field that I'm mapping onto an HTTP header in the binary version. It's just mapped differently, and the spec actually says that. So yes, it doesn't introduce that problem, because it's specifically designed so that the content type field out of the abstract info set for the cloud event maps to the real content type field in the binary case, and maps to the content type field inside of the JSON packet in the other case. But in both cases, it describes the body, and the body in one case is contained in the data field in JSON, and in the other case it's in the body section of HTTP. Okay, I disagree, but I don't need to be convinced or right or whatever. We don't require unanimity. Well, okay, let me ask this question then. Is this something that you feel has to be resolved before we get to 0.1? No. Are there other topics or questions about this PR from anybody, not just Thomas, but anybody on the call, who they'd like to discuss because they feel like it's a blocker to accepting this PR for 0.1? Going once. Okay, let me ask the question slightly differently then. Is there any objection to accepting the PR as it is right now, with the assumption, of course, that follow-on PRs can always come later to
tweak things. Sorry, I did actually have one. I was actually not overjoyed with making up the new content type, application/cloudevents plus an encoding, largely because it just breaks off-the-shelf headers, or off-the-shelf web frameworks. So for example, we'll have to come up with a new, like, Google Cloud Functions will break. I suspect something like Lambda would break. They'd all have to learn about our new content type, and that, oh, it's actually JSON. Well, but that's what the extension that I wrote, so the plus-JSON suffix, or the extensibility of media types, is something that's an RFC, and it's pretty widely used already. And we're defining a media type, I mean, that's what we do here, right? So why wouldn't we declare a media type? Because we actually have the exact case here where we're defining a media type and then we have multiple renderings for that media type. So the exact case for which, the RFC that I've been referencing, now I need to go and find the reference at the bottom of the document because I don't have them all in my head, and that is RFC 6839, which basically defines the additional media type structured syntax suffixes, and the plus-JSON. And it's really simple to fish this out, because you can do plus-JSON, and you can teach all web frameworks that I know, you can teach a mapping for what a content type means, because it's commonplace that you have media types that are expressed in JSON. That's not an unusual thing. I'm just trying to express my point that this will break known software, this will break known services. I understand that they have not implemented every spec under the sun, and that they can be improved in a spec compliant way. Let's do this, let's try. So I want to have a registration for our own media type, because it also makes the standard legit, and if we find out that the media type turns out to be a real blocker, and that's
something we'll find out the interrupt testing then we should go and figure out what the rule ought to be and whether we need to go and revert it but I think it should I'm not I would be surprised if it really broke a lot of stuff because that's how media how media types work is that you if you define a format of that sort then you are introducing media type for it so I would like I would like to make this make this kind of if we if our testing proves that we're we're causing pain with that then we should go and revert it I can abstain if we just create a bug to track it and try to set up our rule on what we think is too much breakage before we actually do the experiment I'd just like to be scientific about this it's okay but take you with an open issue to track this yeah cool thank you very much yeah I would just like some other voices about before we actually get the measurements I'd like to predict what would be an acceptable amount of breakage and what wouldn't okay I'm not having a media type it's a little weird okay I'll fill that in later okay anything else Thomas okay anybody else on the call have any other issues they'd like to bring up relative to the PR okay are there any objections to accepting the PR as is again but the assumption that further on PRs can come in and change things in any of our documents not hearing any objections right back over here I have two there's two comments that you made Doug where you were you were asking for a must rule and I'm still gonna add those so if you if you want to merge this at the end of the day it's it's two little things I'm gonna okay so tell you what I don't want to hide things so let's find those two spots make sure everybody's okay with that because otherwise I was gonna ask for a follow on PR so it's fine where were they looking at them in the JSON spec I think okay here's the first one yeah you say it's right here everybody take a look at this sentence we just have the word must become members of JSON 
object. And I think the other one is right above that. Yeah. Any questions on this one before I move on? Okay, and where's the other one? Oh, I know, I have to blow that up. All right, it was similar... oh, maybe that was the only one. I think that's it. Oh, here we go. So it says this contentType attribute should be set to the media type, and I think it needs to be "must be set". So those are the two things that I'm still going to go in and fix. Okay, are there any concerns with those two changes? I think the intent was for them to be required; he just didn't actually use the normative wording. That was the intent for both of them. Okay, not hearing any objections, we'll approve this with those two minor changes. Okay — type systems, the next PR for 0.1. Want to quickly talk to this one? Yeah. So the reason why I even needed the type system is because I broke out the JSON serialization, and also the AMQP serialization, kind of as a proof point, and then realized, as I was writing the JSON specification, that I really had no types to refer to. We had used types, or were using types, in the document, but we hadn't really said what they are. So this is basically just summarizing what they are. It basically just moves some of the inline definitions that we had with the property types up here and consolidates them, and then down below it just actually uses the various types. I don't think there are any other changes in it. The only change is that the object type is newly introduced. We have a bit of a discussion in here: the object type in JavaScript is different, and JavaScript doesn't have an "any" type. In Java, and in C# and F# and in Python, whatever "can be anything" really is, is object, and that's the meaning here — it's really just trying to be a variant. But if I call this "variant", then it becomes weird in more places. So it follows the mainline use of what "object" is in most languages, being a variant type, and that's really what it's meant to be. It's really just an abstract type system where I need one word that stands for either a string, or a map, or a binary. That's really what it's for, so I don't want to be more scientific about it. The same thing with map: that is a map of things, and I think if you only need a map of strings, you can just write that. I don't want to go and introduce a templating mechanism just for that purpose. I want to keep this super simple; I just want some clarity on what I can refer to from the mapping specs. I don't think Sarah could make the call, so Thomas or Rachel, do you want to talk about this one, or try to represent Sarah's comments? Actually, Thomas, you had a comment in there too — do you want to talk to this one? I mean, if I'm going to try to guess for her: I agree that "variant" is something that, as a former OS developer, I get, but it would probably be scary to other people. I do kind of prefer "any". I know it's not the JSON spec name, but it is very, very clear what it means, compared to "object", where most people think, oh, JavaScript object. This is not the hill I'm going to die on, and I've taken up enough time today. Well, okay. So, Clemens, just out of curiosity — you obviously prefer "object", but is "any" something you could live with, or do you stick with "object" right now and deal with this later? We can have a debate about this later; we can go and probably do an edit on it. I don't care as much about names as it seems, so in the end this can be "any". It's just that I think with "object" you pick up C# and JavaScript developers, and a bunch of other people, more easily than if you're introducing something that's kind of artificial and doesn't show up anywhere but in TypeScript.
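The "one word that stands for either a string, a map, or a binary" could be sketched in Python's typing vocabulary roughly like this (purely illustrative; these names are assumptions based on the discussion, not the merged spec's type system):

```python
from typing import Any, Dict, Union

# Rough sketch of the small abstract type system being described:
# the mapping specs only need a handful of primitive names to refer to.
String = str
Binary = bytes
Map = Dict[str, Any]             # "a map of things"; a map of strings
                                 # would just be Dict[str, str]
Object = Union[str, Map, bytes]  # the contested "object"/"any"/"variant":
                                 # one word standing for a string, a map,
                                 # or a binary value

def describe(value: Object) -> str:
    """Illustrate that 'Object' is just a variant over the other types."""
    if isinstance(value, bytes):
        return "binary"
    if isinstance(value, dict):
        return "map"
    return "string"
```

The design point debated on the call is purely about naming this last alias; nothing in the mappings changes whether it is spelled `Object`, `Any`, or `Variant`.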
You're saying keeping it as "object" picks up more people? I think keeping it as "object" picks up more people; that's my feeling. Just wearing my education hat: I would call it "var" if you wanted to pick up the JavaScript people, because as I said, it is actually a name collision with JavaScript — they'll be familiar with the name, but they'll interpret it wrong. I would propose we merge it and then follow up with an issue on it. I'm not wedded to the name; it's just a discussion we can have separately. It's just that we had this conversation last week as well, about "object", and it hasn't changed. No — actually, as you can see, two days ago I made a comment on this, which is where I'm rationalizing what stays as "object". So I agree it hasn't changed; we talked about it last week. Well, okay — that's what I was going to try to get at. The thing I was wondering about is that I'm not hearing a proposal that everybody can live with yet, so we're down to basically accepting this as it is. Isn't it suggested right there — didn't you suggest "any"? We can't merge the JSON mapping without having this. So let me ask this question: is this something that we can live with for right now, opening up another issue to track whether there's a better word than "object"? Because I'm not hearing anybody say they're against switching from "object"; we just couldn't find the right word yet. I thought everyone was against switching from "object"... We just need to find the right word, but the problem is we haven't been able to find it yet, and I'm not sure anybody wants to hold up 0.1 to find that right word. That's my stance, yes. So that's why I'm proposing we make some forward progress: let it all go in, and open up an issue saying we need to revisit the word "object", because I'm not sure that should be a showstopper for 0.1. At least, that's my opinion, but I want to hear what other people think — because it's literally just an abstract thing that creates references between two specs. Nobody's going to write code that uses it; it's literally just a spec construct. So — there's one other thing I want to get to today, but on this one: is that a path people can live with? I'm not saying it's perfect. Is that something you can live with for right now — accept "object" now, and open up another issue, assigned to Clemens, to revisit it? Yeah, I'll be happy to do that. Is there any concern or objection to that? Okay. Are there other issues on this PR that people would like to bring up? Let me ask more formally: is there any objection to accepting this PR as it currently stands, with the assumption that we'll have that AI to revisit "object"? Okay, thank you. Now, Thomas, you wanted to talk about this issue — hopefully we can get into it quickly. I was just curious. I was under the impression that last week we had said that at 0.1 things would start to solidify in the core spec, in which case I was not necessarily comfortable with some of the things that we've not really talked about how they're going to be used — they just kind of got grandfathered into the spec. I just want either clarity that these are not final and are still just as volatile as before, or to say, hey, they haven't actually gone through scrutiny yet, let's cut them until we have the ability to scrutinize them. So my assumption — and please correct me if you have a different opinion — is that tagging this as 0.1 has no meaning relative to the permanency of anything in any of our documents. Everything is still changeable, and there is no guarantee of backwards compatibility. That's correct, in my opinion. This is simply to have something that people can point to as they code up for the interop event that we're hoping to do at KubeCon, so we can all be looking at the same document and not at something that's changing. Yes, that's the exact intent. We need a specification version that we can all point to and use. And all the fields that you're pointing out, Thomas, are optional, so if you don't like them, you don't need to use them. And I can see that schemaURL is going to drop off — I'm not sure how many people are going to use that in the end. We're basically just putting a label on it so that everybody can look at the same thing, and then we keep going. Thomas, does that alleviate any of your concern? Yeah — I mean, if we can just close it by documenting that we all agree there is no implied stability or backwards compatibility, then I think that captures the concern very well. Okay. Now, is that a comment you'd like to see just in your issue, or in the spec itself? The issue is fine. I can take that action item. Okay. In that case, I believe we've resolved all the open issues and pull requests that are tagged 0.1. So let me ask this question: are there other issues or PRs that I forgot, or that people know about, that we should be looking at before we think about tagging this as 0.1? Okay. Is there any objection to tagging the version of the spec, with these two PRs merged, as 0.1? I'll wait a little longer — I don't want people to feel rushed, but keep in mind that we do need people to start coding stuff up for the interop event coming up. Any concerns or objections to tagging the merged PRs as 0.1? Okay, done, thank you very much. Actually, sorry — approved, with the approved PRs. Okay. Now, I did have a question offline about the Monday meeting that we agreed to have at 7am. Rob Dolan was wondering whether we could push it out one hour, to 8am, because there are some folks on his side who may not be able to make it. What do people think — is there any objection to moving it to 8am Pacific instead of 7am Pacific? I'd like to get acknowledgement from Austin as to which time he can attend, because of his PR. Okay, I can take the AI to send him a note. If you can send him a note, then just broadcast on both Slack and email which time was agreed upon. Okay. Let me make it a little more formal, because I do agree that Austin's presence is critical, since he's the one driving at least that scenario, and it's his issue. Is there any objection... go ahead, Rob. I was just going to say, if Austin can't do 8am, then please keep it at 7am — he's definitely more important than I am. Yeah, well, that's what I was going to suggest: we tentatively go for 8am, I'll take the action item to reach out to Austin, and if he cannot make 8am but can make 7am, then we'll switch it; if he's okay with 8am, then we'll stick with 8am. So basically we're letting Austin decide, 7 or 8am. Does that sound fair to everybody? Yes. Okay, so tentatively 8am, pending Austin's availability. Okay. I don't think we have time — we only have three minutes left — to dive into anything deeper, so let me just do one quick thing. Eric Erickson, are you there? And David Lyle? David states in chat that he doesn't have a mic. Good enough — he's alive enough to hear me, that's good enough. Okay, so we have a whole two minutes left. Is there any topic anybody would like to bring up at all that we can cover, or at least start to cover, in two minutes? Can you post the decision on the time for Monday's meeting? It's 8am Monday, pending Austin's availability. Okay, thank you, and I'll try to send out a note about that too. Hold on a minute... okay, I'll do that. Okay, any other topics? You should expect the two changes that I'm going to make to that spec within the next two or three hours; you'll get the notifications and everything. So thank you very much. A small question: assuming that we were going to use URI references anyway, do we mind just assuming that that will be part of 0.1?
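The URI-reference question here concerns RFC 3986, where a "URI-reference" may be relative and is resolved against a base, unlike a full URI. A quick sketch of the practical difference (all example values are invented for illustration):

```python
from urllib.parse import urljoin, urlparse

# A URI-reference (RFC 3986) may be relative; a URI must be absolute.
base = "https://example.com/registry/"            # hypothetical base
ref_relative = "schemas/myevent.json"             # URI-reference, not a URI
ref_absolute = "https://example.com/schemas/myevent.json"

assert urlparse(ref_relative).scheme == ""        # relative: no scheme
resolved = urljoin(base, ref_relative)            # resolve against the base
assert resolved == "https://example.com/registry/schemas/myevent.json"
assert urljoin(base, ref_absolute) == ref_absolute  # absolute refs pass through
```

Allowing URI-references in attribute values, rather than requiring absolute URIs, is what makes short relative values like the one above legal.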
Tell you what — what if you open up a PR right now, Thomas, to make that change, and if people can LGTM it offline, I'll wait until tomorrow to create 0.1, and if I get enough LGTMs we can merge that PR. I have a meeting for the next hour and a half, but I'll do it right after that. Let me ask: is that okay with people? I know it's very, very rushed, and I'll try to send out a note to warn people, but is that okay with everybody? That sounds good. Okay, I'll try to make a note of that here somewhere in the minutes. All right, with that I believe we're done. All right, cool — thank you guys very much. Thank you, everyone. Okay, bye, guys. Thank you. Fantastic.
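For reference, the structured-versus-binary content-type mapping debated at the top of the call — the same contentType attribute travels inside the JSON envelope in structured mode but maps onto the real HTTP Content-Type header in binary mode — can be sketched roughly like this (a sketch only; the attribute and header names here are assumptions from this era's draft, not the final spec):

```python
import json

# A toy event; attribute names are assumptions for illustration.
event = {
    "eventType": "com.example.someevent",  # hypothetical event type
    "contentType": "application/json",     # describes the *data* payload
    "data": {"temperature": 21.3},
}

def render_structured(ev):
    """Structured mode: the whole event is the HTTP body, and the HTTP
    Content-Type names the envelope format, not the data payload."""
    headers = {"Content-Type": "application/cloudevents+json"}
    body = json.dumps(ev).encode("utf-8")
    return headers, body

def render_binary(ev):
    """Binary mode: the data alone is the HTTP body, so the event's
    contentType attribute maps onto the real HTTP Content-Type header."""
    headers = {
        "Content-Type": ev["contentType"],
        "CE-EventType": ev["eventType"],   # header name is an assumption
    }
    body = json.dumps(ev["data"]).encode("utf-8")
    return headers, body
```

In both renderings the same field describes the same body bytes; only where the field is carried differs, which is the point Clemens was making.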