All right, three after the hour. Let's get the fun started. 14 people. All right, let's see... skip that. Okay, anything from the community anybody wants to bring up? All right. SDK stuff: we did not have a phone call last week, so I don't think there's anything to talk about. However, we do have a call scheduled for today. I don't know if we have any topics to bring up, but we do have one scheduled if you guys want to join. Incubator: next week, I believe Tuesday, we'll be doing the presentation for incubator status for the project. We're still looking for more end users, so if you guys want to add anything to the list, please do; otherwise I believe we're ready to go. If you haven't looked at it yet, the proposal is here; if you have any edits, just let me know and I'll try to make them. KubeCon: we have the two sessions, one for CloudEvents and one for serverless, and I have the outline of what we're going to talk about there. Feel free to make any edits you want, and add your name somewhere in there if you want to join in. On the presentation side of things, I don't believe anybody's done so yet, but we still have time, so be thinking about that. Also, Chris was on last week; we didn't really say anything, but he added this to the agenda. It looks like we will have a Serverless Practitioner Summit, the same way we did in Barcelona earlier this year, I believe. It's planned to be a one-day, all-day, co-located event, as you can see, on Monday, November 18. The CFP for that is supposed to go out at some point this week. I don't know for sure when, but that's the rumor.
About this week. I believe they're planning on making it just one long track, so they're not going to have breakouts. So watch for that CFP if you guys want to submit something. All right, any other topics you want to bring up before we jump into PRs? All right, let's get right to it then. Extensions: here's the first one. Let's talk about this one. Okay, this one I thought might be easy. Klaus is not on. Basically, in this one, the wording implied a multi-level extension, and he thought that was misleading, because we don't allow maps or anything like that anymore, so we just basically removed the sentence. It seemed like a very safe change to me. Anybody have any questions or concerns about that? Okay. Any objections to approving? Okay, excellent, thank you. All right, fixing up the JSON mapping is next. Trying to remember... Evan is still not on. All right, let's see what changed since last time. Does anybody remember what changed? I think the last set of changes he made in here related to datacontentencoding being removed; I think that was the biggest stuff. So here we have this section, and I think the rest of it is just minor little nits, like adding Boolean. I think that's it, so the biggest bulk of the change is this stuff right in here. I'll give you guys a second just to reread that in case you haven't. All right, any questions or comments, or does anybody want to say anything about that? Scott, is there anything you want to mention on here, since this was Evan's? Good to me. Okay, anybody else want to comment or question? All right, any objection to approving? Here's my mouse... any objection? Easy peasy. Cool. All right, Avro. Let's see, we'll get some really good stuff in here. All right, so does someone else want to talk to this one, maybe Scott or Clemens, since I know zero about this other than seeing a lot of changes go flying by?
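The JSON mapping change discussed above can be sketched roughly as follows, assuming the data/data_base64 split that replaced datacontentencoding (the member name `data_base64` is from the 1.0 JSON format; the helper function itself is invented for illustration):

```python
import base64
import json

def to_structured_json(attributes, data):
    """Structured-mode JSON serialization sketch: with the
    datacontentencoding attribute removed, binary payloads are
    carried in a dedicated base64-encoded member instead of a
    separate encoding flag."""
    envelope = dict(attributes)
    if isinstance(data, (bytes, bytearray)):
        envelope["data_base64"] = base64.b64encode(bytes(data)).decode("ascii")
    else:
        envelope["data"] = data
    return json.dumps(envelope)

event = to_structured_json(
    {"specversion": "1.0", "id": "a-1",
     "type": "com.example.test", "source": "/example"},
    b"\x00\x01 raw bytes",
)
```

The receiver never has to guess: if `data_base64` is present the payload is binary, otherwise `data` holds native JSON.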
This is doing effectively the same updates, and the reason we had a discussion about this is that Evan went and made data binary-only. This original schema that we have here was effectively just driving the Avro serializer, so it can take an arbitrary nested set of records and serialize them. It wasn't really meant to represent the CloudEvents schema, but really just to drive the serializer. So Evan updated that to include the new types that we have, and then the discussion was whether data should be binary-only, or should still be able to contain structured data. Meanwhile he's made the amendment that data can now, just like with JSON, contain structured information. So with that we're now even. This is good. I mean, I haven't tested it, since this is quasi-code that the Avro serializer needs to work with; I will trust that it has been tested. But from a theoretical perspective this looks right. Okay, that's the biggest thing I wanted to make sure of, because I know you had some concerns about the previous version of this, and I want to make sure those concerns were addressed. Okay, let me just see. I think most of this stuff is minor. Is this an example? No? Okay. Yeah, so here's the actual schema; I think this is a duplicate of what he had in the spec. And just to be clear, this text in here looks right to everybody? Yeah? Okay. Anybody else have any questions, comments, concerns? Oh, Jem, your hand's up. Yeah, just a quick one, maybe for Clemens. In the JSON binding, or transport, you'd extended that to cover explicit Base64 versus data. Do you think we need to follow the same model here? You could embed either in this structure as well. Yeah, so the special problem with JSON is that it cannot natively represent binary, while Avro can. Oh, okay. All right. So that's true for AMQP, where
we have this split as well, similar to JSON, but we have it because of the typical usage pattern in AMQP: typically in AMQP you transport pure binary data in data, and if you have structured information you put that into value. So AMQP kind of distinguishes between the two. For JSON we have that split because JSON has no way of representing binary data at all, so you have to put it into some string encoding, and we're signaling that here. And if you look at the text we have here, it basically says: if it's binary, represent it as binary; otherwise construct a union which effectively uses the native data types. The way this turns out, specifically for Avro, is that whatever you have in your hands as data, whether it's binary or structured, you basically assign to the data property and then serialize it or deal with it. Okay. So can the data be another Avro object, or does it have to be a serialized Avro object? It can be. With the change we have here, you can stick in any object graph that your Avro serializer understands, and it will just be serialized out as a data structure. That's what that recursive schema does. Okay, cool. All right. And I guess, I had some offline chats with Scott last week around proto; this would be a model we could replicate into proto structures as well when we redo protobuf? Yeah, correct. That's what the protobuf schema should do. I don't know how flexible protobuf is for these sorts of recursive quasi-tagged encodings, but that's what the goal ought to be. Cool. I think there's smoke and mirrors we can use there; Scott can put me right, but yeah. Okay, all right, cool. Thank you, Jem, and thank you, Clemens, for answering that.
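The recursive union Clemens describes can be modeled as a small tagged dispatch. This is only an illustrative sketch of the idea, not the actual Avro schema: the branch names here are invented, and a real Avro serializer would drive this from the schema itself.

```python
def encode_data(value):
    """Illustrative model of the recursive `data` union: binary
    stays on a bytes branch; anything structured is walked and
    tagged so a serializer could emit it as records/maps/arrays."""
    if isinstance(value, (bytes, bytearray)):
        return ("bytes", bytes(value))          # binary data passes through
    if isinstance(value, dict):
        return ("map", {k: encode_data(v) for k, v in value.items()})
    if isinstance(value, list):
        return ("array", [encode_data(v) for v in value])
    return ("primitive", value)                 # string, int, bool, null, ...
```

The point of the recursion is exactly what was said above: an arbitrary object graph the serializer understands can be assigned to `data`, and it serializes structurally; raw bytes take the binary branch untouched.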
Any other questions or comments? Okay, any objection to approving that? Cool, easy. All right, next. I wasn't quite sure which one to tackle next, since this one may take a while. Let's first at least bring up Evan's PR. Maybe, Scott, you can talk to this one and explain why he's suggesting that we remove it, at least temporarily. Yeah, we've changed how data works, and I think the current proto only understands extensions as they were, and we've changed a couple of those formats, and nobody on our side has bandwidth to fix up the proto definition. So rather than lock in a v1 version of proto that we would have to support grudgingly for the rest of time, we're opting to just delete this and then add it back when somebody has bandwidth. Okay. Anybody have any questions about that? If we don't have bandwidth to fix it, that's the right way to do it. Yeah. Okay. And since we have a modular structure, and we can effectively add bindings for both transports and encodings practically at any time, I don't think that's too terrible. We can add a protobuf and a CBOR and whatever encoding as a 1.0 version at basically any point in the future. Yep. Okay, anybody else have any... oh, sorry, Mark, your hand's up. And we feel that this will not keep us from getting to 1.0 by not having this? Anybody want to comment on that? I agree with Clemens that we can always add it later as an addendum, you know, "this is a 1.0 spec for protobuf," but I just want to make sure that we're all clear that we can go out without a protobuf format. Right. Does anybody view protobuf as a requirement for 1.0? Okay, not hearing it. Ship it. Okay, so technically... yes, ship it. So technically he opened this up yesterday, so per our rules we can't technically approve it. However, what I'd like to do is suggest this, because while this is a very easy change, it may be controversial to somebody,
but relative to approval, it's a very binary yes-or-no kind of decision. What I'd like to do is suggest that we conditionally approve the concept of removing it. I'll work with Evan to get the PR fixed up, because it's not actually correct the way it is now; for example, we didn't update the TOC and the main README and stuff like that, but obviously those are minor typographical things. So what I'd like to do is conditionally approve it now and give people offline until the end of tomorrow to raise an objection, and if no one objects by the end of tomorrow, that will have been enough of a grace period to say, okay, we can go ahead and make it happen offline. Does anybody have any objection to doing it that way? That way we don't have to wait a whole other week just to remove it. Okay, so: approve conditionally, wait until end of Friday. Okay, any objection to that? All right, cool, thank you guys. Okay, quick question here. This issue was opened up by Alan. I don't necessarily want to discuss it right now; let's see how much time we have later. I just have a question for you guys who have actually looked at this one: is this a 1.0 issue? That's my only question. We removed the datacontentencoding thing, correct, and then he goes down to here and raises some concerns; in particular, his stuff about compression is the one that really got me interested. So I took a guess at an answer. Yeah, we effectively chose to care about the basics first, and we basically punted on this. We didn't really have a good source of reference for, you know, compression and all the extra features, so I think we punted on that in the discussion. It's arguable; you can go and put that into the content type.
Effectively, you can make a parameter on the content type that says: this is JSON, but compressed. So what Alan is really lamenting, I think, is the fact that we took away a place where you can add arbitrary hints about what further encoding has been done on the data we carry, including compression. Yep. But that's something I would punt on, certainly for 1.0, because I don't think it's strictly necessary: most of the transports do have that feature, and we also support binary formats now, like Avro, which inherently support compression. So it's not clear to me that that's a higher-order bit, and it's something we can effectively add as an extension if we wanted to. If it becomes a pressing issue, we can go in, and I think we can add it fairly easily. Okay, so just to make sure I understand: your assertion is that if we decide to add some other property that implied compression, or any other kind of encoding, we could easily support that, because we have a clear path right now for supporting binary data. Yeah, exactly. So it ultimately becomes a question of this: there's binary data that we can clearly support, and now the question is how you describe that binary data, what that binary data is. And I think with the content type we already have a weapon to very clearly declare what's in there, without having to resort to an extra encoding flag. Okay. What do other people think? Does anybody disagree with what Clemens said? Does anybody else think this actually is a 1.0 issue we need to resolve? Nothing? Okay, I'm going to assume silence is consent. So what I'll do is tag this as nice-to-have if we can get it done in 1.0, but it doesn't sound like it's a requirement. Does that sound fair? Yes. Okay. Any other comments or questions on that one? Okay. Again, before we get back to Evan's, because that might be kind of big: this one I just opened yesterday, so we can't necessarily merge it.
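The content-type route Clemens suggests could look something like this. It is only a sketch of the idea: the `encoding=gzip` parameter syntax is an assumption made up for illustration, not anything the spec defines, and gzip stands in for whatever compression scheme a sender declares.

```python
import gzip
import json

def compress_json_data(payload):
    """Sketch: no separate encoding attribute; compression is
    declared on the content type itself (parameter syntax is a
    made-up example)."""
    raw = json.dumps(payload).encode("utf-8")
    return {
        "datacontenttype": "application/json; encoding=gzip",
        "data": gzip.compress(raw),
    }

def decompress_json_data(event):
    """Receiver side: the content type alone says how to decode."""
    if "encoding=gzip" in event["datacontenttype"]:
        return json.loads(gzip.decompress(event["data"]))
    return event["data"]
```

This is the "weapon we already have" argument in code form: the declaration of what the bytes are rides along in `datacontenttype`, so no extra encoding flag is needed.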
I might ask for this one to be merged, or put into the other category and wait until tomorrow, but these are strictly syntactical fixes, so it's not actually major. I just noticed that when we got rid of structures, we had a couple of examples that still used structures, so I just converted those to be plain integers. And per Vlad's suggestion, instead of making it "extension", I made it "othervalue", just to make it clear that you can put other words in there, basically, without dashes or anything like that. The other thing I did, for a completely unrelated reason: I was looking for the text that we had around naming, in particular the valid characters, so I was looking for this section right here, and it was really odd to me that it appeared next to the terminology section. So all I did was move it down into the context attributes section; I put it in there right before the type system. It just seemed like a more appropriate spot, since we're actually describing the names of the context attributes. The other change in here: Evan noticed that we had more than one "Data" header, so the #data anchor was not clear about which one it would resolve to. What I ended up doing was make the definition of the word data be "event data", and, because here's that change right there, "event data", in that section itself I then got rid of the nested data header. It seemed kind of pointless having it there, because all we had was a small little section afterwards, and in terms of what it was actually called, we have it mentioned right here. If someone can come up with a better word than just "data" for the header there, I don't want to put it back in; I just couldn't think of one last night, so I decided to remove it. Anyway, nothing normative in terms of changes; it's strictly syntactical, just moving things around slightly.
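The naming rule whose text is being relocated can be approximated like this. The regex is an approximation for illustration: the authoritative wording (and any length limit) is in the spec section being moved, but the gist is that context attribute names are limited to lower-case ASCII letters and digits, which is also why "othervalue" works as an example name where "other-value" would not.

```python
import re

# Approximation of the attribute-naming rule: lower-case ASCII
# letters and digits only, at least one character.
_ATTR_NAME = re.compile(r"[a-z0-9]+")

def is_valid_attribute_name(name):
    return _ATTR_NAME.fullmatch(name) is not None
```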
Good. Okay. Thank you, Clemens. Any questions, comments, or concerns? Any objections to approving? I'll give people until tomorrow to do a final check before I actually merge it. Okay, thank you. Cool. All right, now, this one. Okay, so to refresh everybody's... actually, Scott, I'm going to force you to talk; I'm tired of doing all the talking. Do you want to refresh everybody's memory about what this issue is about? Uh, booting up, hold on. Okay. So this is talking about how certain hops, especially HTTP, assume that certain headers are called a certain thing, for example tracing, and the various types of tracing that you might want to do for just normal HTTP requests. The way the spec is currently written, if a producer adds an extension, they don't have the ability for downstream converters, like middleware pieces, to turn it back into the header that's required for HTTP if it's been translated into a different transport and then back. So potentially, worst case, you drop them, and best case they become CE-dash-prefixed, which doesn't work. Yes, we have one extension that's supposed to be followed by all receivers of CloudEvents, but what do we do in the case of just some random extension, or the next version of the tracing header? If the tracing extension adds a new header, we would have to change our specification. So we think this is a problem. Okay. Jem, your hand's up. So I think, you know, I believe it was Clemens that touched on this last week. This is unfortunately an edge case, isn't it? Because in a pure CE model, extensions will flow without any problem, because of the way they get prefixed in the transport bindings. It's only distributed tracing where we have a problem, because it's really trying to say, "Oh, well, don't treat this the way you do everything else; treat it as a special case." So I agree this could crop up with other specs, but I'm not quite sure how you can account for
it. I don't think we can account for every flavor of what might happen in the future. If you start with a CloudEvent, you should be okay; stuff will always flow across intermediaries. It's only this sort of, I don't want to use the word ad hoc, but this sort of "by the way, please do this thing differently" that's tripping us up in this situation. Okay, anybody else want to comment? Okay, I'll jump in then; I have a comment. That last bit of what Jem said there is, I think, the key thing for me: it's the fact that we allow extensions to not follow the normal rules that is actually the problem, and I'd like to tackle that problem itself, because if we can make extensions follow the rules like everybody else, then this problem goes away. It's very clear where the CloudEvents properties live, right? They're all either in the JSON body or they're HTTP headers prefixed with CE. So what I'd like to do is tackle this problem this way, and this was actually talked about on an offline phone call that Evan, Scott, Clemens and I had earlier in the week, trying to hash through this: basically say that all extensions must be serialized with the CE prefix, just like any other attribute. From that perspective they're no different whatsoever. However, they can have what I call a secondary serialization. So, for example, in the trace example, you could still use the W3C trace headers however you want; that's fine. However, they now become sort of secondary bits of information, meaning that when the receiver gets the CloudEvent, they're only responsible for, or required to, actually look at the prefixed headers, the CE ones. If they want to look at the other ones, they're free to pick those up and pass them along as sort of extra other attributes. But those are not, quote, CloudEvent attributes; they're just extra bits of metadata that happen to pass
along. And the reason I say that is because technically that other bit of metadata could have been changed somewhere along the way; we have no control over that whatsoever. In particular, in the tracing stuff, as it's been explained to me, the tracing header is actually supposed to get modified, to grow over time to add more tracing IDs, I believe. So in the tracing case you can very well get different values there, and you might actually want the two different bits of information: one being what the original sender meant to include as, quote, CloudEvent data, versus transport-level data that might get munged along the way. So this allows both cases to happen, but in particular, a receiver who has no clue about this extension whatsoever still has a very clear rule to follow: they pick up the CE-prefixed ones, and that's all they need to worry about. Anything else can technically be dropped, from a CloudEvents perspective. Anyway, that's my proposal here. Obviously, since I just pasted this last night, we couldn't approve it, but I'm curious to know what people think about that general direction. And, Jem, your hand's up; is that new? Yeah, so I would be behind that statement. I guess, does this mean that there's guidance for SDK writers, in that they should be providing up to the application not just the CloudEvent itself, but also any transport context? That then gives, I think, Scott a pathway: if he brings in something over HTTP that has W3C tracing attached, but it's not in the CloudEvent, not also tagged with a CloudEvent prefix, you still have some mechanism to present that information out to the application code. I think that'd be really good. And I do think in general SDKs should try to present as much data as they can to the receiving application, even if it's not part of the CloudEvents stuff.
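The "CE prefix plus secondary serialization" proposal just described might be sketched like this on the sending side. Everything here is hypothetical: the function name, the serializer hook, and the example extension values are invented to illustrate the shape of the rule, with `traceparent` standing in for the W3C trace header.

```python
def to_http_headers(extensions, secondary_serializers=None):
    """Sketch of the proposal: every extension attribute gets the
    ce- prefix (the primary, required form); a sender that knows a
    given extension's native form may *additionally* emit it, e.g.
    the bare W3C traceparent header."""
    headers = {}
    secondary_serializers = secondary_serializers or {}
    for name, value in extensions.items():
        headers["ce-" + name] = str(value)       # primary: always present
        if name in secondary_serializers:        # secondary: optional extra
            native_name, fmt = secondary_serializers[name]
            headers[native_name] = fmt(value)
    return headers

headers = to_http_headers(
    {"traceparent": "00-abc-def-01", "myext": "x"},
    {"traceparent": ("traceparent", str)},
)
```

A receiver that knows nothing about any extension only ever reads the `ce-` headers; middleware that understands the native form can still find and update it.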
Obviously there's usually discretion, you know, to pick and choose what data they maybe want to exclude, but in general I do think that'd be good guidance. Yes. That's just me. Scott, your hand's up. Yeah, I already expose a transport context, because it became very clear that you need this data. Yeah. But the issue is that when you transition from HTTP to another transport and then back to HTTP, you need a lot of smarts in that receive adapter to turn it back into the intended request. Yeah, and for the pending C# revision for 1.0, I think it's also fairly easy to take the context from where the CloudEvent came and link that into the CloudEvent object. So you basically have a property called context, and that has either an HTTP request in it or an AMQP message in it, where you can then poke around and get at the transport details. That's probably what it's going to be. Yeah, but Scott, if I understood what you were saying before, you're hoping for a mechanism where some piece of middleware that maybe isn't aware of CloudEvents can still know about this unknown extension that doesn't have a CE prefix, and still know to carry that along in some way, as a pseudo-CloudEvent thing, right? Right. I'm not sure... take the trace idea: if you have HTTP to AMQP and back to HTTP, that second HTTP hop is not going to have that trace header unless you explicitly have custom code to hydrate it back out of the AMQP message. Right, but with this proposal that I have here, I think it would get serialized back out. What would be different, though, is you'd have the original value, as opposed to a potentially modified value from the previous hops. Right, the value comes through, but you'd have to understand how to pop that out, right? Like, CE-dash-trace-header is meaningless. Oh, I see what you mean. Yes, you are correct.
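The SDK guidance being discussed, handing the application both the event and its transport context, could be shaped like this. All names here are hypothetical, not taken from any actual SDK; the point is only the design: the event and the transport-level material travel together, but stay distinct.

```python
from dataclasses import dataclass, field

@dataclass
class ReceivedEvent:
    """Sketch of the SDK idea: the application gets the CloudEvent
    *and* the transport context it arrived with (HTTP headers, an
    AMQP message, ...), so data that is not part of the event, like
    a bare traceparent header, is still reachable."""
    attributes: dict
    data: object
    transport_context: dict = field(default_factory=dict)

evt = ReceivedEvent(
    attributes={"id": "a-1", "type": "com.example.test"},
    data={"ok": True},
    transport_context={"traceparent": "00-abc-def-01"},
)
```

This mirrors the C# `context` property Clemens describes: the CloudEvent attributes stay authoritative, while the transport context is available for anyone who wants to poke around in it.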
So then, yeah, this is kind of a situation where CloudEvents is making a bunch of choices that will be incompatible with all existing systems that use HTTP. Well, I guess there are two ways to look at that. One is: if it's an unknown extension, unless we try to do some really complicated thing and encode the serialization rules into the message itself, saying "if the next hop is HTTP, do this for this header; if it's AMQP, do this for this header," which sounds like a really bad idea, I don't see how you can solve this for headers the receiver doesn't know. But if this middleware wants to act almost like a proxy kind of thing, it seems to me it's going to have to know how to deal with these headers, with this information, irrespective of CloudEvents. I mean, to take this trace header as an example: even if CloudEvents wasn't in the picture, this trace header is either going to get propagated properly or it's going to get dropped, regardless of what we do in our spec. So it sounds like it's not necessarily our problem to solve. But that's the case of HTTP-to-HTTP proxies, and we're talking about AMQP going through some sort of converter to HTTP, and that piece has to be custom. Right, but how would you solve that without CloudEvents in the picture? Well, it would be custom. Right, that's my point: without CloudEvents, you write custom code. So with CloudEvents you still have to write custom code, but my custom code is not going to work with your custom code, and that's the interop problem. But since the tracing problem is the one that's important in this context, that's something that's really up to the trace context specs to solve, and not necessarily for us. And I think one of the places where it gets clearer, and this is something we also discussed briefly towards the end of the call that we had, is this: when you think about event delivery as push, where the message, the event, travels along an HTTP route through proxies
forward, then you clearly have this parallelism between the trace context as it originated in the publisher and how it travels along that route and then shows up at the receiver, the HTTP endpoint. There it's pretty clear. But as soon as you have a push-pull translation, where you push into a queue, or you push into, effectively, a pub-sub intermediary from which you then solicit the message, the contexts become completely different. If you pull the message out, which means you do an HTTP GET or DELETE, or you do an AMQP receive gesture, then that receive gesture is motivated by something else: you start with a different trace context, because your motivation is that you want to have a message. You trace that, and the delivery of that message is then part of that trace context, because that's what motivated it. But the trace context that the message itself contains is the one from when that message was published. Those are completely orthogonal to each other. That's not true in Knative; we use different transports for persistence. So, wait. When I pull a message out of somewhere, no matter where it's stored... I have an event, I store that event in a queue or some sort of event store on disk, and then sometime later I go and pick that event up. The trace context origin for that operation, for that HTTP request, is not that of the event; the trace context for that is me picking up some event, and I don't even know which event that will be. No, no, we want to be able to link the producer to the consumer and all the hops. Yeah, but you can't, because at the point when you are soliciting a request, you don't know which event you're going to get. I can look at the metadata of the event. Yeah, but you also need to have a trace path for before you
even got that event, right? What starts that context is you saying, "I'm going to issue an HTTP GET now," or "I'm going to issue an HTTP DELETE now," or "I'm going to issue an AMQP receive operation." That's where your context starts for that operation. You want to be able to trace that through and see how the solicitation of that event and the delivery went, but that is orthogonal to the trace context in the event itself that the publisher sent you, because you can't know that at that point; you don't know which message you'll get. So, different systems... Scott, the massive question: I feel fairly confident that this proposal addresses... well, let me find the right way to word this... I think it future-proofs us well. I think it makes it clear that for the non-special cases, where someone wants their own serialization, requiring everybody to do CE-dash works, because you know exactly which header it's going to appear as, and there's no place else to look, nothing else to worry about. I think, as Jem said, it's these edge cases where there's a bit of data that wants to live someplace else, because there's an existing spec that we're trying to adhere to. I guess I don't see how we can possibly solve that without doing all those encoding rules I just mentioned, in every single message, for every single transport that may exist out there, which a sender may not even know about. For example, if I'm sending a CloudEvent over HTTP and I don't know or care about the AMQP transport, I think you would require me to put some encoding in there anyway that tells you how to encode this over AMQP, but I can't, because I don't even know about it. Scott? Yes, no, maybe. I think even in AMQP there are cases where you want to be able to filter on a key that doesn't have a CE prefix, because it's existing infrastructure
that knows that things labeled as prod go to the prod queue, or whatever. Right, but I think with this proposal you can still do that, as long as the sender who knows about this attribute knows about the special encoding; they could then serialize it in that secondary form, right? So now you can't use any of the existing middleware that you have that works just with vanilla AMQP. Why is that? So I think the question is whether, if that middleware updates the attribute, that update should be applied to the CloudEvent or not. That's an excellent question. Yes. This proposal is biasing towards: if you have middleware that would update the attributes, it does not update the CloudEvent itself. Correct. Well, actually, to be clear, I don't say that the middleware can't update the attribute; all I say is, if the two are different when they get to the receiving end, here's how you deal with it. But if you have middleware that only knows about the transport and doesn't know about CloudEvents, then any updates that it does will get discarded on receipt. Correct, well, relative to CloudEvents processing, that is definitely true. And the reason I was okay with that, and I was thinking about this last night, is because technically, for any CloudEvent attribute that we have, whether it's source or any other field, that data may have been created based upon other data in the message someplace. For example, let's say source was extracted from the body of the message, or some other HTTP header, or something like that, but they put it as CE-dash so they get the interop that we're looking for. If some piece of middleware modifies that other metadata, there's no requirement on them to modify the CloudEvent attribute; in fact, as you said, they may not even know about the CloudEvent attribute to modify it. So it's technically possible that the receiver will have a source attribute that doesn't match that other bit of metadata it was
originally created from. And that's fine, because that's the way every other CloudEvent attribute behaves, and now we're making extensions work the exact same way. You may say it's a problem, but at least it's a consistent problem across everything. Does that make any sense, Evan? I think that's an accurate statement of the problem. I think there's a second way to solve it, which is to say that any of these encodings are transport-specific concerns, and you use a transport-specific mechanism. So you say, hey, when you're encoding HTTP, you need to know how to undo HTTP, and if you don't link in the AMQP stuff, you don't need to know how AMQP gets mapped to and from CloudEvents. But even in that model, if there were some way in the HTTP message to convert the trace header into a CloudEvent bit of metadata, and the next hop is AMQP, if that middleware doesn't know about the extension, it's not going to serialize the trace header as anything but a CE-dash header. So in 498 I suggested adding a header for HTTP, something like "cloud events mapping," that would tell you what mappings the sender did on attributes; that would be something specific to the HTTP transport. Right. The thing that is bridging between HTTP and AMQP is going to be a CloudEvents bridge, I think; I don't think there's a general-purpose HTTP-to-AMQP bridge. So given that that bridge needs to know about CloudEvents anyway, it can read that header, do the transformation, and then it has a CloudEvent, and then it does a transformation to whatever AMQP's rules are. So... objection: there is a generic mechanism for this. Oh, there is? Okay. So there's a spec that's in flight to become a committee standard, which is HTTP over AMQP, and it actually has that mode; it effectively gives you bridges, allowing you to take HTTP semantics and bridge them over AMQP. I was unaware of that. Cool. However, even if that's the case, I still don't think what you
described there, Evan, works, because what if the W3C spec says that for HTTP the trace header is serialized as it is in our document, but for AMQP it's serialized with a z in front, I don't know, some other serialization? That information is not available to anybody unless that middleware happens to understand the tracing extension, right? No, I'm suggesting that on the first HTTP message there would be a header that said CloudEvents mapped, you know, traceid to whatever the W3C header is. The recipient would look at that CloudEvents-mapped header and say, oh, was there a header named W3C tracing or whatever? Oh, I should translate that into a CloudEvents traceid. So for the first hop, all of those traces line up. Then when it goes to send with AMQP, it either knows about the AMQP tracing extension, at which point it puts it in the right place for AMQP, or if it doesn't, it sends it as a regular CloudEvents header with whatever prefixes there are, and you don't get traces for that path along the way, but you get the tracing information from the first part. Okay, so you're comfortable with losing the W3C-ness on the second hop? Well, if the middleware doesn't know about the W3C extension, I don't see how you would get the tracing there. Got it. I just don't see, if you don't know about that extension, how you would apply it, right. And if the second hop was AMQP, would you expect the middleware to use that special header that you added to serialize back out again into the special form? Not necessarily. Okay, so it's strictly on a read, on a hop-by-hop basis, so that the recipient can understand what the sender put in there. Okay, so it's just for the reading side, not necessarily the writing side. Got it. Yeah, and the writing side has to tell the next person what it did, if it does know about some other special serialization. Yes, if it knows about some other extension
that has a special serialization. And as a matter of fact, for HTTP we would probably always send a header that says datacontenttype got mapped to the Content-Type header. Got it, okay. I'm going to let someone else speak for a minute; anybody else want to comment on this? I understand the problem better now, so thank you. Adam, just one thing: when the tracing stuff was originally added, and maybe this is what Clemens was alluding to, was the intention for this to trace the context of the event, or all the infrastructure that it flows through in between? Because it sounds like if somebody publishes an event and it bounces through lots of infrastructure along the way, I could end up with a received trace context which has loads of stuff in it that I have absolutely no interest in. Is that true? I think it depends on what you're trying to get out of that trace. If you're wondering why it took two minutes to get from point A to point B, what happened in the intervening infrastructure is probably really interesting to you. But I may not own that infrastructure. Yeah, I may not have access to all that context under the covers. If it flows through AWS infrastructure, say, before it leaps into our infrastructure, I may not have access to all the gory details of what's gone on under the covers in AWS. Well, I think you could either send the trace to AWS, if you're seeing an SLO violation, as part of your proof, or AWS could choose to cut all the traces that are in AWS infrastructure, so it looks like you go into AWS, you wormhole, and you come out the other side. Yeah. So if we look at tracing specifically as forming, effectively, graphs, rather than thinking of traces as a straight line, then in fact what trace context does is basically give you an indicator of what was the cause for this activity. And then what we're doing
right now with the trace spec, if we adhere to the change that Doug just proposed, is that we're effectively copying the information. The reason for why we're publishing the event is captured effectively in the trace context, and now we're doing two things. We're putting it into CE attributes, which is an end-to-end flow that gives the consumer visibility into the reason as the publisher had it, without the intermediate infrastructure in play. And then, by propagating it into the W3C context header, we let the HTTP infrastructure flow make that visible, and even if that now gets propagated by W3C context rules without CloudEvents being involved at all, if it then does a transport change going to AMQP, that will get propagated, so you see the entire journey. And as the event shows up in a receiver, it will be receiving it over some transport, so you now have two pieces of information in the CloudEvent itself. You're going to have the end-to-end, application-level view, which is simply: the publisher did this because of that, and here's where I will now, just for my app, continue that trace. And then you have a second view, effectively, which gives you the journey it took, and that is the information I would expect, as Azure, or you would expect, as AWS, if something didn't work; that's the context information you will want the customer to give you to do a root cause analysis, while at the same time the customer's own span of control is really what happens with the publisher and the consumer. So you would not expect, in that scenario, that the transport would take the richer context and then drop it into the CloudEvent? I think you can choose to do that as you receive the events, because, as I said, I think of the trace context as graphs, or trees,
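The two views described above can be sketched as follows. This is a hedged illustration, not spec text: the `traceparent` format follows W3C Trace Context, the `ce-traceparent` attribute name stands in for the CloudEvents distributed-tracing extension, and the middleware is made up. The point is that CloudEvents-unaware transport middleware rewrites only the native header, so the receiver ends up with both the publisher's original context and the transport journey's final context.

```python
# Illustration (assumed names): the CE extension attribute carries the
# publisher's original trace context end to end, while the native W3C
# traceparent header is rewritten hop by hop by plain transport middleware.

publisher_context = "00-aaaa-0001-01"

message = {
    "ce-traceparent": publisher_context,  # end-to-end, application-level view
    "traceparent": publisher_context,     # per-hop, infrastructure view
}

def transport_hop(msg, new_span):
    """CloudEvents-unaware middleware: updates only the native W3C header."""
    version, trace_id, _, flags = msg["traceparent"].split("-")
    msg = dict(msg)
    msg["traceparent"] = "-".join([version, trace_id, new_span, flags])
    return msg

message = transport_hop(transport_hop(message, "0002"), "0003")

# The receiver now sees both views:
# message["ce-traceparent"] -> "00-aaaa-0001-01"  (why the event was published)
# message["traceparent"]    -> "00-aaaa-0003-01"  (the journey it took)
```

At the receiving end, per the discussion, the consumer can choose which view to continue from: adopt the full transport context, or carry on only the end-to-end application trace.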
where you can split the context and cause multiple things to happen based on where you currently are. And what I'm doing is, as I'm sending an event, I'm capturing the way I'm formulating the event and then packaging it; that's action number one. And then I'm publishing it, and I'm also sending it over the transport, which is action two, and I think of those as distinct activities. The send operation gets traced, and then, effectively, my having sent that event and then causing things with it at the application level is a second thing. Okay. So, to finally answer that: at the receiving end you can now choose whether you want to override the original context and take the entire context with you, or whether you only want to do the end-to-end thing, ignoring the transport journey. Okay. I don't think we're going to get this resolved in the next six minutes, so obviously we'll have to have more discussions around this offline in the issue, and probably next week. So let me ask this question, because this doesn't impact all extensions; it's just, as Jem said, those edge-case extensions that have their own special serialization, which hopefully would be minimal, but we have at least one that we know of. Do people think this is a requirement to solve before we approve release candidate one for 1.0, or can we go to release candidate one without this being resolved? Well, and correct me if I'm wrong, is the subtlety here that we need to change the spec for that extension to make it clear that it should be doing both? Yeah, well, I think, assuming we just don't close the issue, and I assume we want to do something with it, any PR is going to require changes not only to the spec but also to that extension spec. Yes, I think that's true. So the question is, I'm not saying do we need it for 1.0; I think we do need it for 1.0. I'm asking whether we need it for 1.0 release candidate one. I guess my question
is: we want a solution that also works for datacontenttype. You're breaking up there a little, Evan, can you say that again? We want a solution that also works for datacontenttype. You mean because the information is duplicated? Well, because datacontenttype is mapped to a special header, at least in HTTP. Yeah, with my proposal it would now get a CE header as well. Yeah, but my question was, is making datacontenttype work the same way in scope? It sounds like the answer is yes. I would think so, because I personally like consistency, but that's just me. I am fine with that. Oh, I guess your point is it's not just a weird extension thing; it's a weird extension thing that's also in our core. Yeah. Okay, so what do people think: is this a requirement for release candidate one or not? I have an opinion, but I want to hear someone else go first. Yes, it is. Okay, anybody disagree with making it a requirement for release candidate one? Okay. Yeah, Evan, to answer your question, I agree it is a little bit odd. Well, not odd, but I was hoping it would be sort of an outlier thing that wouldn't affect the core, but your bringing up content type is a good example of how maybe it isn't just an outlier thing. Okay, so is there any objection then to tagging this as required for release candidate one? Which means, oops, I'm looking at the screen, which means our current plan here is pushed out at least another week; that's the implication of that decision. Any disagreement with that? Okay. In that case, three minutes left. Oh, I closed the issue about considering removing the webhook spec, because I couldn't think of any place else to put it. Clemens and I have not had any chance to talk about it, and Clemens, I think you've done some thinking about this and couldn't think of a good home either. So I think for right now it's okay to close the issue; we can always reopen it later if we really want to. I don't think there's anything preventing us from moving the spec later and
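Evan's datacontenttype point above can be sketched briefly. In the CloudEvents HTTP protocol binding's binary mode, datacontenttype already has a special serialization in the core spec: it maps to the standard Content-Type header rather than to a ce- prefixed one, so it sits in exactly the same position as an extension with a special native serialization. A minimal sketch, with made-up attribute values:

```python
# Why datacontenttype is the in-core example of "special serialization":
# in HTTP binary mode it maps to Content-Type, not ce-datacontenttype
# (per the CloudEvents HTTP protocol binding); everything else gets ce-.

def to_http_headers(attributes):
    headers = {}
    for name, value in attributes.items():
        if name == "datacontenttype":
            headers["Content-Type"] = value   # special case in the core spec
        else:
            headers[f"ce-{name}"] = value
    return headers

headers = to_http_headers({"id": "a1", "datacontenttype": "application/json"})
# headers == {"ce-id": "a1", "Content-Type": "application/json"}
```

Any CloudEvents-unaware middleware that rewrites Content-Type hits the same divergence problem discussed above for the tracing extension, which is why handling it consistently was judged in scope.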
deprecating it. But I wanted to bring it up here and make sure no one had any objection to my closing it. Okay. So, anything else people want to bring up before I go back and do the final attendance? We have two minutes left. Okay, Glad or someone pinged me... who was it? Okay, Glad, I'm here, I got you. Javier, are you there? Sorry, what was that? Oh, I just said thanks. Oh yeah, okay. Javier, are you there? Yes, yes. Okay, yep. And Doug, are you there? Doug M.? Doug? Okay, we're waiting. Okay, anything else people want to bring up on today's call? Okay, in that case, technically this call is over. If you want to stick around for the SDK call, I suspect it'll be very short.