Good to have you here. Hi, how are you? Good, good, good. I think we probably have everyone who's going to attend, with a handful of people from Serverless Inc. So I think we're about half the group here. But we're very interested in this; we're doing a lot of work around this already, so it'll be good to get some clarity around the CloudEvents SDK. Nick, I don't think we've had the pleasure of meeting before. What company are you with? Excuse me, I'm with VMware. I stepped in for Carol; he went on paternity leave. Yep, okay. Fantastic. Okay, well, here's where we left off last time. In these conversations, we put together various initiatives, things we'd like to address within the SDK, various scopes of work. Then we prioritized all of these. And actually I'm gonna go ahead and share my screen so that it's easier to follow along. Let me know when you can see this. Yep, you see it. Okay, great. So we put together a list of initiatives, various things we want the CloudEvents SDK to do, and then we prioritized those by order of importance. And from this list of initiatives, we drafted up a roadmap based on various CloudEvents SDK versions, and we fit these initiatives directly into this roadmap right here. So real quick, I think a lot of the people who are on this call weren't in those previous conversations, and given it's been a little while since we drafted these initiatives, I'm wondering if anyone has any comments or questions about these as they are written right now. Doesn't sound like there are any. Then let's go ahead and look at CloudEvents SDK version 0.1, and here's what we decided were priorities first and foremost. And again, if you have any questions or comments on these, just speak up. So what we're going for is we wanna initially support at least three languages. There are a handful of people who volunteered for this. Mathias, I think, yeah, he's from Red Hat. His team wanted to work on the Java implementation.
Carol and Mark Peek from VMware volunteered to work on the Go implementation. Our company volunteered to work on the JavaScript Node.js implementation. And then a gentleman named William, who hadn't attended these calls but has been working behind the scenes on the CloudEvents SDK, wanted to continue working on the Python version. So we'd like to have at least three versions, and if these people are still on board to contribute some work, hopefully we'll have about four. And then the first thing we want this SDK to do is instantiate CloudEvents easily in code for the current CloudEvents specification. And this would be almost like a class-based system, at least in languages that support that, where you could easily create a new CloudEvent that already has sane defaults in it, and you could simply shape it very easily to put in your custom metadata as well as your event payload. And I have some examples of that below, but first I'm gonna go through all of these briefly. Next up was to design a versioning system which can handle multiple versions of the CloudEvents specification. You know, we have a few different moving parts here that all have different versions, which we have to consider, and those are the SDK and the specification first and foremost. So what we were thinking here is to practice semantic versioning, with the major number for incompatible API changes, the minor number when we add functionality in a backwards compatible manner or add in support for additional versions of the CloudEvents specification. So the goal was, when you actually instantiate the SDK, you specify the CloudEvents specification version right when you first configure it, and the CloudEvents SDK should be able to work with multiple specs. And then lastly, the patch number whenever we make backwards compatible bug fixes. So this is how we were thinking about handling the SDK and specification versioning issues as of right now.
However, there are still some other issues around the transport specifications and how we're going to handle those. And then lastly, support for extensions. You know, we feel that extensions are at the core of building community around CloudEvents. This is a place for vendors, open source communities and end users to plug specific information into the CloudEvents envelope so that they can use it more easily, and we think that experience should be baked into the SDK. There was a bit of confusion as to what the fate of extensions was going to be, but now that we know what that's going to look like, we also want to consider first-class support for extensions within the SDK from day one. And it's my personal hypothesis that we can actually use extensions in the SDK itself, if there's a concept that would allow inheriting other libraries or importing other code modules into the CloudEvents SDK as extensions. This could be a solution to supporting different transport specifications, and perhaps to handling some of our versioning issues there. So just putting that out there for thought, but this is what we scoped out for version 0.1 of the SDK. Does anyone have any questions, comments or thoughts on these? Yeah, so are you assuming, at least for the major version numbers, that the SDK itself will sort of have a single version number for all the various languages? So for example, what if, say, the Go version makes an API-incompatible change for some reason, but the other ones do not? How do you see that being handled? I believe it sounds reasonable to assume that each one of these things should perhaps be versioned separately. And given the CloudEvents specification version itself isn't really related to this versioning, since you're actually specifying it when you first configure the SDK, I think we should be okay in allowing these things to be versioned separately. Okay, yeah, that was my assumption as well.
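The idea of pinning a spec version at configuration time and importing extensions as code modules, as floated above, could be sketched like this; all of the names here are hypothetical, not an agreed design:

```javascript
// Hypothetical sketch: the spec version is chosen when the SDK is
// configured, and extensions are loaded as pluggable modules.
// Class and method names are assumptions for illustration.
class CloudEventsSDK {
  constructor({ specVersion }) {
    this.specVersion = specVersion;  // pinned at configuration time
    this.extensions = [];            // extension modules plugged into the SDK
  }
  use(extension) {
    this.extensions.push(extension); // load an extension module
    return this;                     // allow chained registration
  }
}

const sdk = new CloudEventsSDK({ specVersion: '0.1' })
  .use({ name: 'my-vendor-extension' });
```

The SDK's own semantic version stays independent of `specVersion`, matching the decision above to version the SDKs and the specification separately.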
So just to be clear, each language-specific SDK has its own versioning, and the version number is not necessarily tied to the CloudEvents spec version number; they're both versioned independently as well. Sound right? That sounds right to me. I'm concerned, with all these moving parts, that if we interconnect the versioning somehow or intertwine them, it could create a bit of a mess, or like a blocking situation. Yep, I agree. Okay, cool. Just wanted to double check, thank you. So maybe we call this out in the actual document itself, somewhere where we can record these types of decisions that we've made and specifically call out why. Documenting the why, always a great suggestion. Thanks, Brian. Sure. So I'll put that in here: at this time, we've decided the SDKs should be versioned separately from the specification and separately from each other. I'm gonna refine that later. But okay, great. Any other comments on the scope for CloudEvents SDK version 0.1? Pretty quiet group. Just to go over version 0.2 real quick, so we know where we're heading next: the first thing is to be able to assemble CloudEvents that are coming from various transports and encodings. And we're not sure how we're gonna do this yet. This is why we kind of punted on this a bit to 0.2, and we're kind of waiting for some of these specifications to mature a bit. And then after that was to be able to instantiate CloudEvents easily via event schemas. So being able to have an SDK method to easily create a new CloudEvent by just dropping in a JSON schema or something. Anyway, that's coming up next, but that's a little ways out. I think these are good first steps for us, and just getting this done alone will probably enable CloudEvents to be used a lot more easily by end users. So that said, we have a few people working on implementations. It doesn't sound like any of those people are here on the call, unless... is anyone on the call planning to actually volunteer or contribute?
So this is Nick. I've been working on the Go version of the SDK. But I think for me, the only thing that's a thorn in my side at the moment is the fact that the default server HTTP implementations in Go mangle the headers, right? HTTP headers are supposed to be case insensitive, so Go essentially assumes that and canonicalizes the headers, as they call it, which means there's a particular issue with extension headers. For anything that's unknown to the spec, I can't properly case the properties within the JSON or whatever. So if I get, for instance, an HTTP binary-encoded CloudEvent, for all of the known properties in the headers I can make the determination by just lowercasing everything and comparing against that. But if there are any extensions, there's no way for me to know how the sender had cased those. So if I were to change this to the JSON encoding, I wouldn't be able to properly case those for whoever's going to consume it down the line. That's the biggest problem that I'm currently dealing with: I don't know how to deal with extensions and their property names. Are you trying to map them to properties within the struct that the JSON is being parsed into, or are you just trying to place them into a map? So at the moment, I'm just placing everything into a map and assuming that all of the keys are just lowercase anyway. It's when I want to then take that and produce a new CloudEvent to send on the wire: if it's supposed to be a JSON one, and whoever wrote the extension property in the first place had intended it to be camel case, before I even got to that point, I've lost that information. So I know that there is an issue in the spec repo that talks about this, and I think it got lost in the discussion of extensions in general, but it would be good to perhaps bring that one up with the larger group.
Yeah, I'll have to play around with that, because I'm pretty sure I've dealt with this in some of my playing around with this stuff. I can't remember how I did it offhand; I'll have to take a look at it. I think the bigger issue in general is that headers are supposed to be case insensitive. So a proxy could get in the middle and change all of the headers to lowercase, or any number of things could happen with those, because in the HTTP spec they're not defined as case sensitive, which makes it a little bit hard to assume that our keys for these extensions will keep their casing. It makes it just a little bit difficult to actually implement. Yeah, it sounds like we probably can't influence that much in the SDK group; that's an issue we have to file against the HTTP specification, right, Doug? There is an issue and I can... Okay, link it. I mean, something we could do in the SDK, though, given that these are case insensitive, is to auto-format before sending over the HTTP protocol. So we could provide whatever the guidance is, or implement setting everything to lowercase, et cetera. Yep, great suggestion. Let me write a quick note about that. Just curious, Nick, what is your SDK doing right now exactly? So at the moment it's reading into a struct, and then using a separate map for any property that's not known, right? So if it's not eventType or eventTime, it's throwing those into a separate map. So in the unmarshal phase it knows to search the event for the known types, put those into known struct fields, and then add anything else it doesn't know about into the extensions map, and then it just has a generic get/set, like a map directly on the CloudEvent, that allows you to get to those extra extension properties.
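A minimal sketch of the parsing approach Nick describes, and of exactly where the casing information is lost, assuming a binary HTTP encoding with illustrative `ce-` header names (the real header prefix and known-attribute list are assumptions here):

```javascript
// Sketch of the header-casing problem: known attributes can be matched
// case-insensitively, but the producer's original casing of extension
// attributes is unrecoverable once a transport canonicalizes headers.
const KNOWN = ['ce-eventtype', 'ce-eventtime', 'ce-eventid', 'ce-source'];

function parseBinaryHeaders(headers) {
  const event = {};
  const extensions = {};
  for (const [name, value] of Object.entries(headers)) {
    const key = name.toLowerCase(); // transports may canonicalize; compare lowercased
    if (KNOWN.includes(key)) {
      event[key.slice(3)] = value;  // known attributes: casing is defined by the spec
    } else if (key.startsWith('ce-')) {
      // Extension attributes: the producer's intended casing (e.g. camelCase)
      // has already been lost by the time we get here.
      extensions[key.slice(3)] = value;
    }
  }
  return { event, extensions };
}

const { event, extensions } = parseBinaryHeaders({
  'Ce-Eventtype': 'com.example.ping',
  'Ce-Myextension': 'originally camelCased?', // original key casing unrecoverable
});
```

Re-serializing `extensions` to the JSON encoding can only guess at the property casing the producer intended, which is the loss Nick is pointing at.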
So just a question from my end: are we imagining that the SDK is not only used for generating events and sending them over the HTTP protocol, but also for doing the unmarshaling from whatever the native HTTP implementation is back to a CloudEvent again? That's what I was kind of thinking, and I wanted to learn about Nick's use case, to see what he was doing with the SDK and whether or not that should be a priority. It kind of seems like it is a priority, in my opinion. Yeah, I would agree. I think it's the first time it's potentially been brought up as a use case, which certainly makes a lot of sense. I think most of the time we've been focused on the client side of actually submitting the event over the HTTP protocol. Well, we have this here as our second initiative, assembling CloudEvents from various transports. So this is sort of covered, and it's prioritized in version 0.2. Perhaps we could bump it up to version 0.1. It's my general opinion that the people who are working on these things voluntarily can kind of move at their own pace, and if they can get further along and support some of these later-stage features, I think that's totally fine. I think perhaps the best thing we can do here, when we do these SDK check-ins, is just to kind of align on the design overall and make sure we're moving in a uniform direction. Sure, that makes sense. Austin, a related question: does the scope also include converting provider-specific events into CloudEvents, to be published to the event gateway? Yes, that's something that's very, that's very interesting, and I know that we've discussed it. Yeah, here we go. So this was actually pushed out to our fourth biggest priority, and that's transforming existing events into CloudEvents via transformation mappings. And yeah, I think there's a lot of interest in this. And again, just an opinion and gut feeling.
But if we made it really easy to convert stuff like S3 events or Stripe webhooks or GitHub webhooks into CloudEvents, it would greatly help this ecosystem. But again, I think that anyone who's working on these can move at their own pace, just as long as we have some agreement, or some general acknowledgement, as to how this stuff is actually gonna work within the SDKs at the end of the day, so the experience when you switch languages is not totally different. I'm just curious: is number four meant to be a core implementation, or does four really fit into five? I think a lot of stuff could fit into five, in my opinion. Perhaps maybe some of these things too. I mean, I guess the way I kind of imagine it is, at least whatever is called out in the core spec for protocol purposes, like HTTP, AMQP, et cetera, it would make sense to have a core implementation for, and the SDK would support those out of the box. But when we start getting into things like converting a specific provider's events, that really starts to make sense to me to live as an extension instead. Yeah, the reason why I'm suggesting that this could be addressed as an extension is just because these things have their own specifications, and they're not hardcoded into the CloudEvents specification itself. There's a CloudEvents HTTP specification, and it has its own versioning. And I was thinking that perhaps we could support these via extensions, just as a way to get around the versioning issue. However, it's gonna be difficult, because I think these transport specifications are bound to specific CloudEvents specification versions, so perhaps that won't work. Right, exactly. And there's also the side of, I think you want to at least provide a fair amount of usefulness out of the box for the SDK without having to do too much configuration. But I mean, there's also the side of bloat too.
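A transformation mapping of the kind discussed above, here for a hypothetical S3-style notification (the input shape, field names, and the resulting CloudEvent attributes are all assumptions for illustration), could be as simple as:

```javascript
// Hypothetical transformation mapping: convert a provider-specific event
// (an S3-like notification here) into a CloudEvents 0.1-style envelope.
function transformS3Record(record) {
  return {
    cloudEventsVersion: '0.1',
    eventType: 'aws.s3.' + record.eventName,  // derive type from the provider event
    source: record.s3.bucket.name,            // derive source from the bucket
    eventTime: record.eventTime,
    contentType: 'application/json',
    data: record,                             // carry the original payload through
  };
}

const ce = transformS3Record({
  eventName: 'ObjectCreated:Put',
  eventTime: '2018-05-01T12:00:00Z',
  s3: { bucket: { name: 'my-bucket' } },
});
```

Whether mappings like this ship in the core SDK or as extensions is exactly the open question in the discussion above.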
Like, do you want an SDK that has every one of these things implemented, and then that could result in bloat? There are pros and cons in both directions. Doug, do you have any thoughts on that, with specific regard to versioning? Hey Doug, if you're there, you're on mute. Maybe he's not there. Nick, just curious, did you think about the versioning concerns at all when you started building out some of this, just for the HTTP spec? So there is one thing that I've been considering, and that's the case where this SDK is going to be used in kind of a middleware capacity. Let's say with version 0.2 of the SDK we're compatible with 1.0 of the actual CloudEvents spec, but we're also supposed to at least be able to pass through the 0.1 stuff. The question for me then is, in doing that, what version do we say the new event is, you know, as a pass-through? What version of the event is that? I would assume it'd be whatever was the one that came in. I think we would just have to have very hard rules: when the spec has breaking changes, we essentially have to say, well, this is too high of a version, I can't do a pass-through with it. So that's the only major thing that I've thought about in terms of versioning: when we're passing through stuff that is a higher version than what we know about, what are the limits on that? And it has backwards compatibility concerns as well, right? Right. And we're probably gonna get a lot more older events than events that are newer than the SDK, although I'm not sure; it's highly dependent on the use case. Right, and on whoever's updating the SDK. Yeah, that's a tricky one. I mean, I think we talked a little bit about this before too.
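Nick's pass-through rule could be sketched as follows; the maximum known spec version, the version format, and the comparison logic are illustrative assumptions rather than anything the group has agreed on:

```javascript
// Sketch of the pass-through rule discussed above: events at or below the
// SDK's maximum known spec version are forwarded with their incoming
// version preserved; anything newer is rejected as too high to interpret.
const MAX_KNOWN = '1.0'; // assumed highest spec version this SDK understands

function compareVersions(a, b) {
  // Assumes simple "major.minor" version strings.
  const [a1, a2] = a.split('.').map(Number);
  const [b1, b2] = b.split('.').map(Number);
  return a1 - b1 || a2 - b2;
}

function passThrough(event) {
  if (compareVersions(event.cloudEventsVersion, MAX_KNOWN) > 0) {
    throw new Error('event spec version too high to pass through');
  }
  return { ...event }; // forward unchanged, keeping the incoming version
}
```

This matches the "hard rules" idea: older events flow through labeled as what they came in as, and breaking-change versions above the SDK's ceiling are refused.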
I mean, you can obviously specify the version of the protocol that you're trying to adhere to inside of the SDK, but I think in reality, you may be trying to deliver to multiple different sources, and you may want to be able to kind of toggle on demand. I almost feel that the version of the protocol you're trying to adhere to should really be able to be retrieved from the protocol on its own, and then the SDK can make some decisions about how it's meant to deliver that event. Okay. All right, it sounds like we have some major versioning concerns that we should probably propose a few solutions for, or at least clarify the problem to the working group, so they can advise, because we do have a lot of moving parts here. It might be worth generating a set of use cases from the SDK's perspective, such as delivering to an HTTP endpoint where it does not know the protocol version, or delivering to multiple different versions of the protocol. Yeah, kind of walking through what we see the combinations of use cases are, and then we could use that to open up an issue in the actual CloudEvents spec, we hope. In the situation where the CloudEvents SDK is publishing to some type of destination that doesn't support the protocol, do you think that would be a common use case? No, no, it's not that it doesn't support the protocol; it's that it doesn't know which version of the protocol it adheres to. So imagine, let's say I wrote a library that people can use, and you can configure the HTTP endpoint where that CloudEvent is going to be delivered, right? The library itself doesn't necessarily know which version to use out of the box, right? It would be based upon the endpoint that actually gets configured. Got it, okay. Okay. Having some use cases is a great suggestion, and it'll help us clarify a lot of the things that we need to be concerned with. Yeah.
Brian, do you have any kind of simple solutions off the top of your head, for ways we could move forward on this right now with just a basic MVP design? I'd have to think about it a little bit. I'm happy to draft up some use cases in this doc that we could potentially review in the next call or whatever, and then I can spend a little bit of time thinking on what some basic solutions might be. Mainly, though, I think this is definitely an issue that's not just an SDK issue; this is potentially a protocol issue as well, and it may be worth surfacing that to the larger group. Yeah, that's what these issues sound like mostly. And it's good that we're running into these, because everybody needs to be aware of them. Okay, Brian, if you could draft out a couple of those, just put them here in the next steps section somewhere. Sure thing. That'd be great. Awesome, thank you. And then as far as what we can actually tackle today, it sounds like we could still do some of this, like basic instantiation, defining getters and setters. What else can we do? Mocking up CloudEvents for testing. We kind of threw this out there; we're not sure all of what that entails. Although I will say, from the Serverless Framework's perspective, being able to easily mock events and design tests around them would be super valuable. How about the extensions design? Perhaps we should chat about this a bit, because this could be something that we can make progress on right away too. The way we have been talking about it, at least the way we talked about it a while ago at Serverless Inc., is that these extensions are almost like middleware for the CloudEvents SDK: when you assemble a CloudEvent, or at least when you're publishing one, there should be extra code modules that you could require in and load within the SDK to consistently add metadata, as well as do a handful of other things.
So an extension in the SDK may or may not add metadata, but it could also add functionality to the SDK. And we're interested in this because we have a middleware product, the event gateway, and we'd like to build an extension that allows people to easily publish events via the CloudEvents SDK by simply adding in our extension. And if they do that, we'll probably also add some metadata to the CloudEvents envelope that works with the SDK and helps generally dispatch and route the event. So that's our use case. Anyway, does anyone have any other thoughts on extension design and what this could look like? I think you can hear me now. Hey Doug, yep. Hey, sorry about that earlier. Yeah, I kind of envisioned this as being sort of like the person who invokes or sets up the SDK kind of sets up a list of handlers for inbound and outbound processing of the event. And then, as you said, a handler can do whatever it wants to with the event; whether it actually modifies it as necessary or does additional processing doesn't really matter. It's up to the handler definer to define that. But the SDK just has the basic infrastructure to allow plugging in a list of handlers, both inbound and outbound. Mm-hmm. That's generally how I thought about it as well. Brian, did you have any additional thoughts on this? I mean, I agree. I really compare this to middleware, similar to middleware that goes into something like Express, where in reality each handler gets an opportunity to manipulate the event and then passes it to the next middleware in the line. I think that's probably going to be one of the simplest implementations that we can come up with, at least for the JavaScript world; how that looks in the other SDKs, I think, would be a different question. Yes. Nick, do you have any thoughts on what the extension experience should look like for the Go implementation? Excuse me.
So what you're saying sounds very reasonable: being able to wrap the actual send and receive in a number of middleware layers. I think the one thing that we would want to make sure we do is have a way of defining which of those layers gets priority, right? So if we're going to have one middleware tweak the payload in some way, and another tweak it in a perhaps overlapping way, there should be some way for a user to have a little bit of control over which of those happens first. But I think otherwise that sounds like a very reasonable approach. Okay. In my experience architecting extensible systems, I've never found a silver-bullet solution for how to load all the modules that offer that extensibility in a way that they don't conflict. I mean, usually the solution that I've seen is that the user can specify the order in which each is loaded, but as far as whether or not they'll all work together well and not create conflicts, it's always a bit of a challenge. Does anyone have any other thoughts on how we could approach that problem? Okay. And Brian, you were suggesting that the extensions could actually add in transformation support. Is that correct? Yeah, that's what I was thinking: whether or not we decide to make transformations a core piece versus making them an extension is also a question, but one thing that could happen, obviously, is that extensions could have the ability to plug in transformations and those kinds of things. So yeah, we could play around with what looks best there as we go through the SDK implementation. Interesting. Okay, great. One question regarding versioning: the extensions are also going to have to be able to deal with versioning and understand which version of the event they are dealing with.
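The Express-style middleware chain with user-controlled ordering that's being discussed might look like this sketch; the `Emitter` class and method names are illustrative assumptions, not a settled API:

```javascript
// Sketch of an Express-style outbound middleware chain. Handlers run in
// registration order, so the user controls priority simply by ordering
// their .use() calls, as suggested in the discussion.
class Emitter {
  constructor() { this.middleware = []; }
  use(fn) { this.middleware.push(fn); return this; }
  send(event) {
    // Each handler may manipulate the event before passing it along.
    return this.middleware.reduce((ev, fn) => fn(ev), event);
  }
}

const emitter = new Emitter()
  .use(ev => ({ ...ev, eventTime: ev.eventTime || 'now' })) // runs first
  .use(ev => ({ ...ev, stamped: true }));                   // runs second

const out = emitter.send({ eventType: 'com.example.ping' });
```

An inbound chain would mirror this on the receive path; conflict between overlapping handlers is still the user's responsibility, per the discussion above.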
I mean, I'm guessing that that information will obviously be on the event itself, but I don't know if there's anything specific that we would need to consider in our extension design for easing that concern. Yeah, good point. Definitely another moving part here. It seems like a possible solution would be allowing the extensions, of course, to inspect the version on the CloudEvent itself, and also expecting extensions to be designed so that they can handle multiple versions, because at the end of the day, it sounds like dealing with multiple versions of CloudEvents is the inevitable outcome of a lot of this. Yep, totally agree. The one thing that I would say is that it's very likely you might end up dragging in an extension that does not know how to handle the version of the event that is flowing through it. And so what we might want to do is allow extensions to indicate which versions of the CloudEvents spec they handle, so that the SDK can call out, right during initialization, hey, this extension is incompatible with this version. Yep, okay. That's a good suggestion. We're gonna surface all these versioning issues on Thursday. So Doug, if you could put us on the agenda to bring these up and get some feedback from everybody, that'd be helpful. Yep. Okay. All right, we covered most of this, and we surfaced, I think, our biggest issue, which is just handling versioning. That's the biggest source of uncertainty right now, and it's definitely gonna be a blocking issue until we figure that out. And hopefully once we do, we'll all implement our SDKs in accordance with what we decide. That should make things a lot easier, because at the end of the day, there are a lot of different moving parts, every single one has versions, and if the SDKs go out and handle this separately, then I think it's only gonna create a bigger mess, at least in my opinion. Okay. So Nick, you are working on the Go SDK right now.
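The suggestion that extensions declare which spec versions they support, so the SDK can flag incompatibilities at initialization, could be sketched as follows (the property names and return shape are illustrative assumptions):

```javascript
// Sketch: each extension declares the spec versions it can handle, and the
// SDK checks compatibility once at initialization, returning the names of
// any extensions that cannot handle the configured spec version.
function checkCompatibility(sdkSpecVersion, extensions) {
  return extensions
    .filter(ext => !ext.supportedSpecVersions.includes(sdkSpecVersion))
    .map(ext => ext.name); // names to warn about during init
}

const incompatible = checkCompatibility('0.1', [
  { name: 'tracing', supportedSpecVersions: ['0.1', '0.2'] },
  { name: 'legacy', supportedSpecVersions: ['0.0'] },
]);
```

This surfaces the "I'm incompatible with this version" warning up front, rather than failing on the first event that flows through a mismatched extension.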
And would you be comfortable with putting that in the CloudEvents organization on GitHub somewhere? Absolutely. Yeah, that is a priority for us here, to get that out of... currently I have a PR against Carol's initial commit in our Dispatch framework GitHub repo. But yeah, I think we would absolutely want to move that into the CloudEvents SDK repo. Okay, okay, great. And the Googlers have been pretty quiet on this call. Are any of you working on anything with respect to the CloudEvents SDK? Hello? Rachel? Can you hear me? Hey, Rachel, I can hear you now. Yeah, we're not working on it. Okay. Okay, good to know. All right, I'm gonna circle back to Mathias from Red Hat, who's doing the Java implementation, and then also track down William, who's doing the Python implementation. But I think now's a good time to start putting these into the GitHub org somewhere, so everyone knows where to look. Hopefully this will increase engagement from everybody else. Okay, so we reviewed priorities and versions and checked in on ongoing work. On our end, our company wants to focus on the JavaScript version of this. We're also very open and eager to put this into the CloudEvents org on GitHub. So just so everyone knows, we discussed getting our repos onto GitHub, we discussed some of these versioning issues, which we have to raise within the context of the call on Wednesday, and we kind of discussed how extensions will work within the SDKs. As for next steps, I propose that we all go out and do an implementation based on the information that we have and the things that we've discussed here. And at the same time, on a parallel path, we try to figure out how to handle the versions at the end of the day; hopefully we can make some progress on that on Thursday. And then we schedule another call just to check in on progress, see if any other issues come up, and figure out how to move forward from there. That sounds good to me. Okay.
Hey Doug, is there anything else we should consider from your perspective on the SDK side? Can't think of anything offhand; I think you've got everything covered. I think getting the repo in place is probably the first step, so we have a place to check in code. Okay, great. I'm taking down some notes about that, and I will propose a few ideas for that on Thursday, as well as raise these versioning issues. With that, I think we're good to go here. Does anyone have any other last comments or questions? Going once, going twice. Okay. All right, well, thanks for joining everyone. We'll sync up again on Thursday and figure out how to resolve some of these issues. Bye everyone. Thanks very much. Yep, thank you. Take care.