Okay. Thank you for coming and delaying your lunchtime by a little bit; I'm sure the pasta will be better today. We're here to talk about CloudEvents. I'm part of the SIG Serverless community, we're working on CloudEvents, and this is the working group update. So I'm here on behalf of a bunch of people dedicating a lot of time to make CloudEvents and these connecting bits better for the CNCF. I'm a co-founder of Chainguard; as of today, we're eight days old. So there were some schedule changes.

Okay. So we're going to roll back time a little bit to circa 2015. If you think of a standard pub/sub model application, you have some sort of event producer producing events, maybe several of them, and those go into some sort of event queue. You might have a mediator that looks at an event and says, well, actually this is high priority, or this is for production, or this is for servicing the Chainguard t-shirt sales, and it goes to a specific channel. Then you have some sort of event consumer, which is your application, hard-coded to consume those events in that particular shape. So the event mediator might have some complicated logic to inspect the packet and then route it to the correct spot.

The serverless working group wrote a white paper thinking about this: if we have this problem, and we're thinking about what the future of serverless looks like, one of the issues is that we don't really have a way to talk about it; there's no common language for the event itself. There are all these interesting shapes. It would be interesting if we could think about the world from a higher order.

So let's break this problem down. We have some event producer, which is likely third party. You're probably going to check the boxes: I want to get GitHub events, or GitLab events, or other events from my application. You have this custom, necessary-evil event mediator thing that looks at those events and shuffles them to the appropriate consumer, because you don't want the event producer to be tied directly to the event consumers; that couples the producer to every one of them. And what we really want to focus on, as people writing code, is the event consumer code that wants to look at those events. So it would be interesting if we could have enough data on the event to make routing just part of the configuration of the application. But to do that we need some standards, or at least a specification.

So this is where we started: CloudEvents is a protocol-independent event definition, so that we can think about the event occurrence independently of the protocol choice we've made for the application. Essentially, the first pass at CloudEvents is an envelope definition. So what is an envelope? You stuff it full of something, you seal it, and you write extra metadata on the outside, like whether it's fragile or whether it's going to another country. The event mediator takes a look at the envelope and routes it to the appropriate spot.

So why are we thinking about this? Well, I kind of alluded to it already. When we're composing our application, one of the first things you have to decide is which event broker you're actually going to implement against, because that's going to get baked into all the layers of your application, and it is a very expensive choice up front.
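As a toy illustration of that envelope idea (this is not the CloudEvents data model, just the concept; every name here is made up), a mediator can route purely on the outside of the envelope without ever opening the payload:

```go
package main

import "fmt"

// Envelope is an illustrative stand-in for the idea described above:
// routing metadata lives on the outside, and the payload stays opaque.
type Envelope struct {
	Attributes map[string]string // e.g. "type", "source", "priority"
	Data       []byte            // the producer's payload, untouched
}

// route is a toy mediator: it only reads envelope attributes,
// never the payload, to pick a destination channel.
func route(e Envelope) string {
	switch e.Attributes["type"] {
	case "com.example.order.created":
		return "orders-channel"
	case "com.example.tshirt.sold":
		return "tshirt-sales-channel"
	default:
		return "dead-letter"
	}
}

func main() {
	e := Envelope{
		Attributes: map[string]string{"type": "com.example.order.created", "source": "/shop"},
		Data:       []byte(`{"sku":"abc-123"}`),
	}
	fmt.Println(route(e)) // orders-channel
}
```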
So if you go and extend your data payload, you have to figure out how to consume that in every consumer of that event. If you need to redefine the schemas, you have to figure out how to do that and keep track of it, and what often happens is we lose track of what all these serverless consumer functions are doing with the schemas and the events they're trying to consume. So it would be interesting if the event itself could pass along how it should be consumed. But we also want that transport or protocol independence.

So we introduced, like I said, the idea of an envelope. You have the occurrence, you have the event payload; well, we can introduce a common wrapper for it. Now all of the events are unmodified, but they're inside their own little box, and that box becomes the CloudEvent. Everything between the producer and the consumer doesn't really need to care what's inside the payload. All it needs to worry about is the metadata along the way that gets appended or added by the producer, so that the consumer can make smart decisions about what it should do and all of the intermediaries can make routing choices. So now it becomes really standardized.

So we have this picture: an event producer, maybe multiple, maybe a couple of layers of event queues, and then all these event consumers. Well, what happens when we want to make a new event but we forgot to update the routing in the event mediator? We've updated the event consumer on your right side. Everybody's right side. But the event mediator doesn't know about that event and doesn't know how to route it yet, because we didn't update that configuration. If we change this picture to CloudEvents, the event mediator doesn't have to change its code. It might change its configuration, because those pieces become more dynamic: how to shuffle a message around and how to inspect it is no longer something hard-coded into the application.

So before CloudEvents, your code kind of looks like this. There's the business logic that you care about, there's some custom glue code that ties you directly to whatever transport you're using, and then there's the delivery protocol, which is things like AMQP or Pub/Sub or Kafka. And on the other side it goes backwards: there's some library that talks to that broker and pulls the event off, and you've got some glue to pop it out of whatever that format is and into something you can actually understand.

After CloudEvents, you've got the business logic and you've got the new CloudEvents library (you could write your own, but we have some), and then you have the protocol library just like before. The point here is that you're insulated: the business logic on both sides is slightly more independent of the delivery protocol. And so the side effect of CloudEvents is that it enables an ecosystem of off-the-shelf components to replace all that bespoke glue code in your application. So, you know, that was the same diagram. Yeah, it is. The difference is that you can throw out the custom glue code, because there's an off-the-shelf component that lets you adopt another delivery protocol without actually changing the contents of your event. So what does this do?
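To make that concrete, here is a minimal sketch of the "after" picture using the Go SDK (the v2-style API from github.com/cloudevents/sdk-go/v2); the event type, source, and target URL are placeholders, and the receiver is assumed to use the SDK's default HTTP port:

```go
package main

import (
	"context"
	"log"
	"time"

	cloudevents "github.com/cloudevents/sdk-go/v2"
)

// receive is the consumer-side business logic: it only ever sees the
// decoded event, never the wire format it arrived in.
func receive(ctx context.Context, event cloudevents.Event) {
	log.Printf("got %s from %s", event.Type(), event.Source())
}

func main() {
	// The SDK plays the role of the off-the-shelf glue described above;
	// the HTTP protocol details live inside the clients.
	sender, err := cloudevents.NewClientHTTP()
	if err != nil {
		log.Fatalf("failed to create sender: %v", err)
	}
	receiver, err := cloudevents.NewClientHTTP()
	if err != nil {
		log.Fatalf("failed to create receiver: %v", err)
	}

	// Consumer side: start listening (port 8080 by default in this SDK).
	go func() {
		if err := receiver.StartReceiver(context.Background(), receive); err != nil {
			log.Printf("receiver stopped: %v", err)
		}
	}()
	time.Sleep(time.Second) // crude wait so the toy receiver is up

	// Producer side: the business logic only fills in the envelope and payload.
	e := cloudevents.NewEvent()
	e.SetID("0001")
	e.SetSource("example/producer")   // placeholder
	e.SetType("com.example.greeting") // placeholder
	if err := e.SetData(cloudevents.ApplicationJSON, map[string]string{"msg": "hello"}); err != nil {
		log.Fatalf("failed to set data: %v", err)
	}

	// Where the event goes is configuration, not business logic.
	ctx := cloudevents.ContextWithTarget(context.Background(), "http://localhost:8080/")
	if result := sender.Send(ctx, e); cloudevents.IsUndelivered(result) {
		log.Printf("failed to send: %v", result)
	}
}
```

The point of the sketch is that the business logic on both ends only touches the event itself; which protocol carries it and where it goes is configuration.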
It potentially lets you write code once, because if you get that independence, where I want to talk about what's inside the envelope rather than the envelope itself, and you bring in this off-the-shelf code, now we can pick several protocol libraries. So now we're sending over both HTTP and Kafka. Maybe there's some magic in our business logic to select which one, or maybe we send on both, or there's some field in the envelope itself that lets you do the selection. Now I can write my event consumers and use that off-the-shelf glue to consume those events, and I don't really think about the exact protocol used to get them to me; I just care about the content of the data. So that becomes trivially possible.

So okay, what does a CloudEvent look like? This is for HTTP; for others it depends on what kind of broker or binding you're talking about, but essentially this is maybe the most Amazon-ish take: a JSON envelope with a bunch of metadata fields and then a new data attribute where all of your content goes if it's JSON. And there's a datacontenttype attribute that describes the content type of the data.

So that's interesting, but you might say, well, I don't actually want all of my consumers to have to unwrap a new envelope; I'm already using JSON in my production cluster. And we said, yeah, that is also a good idea. So we have a binary version of a CloudEvent, where we take all of that envelope metadata and bump it up into the headers, and don't actually mutate the payload you have today. So if you choose HTTP binary, the conversion from what you're doing today to a CloudEvents-compatible thing is just adding some metadata in the headers, and you're off to the races. Now you get to participate in that off-the-shelf ecosystem of event mediators and delivery mechanisms and SDKs and all this stuff.

Okay, so this envelope has four required fields that you need to produce: specversion, type, source, and id. We have some extra defined optional ones, and then you can add your own, and there are a couple of rules about what those should look like that are in the spec and that I don't really want to go over here. An example would be a GitHub webhook: we have the spec version, which is how we decode the CloudEvent itself, then some of the metadata for GitHub specifically, and then the data would be the webhook payload, the whole thing.

Interesting side note: I was helping the Tekton Triggers team when they were thinking about how they should consume webhooks and things, and I convinced them to rely on binary CloudEvents, because then they could opt into other things like Argo or Knative Eventing and get all that infrastructure around how events get routed and delivered to the trigger that continues the Tekton pipelines, instead of inventing some proprietary scheme and their own standard. Now their endpoint consumes CloudEvents as binary HTTP blobs, but it also consumes direct GitHub webhooks, because they can choose to ignore the CloudEvents headers or not. For their particular case it worked out nicely, because they can do both: binary mode doesn't change the message very much.

Okay. Hopefully this looks interesting to you, being able to do transport-agnostic conversions between events. Like I said, we have off-the-shelf components that help you do those adaptations in different languages, and for the transports we have what we call protocol bindings.
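For reference, here is roughly what those two HTTP content modes look like on the wire, per the CloudEvents HTTP protocol binding; the event values are invented, but the ce- header prefix and the application/cloudevents+json media type come from the spec:

```
# Binary content mode: metadata goes into headers, the payload is untouched
POST /webhook HTTP/1.1
Content-Type: application/json
ce-specversion: 1.0
ce-type: com.example.repo.push
ce-source: https://example.com/my-repo
ce-id: 1234-5678

{"ref": "refs/heads/main"}

# Structured content mode: the whole envelope is the body
POST /webhook HTTP/1.1
Content-Type: application/cloudevents+json

{
  "specversion": "1.0",
  "type": "com.example.repo.push",
  "source": "https://example.com/my-repo",
  "id": "1234-5678",
  "datacontenttype": "application/json",
  "data": {"ref": "refs/heads/main"}
}
```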
A protocol binding works like this: you have the event, and we can call it the canonical event. A binding is how to take that event and convert it into a specific protocol, HTTP, AMQP, MQTT, those kinds of things, send it along that open standard, and then on the other side consume a message, inspect it, figure out that it's a CloudEvent, and pop it back out into the canonical form again. That's how you get the code independence. Now your business logic can depend on your language's native representation of the canonical event, everything that happens in between should be lossless, and you just get to deal with the event itself.

Okay. So that's kind of what we had. We shipped the CloudEvents specification 1.0. I think we've had one update since then for some minor typos and whatnot, because they let me write some stuff, and there were some clarifications around what you should do, how extensions work, et cetera.

So what have we been working on lately? Recall this picture: we got to the place where we can still have that pub/sub model, but now we can replace the event queues and the event mediators with off-the-shelf components, because we can both produce and consume CloudEvents and we can plug and play those pieces, with some caveats, of course. But there are still some problems, right? Like, I have a whole library of event producers, and it would be really interesting if I could look at one and say: can you please tell me what kinds of events you produce and what the outer envelope shape is? I need to know what kind of mail slot I need to make, or whatever. You can think of this as the event version of what an OpenAPI document is. OpenAPI is great because you can ask the host, what's your REST interface? It's really difficult to do that with eventing and event systems, because there isn't a standard yet. There are some attempts. We're attempting to make a discovery API that makes that interface very common; it's a REST interface, and I'll show you a little bit.

Well, once you know what something produces, we can point you to the mechanism where you can ask that producer: I would like to subscribe to your events, please. And there are proprietary details and actual limitations on what the producer is going to do behind that, right? It could be through an intermediate queue or it could be a giant data warehouse; we're trying to leave room for interpretation there. But it's a common API to let you ask those producers, or something on their behalf: I would like to be subscribed to some set of events that you're going to produce.

And then there are a bunch of required and optional fields. One of them is dataschema. Dataschema describes what's inside the payload. So kind of like OpenAPI shows you what an API responds with, and if you're going to do a POST, what shape it expects and what it returns. Same deal, right? What's inside the envelope, so that I could potentially do code generation and get a function that operates on just the core kernel piece of the event payload.

So it's a little crazy, but currently the working group is working on these three specifications, and I'll show you a little hint of what they look like. We're trying to make it simple. So we think about services; naming's hard, so there are only so many names in the naming field that you can use.
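Purely as a hint of the kind of answer the discovery API is aiming for (the spec was still a draft at this point, so every field name below is a placeholder I've made up rather than the actual schema), a discovery response might look something like this:

```json
{
  "services": [
    {
      "name": "github-adapter",
      "events": [
        {
          "type": "com.example.repo.push",
          "dataschema": "https://schemas.example.com/repo-push.json"
        }
      ],
      "subscriptionuri": "https://events.example.com/subscriptions"
    }
  ]
}
```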
But the idea is you can query for the services that this thing produces, and from a particular service we'll get you back a set of events with their payloads and a bunch of other things, plus a pointer to the subscription API, which then lets you list your subscriptions, create them, and edit them. And then we have a much larger, much more chatty API for the schema registry, and that's really about: I need to get the subcomponents of the schema, the little artifacts and cookie crumbs and that stuff. So these three things together kind of help you... let's see... yeah, oh, sorry, the schema registry. In practice it can look like this (on a slide that I didn't update). The schema registry is basically JSON Schema data, similar to what you'd get from OpenAPI. That's not to say CloudEvents only supports JSON: you can send XML if you want, or you can send a binary blob, or you can send an image and decorate it with some headers.

So, okay, the future. If we go back to that big picture, we've thought about the connection between the event queue and the event mediators, or the event consumers and the queues. We've thought about how you find those producers. We've thought about how you might be able to ask those producers to deliver you stuff. But still on the table are things like: if we're in Kubernetes, for example, what does the shape of the container, or the contract from the data plane, look like to deliver those things? So that you might be able to just focus on functions, because in the serverless space folks are very excited about just writing code and having it wrapped up in some magic thing. Maybe there's room there to write a standard expectation, so that I can just write code and understand how it's going to get wrapped up into a container and standardized in some kind of runtime. We haven't started working on this yet.

There's also security: we haven't really worked on CloudEvents-specific point-to-point security, because the brokers kind of give you some of that already. Based on which underlying technology you pick, you could do mTLS with HTTP if you want, but that's set up out of band of the CloudEvents spec. You should do it if it's required for your application, but we don't give you a way to do it yet. And there's probably stuff that we haven't thought about, and that's where we could use more community involvement.

I think, let's see. So if you would like to join us, we meet Thursdays at these times. For me, it's 9 a.m. It's coffee time. You can check out our website at cloudevents.io. You can buy sweet swag in the CNCF store. And of course we do stuff on GitHub. In the GitHub org there are repos named sdk- and then the language; there's a whole bunch of SDKs. Those also need love and attention. I personally help maintain the Go SDK, and it's a lot of fun and a lot of work, and there are a lot of interesting questions that come in, but the rest of the languages also need attention. And of course there's the spec repo where we're writing these specifications, and we go out and produce integrations. One thing I'll add: I said we're working on these three specifications. At this moment, what we're working on is demo integrations using CloudEvents, the discovery API, a subscription API, and a registry, to tie it all into a cohesive story with demos and examples and stuff.
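In the same spirit, a create-subscription request against that API might look roughly like this; again, the subscriptions spec was a draft, so these field names are illustrative placeholders rather than the real schema:

```json
{
  "protocol": "HTTP",
  "sink": "https://consumer.example.com/events",
  "filter": {
    "type": "com.example.repo.push"
  }
}
```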
So if that sounds really interesting, what we're looking for is folks to read the specifications and produce one or all of the implementations from the definitions in your own way, as a kind of test: can you interpret this the way we're expecting folks to, and does it limit you in any way? Because this is a CNCF project, it's totally vendor neutral and open standards, and if there's something that you can't do, we really want to hear about it. So that is the update for the serverless working group, and I think we have some time for questions.