All right, it's 20 past the hour, so we're going to get started. Hi, everybody. My name is Doug Davis. I work for IBM. I'm one of the co-chairs of the CloudEvents project, which is a CNCF sandbox project. Basically, this session is going to be a very quick overview of the history of CloudEvents, what it's all about, and where we're headed in the future after that. All right, so first of all, a little bit of history of how it got started. About mid-2017, the Technical Oversight Committee inside the CNCF decided they wanted to do a little bit of investigation into what's going on with serverless. It was a relatively new technology, and they weren't really sure whether they wanted to do anything with it at all. So they started up a serverless working group to do exactly that: do an investigation and produce a white paper with a set of recommendations. And the white paper did exactly what it says here on the screen. It is an overview of the technology and the state of the ecosystem. So what is serverless all about? How does it compare with other systems like FaaS, I'm sorry, not FaaS, Containers-as-a-Service, IaaS, and those kinds of things? What is the state of the ecosystem? What's going on out there relative to open source community activities, as well as proprietary activities? And then what are the possible next steps for the CNCF to actually take about this, if anything? That was all within the white paper itself. The second piece we produced was a landscape document, which is basically more like a spreadsheet that says what's out there today, both proprietary and open source, just so that people who want to get involved can see what they can choose from. We weren't making any recommendations or anything like that, just giving an overview of what's out there today. But what's important is the recommendation section more than anything else.
Because in there, we listed a couple of different things. A lot of it revolved around education for the community and stuff like that. But one in particular was: what can we do to help with people's pain points, or drive interoperability around serverless? And as a low-hanging-fruit type of thing, people realized that serverless is really event-driven technology. So is there anything we can do around the eventing space? And that's how the CloudEvents project got started. We were trying to figure out a way to make interoperability for the eventing infrastructure, the transmitting of events, a little bit easier for people implementing serverless or using serverless technology. So in December 2017, about six months after the serverless working group got started, the CloudEvents project got kicked off, but it wasn't actually made a sandbox project until May 2018. So before we start talking about what the actual CloudEvents project is going to be doing, let's level set a little here first from a terminology perspective. So first let's talk about: what are events? Obviously events are representations of an occurrence, right? Something happened, and something gets sent out to another system that says, hey, this thing happened. What happens with that is up to that other system, but that's all an event really is: a manifestation of the fact that something actually happened. So you've got an occurrence, it generates an event, and then on the receiving side, typically there's going to be some action that takes place as a result of receiving that event. Very simple kind of thing. So what is CloudEvents trying to do with this? Well, as you have these events flowing from system to system, you're living in a multi-cloud and a multi-service world, right?
You've got these events being generated, they're being processed by different people, they're going from cloud to cloud or even within a cloud, bouncing around different middleware, with lots of things going on just to process them correctly, okay? And unfortunately, everybody produces their own types of events, so there's no standardization, right? All these events get sent out and they all look radically different from each other, unless they just happen to copy one another, which doesn't happen nearly as often as you'd like. So what we're trying to do is make it easier to transmit events from the sender to the receiver, even if they go through a bunch of middleware along the way, okay? And we're trying to make it easier to do that routing or transmission or receiving without necessarily having to understand the business logic within the event itself, okay? So obviously the goals here are to try to foster interoperability, right? Facilitate interactions between these various platforms. And this is just the first step towards interoperability around serverless stuff; we'll get into some of that later. But first, how are we going to do this? We're going to define the minimal set of metadata to associate with an event, so that people can use that as a standard going forward: they can look for these particular bits of metadata in every event and not have to necessarily understand the rest of it. And I'll show that as an example in a second. But the most important thing here is we're not going to touch the business logic of the event itself, okay? This is not one of those common event formats where we're trying to get the entire world to standardize on what an event looks like and what metadata you need to fill in. We're not doing that, okay? We're just trying to help get your event, in whatever format you want, from point A to point B, and that's it.
And along the way, we're going to define some mappings to common protocols and serializations. I'll explain more about that in a second. So let's talk a little bit about this metadata that we actually defined. The first four you see there, id, source, specversion, and type, are the only four required bits of metadata that CloudEvents actually requires for every single event. These are the four we decided are the minimum you need to get an event from point A to point B and understand what the event is about and who it came from, so you can do very basic routing. So id: just a unique identifier. Give it a UID so you can tell one event from another and know whether it's a duplicate or not. Very simple. Source: where did the event actually come from? Basically it's supposed to be a unique URI that says who sent it. Very simple thing. Specversion: that's just the version of the CloudEvents spec itself, so it's pretty much going to be a static value. Type: this describes the type of the event. So for example, if this is a GitHub push event, the type might be com.github.push. Something very simple like that. Nothing too complicated, just the bare-bones minimum. Now, these other fields down here are extra bits of metadata that we thought might be useful but aren't required. So for example, you may want to know at a high level what the content of the business logic is, right? You don't necessarily understand what the business logic says, but you may want to know, for example, is it XML versus JSON? Because maybe you want to route it to different places based on the encoding type, stuff like that. But again, those are all optional. If they're in the message, you can use them if you want. The first four are the key ones that we really, really wanted to focus on.
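To make those four attributes concrete, here's a minimal sketch in Python of what they might look like, and how a receiver could route on them alone. The type, source, and routing logic here are hypothetical examples for illustration; they're not values or behavior the spec mandates.

```python
import uuid

# A minimal CloudEvent: only the four required context attributes.
# All concrete values below are made up for illustration.
event = {
    "specversion": "0.3",                      # version of the CloudEvents spec
    "type": "com.github.push",                 # producer-defined event type
    "source": "https://github.com/some/repo",  # who sent the event (a URI)
    "id": str(uuid.uuid4()),                   # unique ID for de-duplication
}

def route(event):
    # Route purely on the 'type' attribute; never touch the business payload.
    if event["type"].startswith("com.github."):
        return "github-handler"
    return "default-handler"

print(route(event))  # github-handler
```

The point of the sketch is the `route` function: it makes a decision using only the required metadata, which is exactly the interoperability the talk is describing.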
Now, transport bindings: that's obviously going to be how you serialize these things in particular formats such as JSON, XML, those kinds of things, and how you populate these things on the wire over MQTT, HTTP, that kind of stuff, right? We have to define not just the properties but where they're going to appear, so people know where to look for them. Okay, and again, don't forget, this is all about just helping a message get from point A to point B. That's it; it's really not that complicated. So let me show you a little example just to show you how simple this really is. Let's say you have a very simple event and we're sending it in what we call the binary format. Basically it's just a generic HTTP message that's getting sent. When you add the required CloudEvents attributes to this, you'll see a specversion; right now the CloudEvents spec is at 0.3, so it's a static value. The type: here is the string that identifies what this event is about, maybe a new item added to a database or something, and it's an event type that this organization, bigcode.com, has defined. Where did it come from? In this particular case, it came from bigcode.com/repo. And then just some unique identifier. That's it. That's the bare minimum to turn basically any HTTP message into a CloudEvent. Now, what's interesting about this is you may look at that and say, big deal, that's not much. And you're right, it's actually not that much, and that's actually the point. What I like to do is think about CloudEvents as the exact same thing as HTTP headers. HTTP headers really aren't that complicated, right? You have the path, where the request is going; the host value, who it's going to; and your content type. Nothing in there is really about the business logic per se. You could put stuff in there, but technically you don't need to, right?
It's really just the bare minimum metadata to get the message into the processor, so it knows what to do with it and can hand it off to the real business logic, which is going to understand the JSON down below, right? That's what CloudEvents is trying to do. It's trying to add that extra little bit of metadata to help the middleware or the receiver understand what to do with the event, to pass it on to the next step in the processing logic. That's really all it's doing, okay? So it's very, very simple. And that's actually one of its beauties, in my opinion. It is not trying to change the world. It's just saying: you have a message, sprinkle this data on it, and now you get a little bit more interoperability. Now, as I said earlier, this is the binary format. We also define how this stuff looks in other formats, like JSON. So for example, if for some reason you didn't want to add HTTP headers and you wanted everything to appear in the HTTP body, we have what we call the structured format. If you look at it, it's basically the exact same thing, right? You still have the exact same metadata over there. The only difference is we added a data property that holds the same body you saw before, right? So we just wrapped it. And of course we added the datacontenttype, which is application/json. But notice the HTTP body is now a CloudEvents JSON object, right? So all we did is wrap it. As I said earlier, we're not trying to define a global cloud event or a common event format for everybody to use. Some people are trying to use it that way, and you technically could, but that's not the point. This is just saying: if you want to serialize everything as JSON with the CloudEvents metadata, and you don't want the separation you have in the binary format, we do provide this for you if you really want it. Okay? And that's basically it. It's actually very, very simple.
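The two modes just described can be sketched side by side. All the concrete attribute values here are invented for illustration; the `ce-` header prefix follows the CloudEvents HTTP transport binding.

```python
import json

# Binary mode: CloudEvent attributes ride as "ce-" prefixed HTTP headers,
# and the body stays exactly whatever the producer wanted to send.
headers = {
    "ce-specversion": "0.3",                 # static: the spec version
    "ce-type": "com.bigcode.newItem",        # hypothetical producer-defined type
    "ce-source": "http://bigcode.com/repo",  # who sent it
    "ce-id": "a1b2-c3d4",                    # unique identifier
    "Content-Type": "application/json",      # describes the body, as usual
}
body = '{"item": "coffee cup", "size": "small"}'  # business payload, untouched

# Structured mode: the same metadata plus the payload wrapped into one JSON
# object that becomes the HTTP body; the Content-Type then describes the
# CloudEvents envelope rather than the business payload.
structured = {
    "specversion": "0.3",
    "type": "com.bigcode.newItem",
    "source": "http://bigcode.com/repo",
    "id": "a1b2-c3d4",
    "datacontenttype": "application/json",   # what the "data" attribute holds
    "data": json.loads(body),                # payload, simply wrapped
}
wire_body = json.dumps(structured)

# Either way, middleware can check the metadata without parsing the payload.
is_cloudevent = all(h in headers for h in
                    ("ce-specversion", "ce-type", "ce-source", "ce-id"))
```

Notice the two forms carry identical information; the only choice is whether the metadata travels in headers or in the body.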
And that's all we're actually trying to do: that bare minimum of metadata. So where are we as a project? I'm sorry, let me back up. First, let's talk about what was new in version 0.3. Version 0.3 was just released a week and a half ago or so, on June 13th. For those of you who know CloudEvents and are following it, the subject attribute is new. The source attribute describes who sent the event; the subject attribute describes the thing the event is about. So for example, let's say the source is a GitHub repo; maybe the subject is the PR, because a new PR got generated and that's the event. That's the relationship between the two. We renamed contenttype to datacontenttype because there was some confusion there, especially in HTTP, because people didn't understand the difference between the HTTP content type header versus the CloudEvents content type attribute. Adding the word data in front of it makes it clear this is about the content type of the data attribute itself, meaning the body. A lot of people don't like sending events that are very, very large. So we added an extension called dataref, so you can actually have a pointer to some external resource that carries the data. And then we do talk about how to do batching, sort of. Basically we punted and said a lot of transports already do batching; use the transport's batching, we're not going to define it ourselves. So that's what's new in CloudEvents 0.3. In terms of deliverables, a lot of it is already what I talked about. We have the base specification itself and three event formats: JSON, AMQP, and Protobuf. We also have a whole bunch of different transport bindings that tell you how this thing gets represented on the protocol: HTTP, AMQP, MQTT, those kinds of things. We do have a spot for what we call proprietary transports. That's actually something new.
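To picture the new subject attribute and the dataref extension before moving on, here's a hypothetical event using both. None of these values are official mappings; they're invented to show the shape of the 0.3 additions.

```python
# source vs. subject: source says who emitted the event (the repo),
# subject says the specific thing the event is about (the new PR).
# All values here are illustrative only.
event = {
    "specversion": "0.3",
    "type": "com.example.pull_request.created",
    "source": "https://example.com/org/repo",  # the emitting repo
    "subject": "pulls/42",                     # the PR the event is about
    "id": "evt-0001",
}

# dataref extension: instead of embedding a huge payload in "data",
# carry a pointer to an external resource that holds it.
big_event = dict(event, id="evt-0002",
                 dataref="https://example.com/blobs/evt-0002")
```

The split matters for routing: many events can share one source while each names a different subject.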
The other transports are not proprietary in the sense that there are many different implementations of them out there. There are some specifications, though, that even though they're done in the open, really only have a single implementation, so they're not really about interoperability. And we don't want to mix those in with the other ones that are about interoperability. So that's what the proprietary aspect of the transport binding specs means, if you're looking at that. We also have a primer that gives you some background on why we made some of the design decisions we made, and some of the thinking behind the specification itself that really isn't appropriate for a spec per se, but it's good non-normative reading. If you want to understand what we were thinking when we made certain decisions, that all goes into the primer. And finally, we have the SDKs. So if you don't want to have to generate the headers or the JSON yourself, we do have several SDKs in several languages. Take your pick. And we also define some extensions. For example, the dataref one I mentioned earlier is not part of the core spec; it's an extension to it. Okay. Now in terms of status, I did mention 0.3 came out recently. 0.3 is a very low number, but don't let that fool you. It's actually more like a 0.9. This thing's almost done. We are just very conservative in our numbering scheme. But everybody on the team basically says, you know what, let's wrap this thing up, and we're hoping to actually ship 1.0 within a matter of weeks, hopefully. That's the goal, anyway. We'll see what happens. We are also in the process of putting together a proposal to go from Sandbox to Incubator. Really it means nothing for us other than we get to leverage the marketing machine inside the CNCF, because they don't like to do a whole lot of promotion of Sandbox projects. At the incubator stage, we'll be able to do some more marketing, that kind of stuff.
In terms of adoption, kudos to Microsoft and, I guess, Serverless at about the same time; those two came out first with support for CloudEvents. SAP is another company that allows me to mention they've done it. We do have other people that have planned support for it; I'm just not allowed to mention their names quite yet because they haven't announced it. But in terms of open source support: Knative. How many people here have heard about Knative? Okay, a fair number of you. If you've played with the eventing side of things, they use CloudEvents under the covers. Every time an event comes into Knative, they convert it to a CloudEvent to send it on to the rest of the infrastructure. That way, when you do filtering or you're using triggers and stuff in the eventing infrastructure, or when it gets passed on to your eventual service, it's all turned into a CloudEvent. So they get standardization as it goes through the middleware, and you don't have to understand that this is a particular type of event and therefore look in different spots than for other types of events in order to do filtering, right? By turning it into a CloudEvent, you can have a single filtering mechanism inside Knative, and you don't have to understand the content of the event. And that's the whole point of CloudEvents. So we're very excited by Knative picking it up. And as I said, we do have other big cloud providers that are planning on supporting it. I'm just not allowed to mention their names quite yet, but there are some rather big ones out there. After 1.0, obviously, if there are additional work items we want to work on, we'll work on those. Hopefully they won't be spec-breaking changes, so they'll be minor increments. But most of the group will actually switch back to the serverless working group, and we're going to start looking at what to work on next in terms of interoperability. Workflows, for example, is one we're considering.
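That single-filter idea can be sketched in a few lines: once everything is a CloudEvent, one generic attribute-matching predicate works for every producer's events, with no payload parsing. This is my own illustration of the idea, not Knative's actual trigger implementation.

```python
# Generic filter over CloudEvent metadata only; the "data" payloads could
# be in any format (XML, JSON, anything) and are never inspected.
def matches(event, filters):
    return all(event.get(k) == v for k, v in filters.items())

# Two events from different (hypothetical) producers with different payloads.
events = [
    {"specversion": "0.3", "type": "com.github.push",
     "source": "https://github.com/a/b", "id": "1", "data": "<xml/>"},
    {"specversion": "0.3", "type": "com.example.order",
     "source": "https://shop.example.com", "id": "2", "data": {"sku": 7}},
]

pushes = [e for e in events if matches(e, {"type": "com.github.push"})]
print(len(pushes))  # 1
```

One filter function handles both events even though their payloads have nothing in common, which is the middleware standardization being described.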
Actually, we've already started the work on workflow. It just went on pause for a while while we try to finish up CloudEvents. But if you're interested in what you can do in terms of interoperability for other things besides CloudEvents, look at the serverless working group. And in fact, I'm going to talk about that later today at four o'clock in this room: what we're doing in the serverless working group. All right, that went faster than I expected. So we're going to do a demo. It's a little scary because I don't trust the network here, but it's an audience participation demo. So I need you guys to participate in this. If it fails, I'm going to blame you guys, not me. Just to let you know, though, we have done other demos in the past. The very first one was actually kind of cool. We had two different event generators, AWS Blob Store and Azure Blob Store, going through some middleware that sent events out to a whole bunch of different functions. Each of the functions would manipulate an image and then post it to Twitter. That was the Twitter one, yeah. So that was kind of cool. We had another one which did something similar; I'm sorry, no, that was another version of that one. Finally, we actually did a Mad Libs one. How many people here are familiar with the children's game called Mad Libs? No? Mad Libs. Basically, you put up a sentence and you leave blanks in it, so instead of certain words you see noun, verb, adjective, those kinds of things. What we did is we sent out an event that says, I need a noun. And all the different functions out there replied with either a noun, a verb, or whatever, and then the system would pick one (that's what the red means) and fill it in. And because they're random words, you would get funny sentences like this. So that was another thing just to demonstrate a little bit of interoperability using CloudEvents.
But today, what we're going to do for a demo is look at an airport ecosystem. An airport in many ways is almost like a small city. You have a lot of different participants involved, a lot of businesses going on, you've got customers, you've got a lot of infrastructure involved. And we've actually been working with the ACRIS semantic model from, I can't remember the name of the organization right now, but it's an airport consortium that's basically trying to look at the airport ecosystem and break it up into more of a microservice architecture. Obviously they're going to have events flowing around, and they're looking at leveraging CloudEvents to better enable that architecture, for this common event bus type of infrastructure they're building. So they're hoping CloudEvents can help them from an interoperability perspective. So our demo is going to be around an airport ecosystem. Now, get out your phone, scan the QR code, or go to that URL while I ramble for a minute. The point of the demo here is: on the bottom you're going to have some retailers, basically coffee shops, okay? Up top you're going to have suppliers. So when the coffee shops run out of coffee cups, the suppliers will send new boxes of coffee cups. On the right-hand side, right here, we have carriers, basically trucks. The trucks are going to go from the supplier to the retailer and then back home whenever new coffee cups need to be delivered. Kind of a hokey scenario, but it represents the idea of events being generated based upon different actions, and then people responding to those events. And notice you have different icons there, because every participant uses a different language and is from a different company. So it's not all implemented by the same person. On the very right-hand side, you see the list of CloudEvents as they get generated. So let me go ahead. Did everybody manage to get there? Sourcedog.com slash airport.
Yeah, it should say you're waiting, right? Okay, I don't want you guys to go too soon. So here we go. Let me just do a quick demo of what it should look like. Down here represents what you guys should see. So say I want to pick the IBM coffee shop; it goes there. I'm going to pick medium, or actually I'm going to pick small. I get my coffee and then I walk away. Now I get to go again, so let me do it again. Now what happens is it's still going to pick IBM coffee, but this time, I should point out, these little bars here represent the supplies inside. I think it's like one through three or something like that. When it gets down to one, IBM coffee is going to say, I need more cups, and you'll see a little pop-up there saying I need small cups or something like that, and then you'll start seeing things flying around the screen. If you wait too long, it kicks you out. Let me do that again. Hold on, IBM coffee. So let me do, whoops, something weird happened there. Hold on, now. Hold on, IBM coffee, go small. So I get my small coffee, go away. But notice a little bubble came up that says I'm out of smalls. We get a delivery, and the bar at the bottom got filled back up. Okay, so now, let's see if this works: you guys should now be able to do it. If you do it right, you should be able to flood the system, and yes, you can jump in. As supplies get low, you should start seeing a whole bunch of events, or trucks, go flying around, okay? Now, I know it's kind of goofy, but it needs to get you guys involved and actually awake, right? Oh, yeah, oh wow, you guys are doing well. Okay, let me just quickly click on one. While you guys are doing that, just to show you: this is a sample CloudEvent. This event actually says it's coming from a retailer called Storyscript, and this is its new inventory level. So he's back up to three. He sent out an event to the system to say how many cups he has, right? This is a sample CloudEvent, okay?
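The inventory event on screen probably looks something like the sketch below. This is a rough reconstruction with guessed type, source, and payload values, just to show the shape of a demo event.

```python
# Hypothetical reconstruction of the demo's inventory CloudEvent:
# a retailer reports that its small-cup stock is back up to three.
# Every value here is a guess for illustration only.
inventory_event = {
    "specversion": "0.3",
    "type": "com.example.retailer.inventory",  # guessed type string
    "source": "/retailers/storyscript",        # guessed source URI
    "id": "inv-003",
    "datacontenttype": "application/json",
    "data": {"size": "small", "level": 3},     # the new inventory level
}
```

The suppliers and carriers in the demo never need to know the retailer's payload schema; they react to the metadata and let the interested party read `data`.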
So the point of this is: the customer walks up, places an order, and an event gets sent. The customer's order is satisfied, and an event gets sent. The retailer runs out of cups, an event gets sent. The supplier says, all right, I'm going to satisfy the event that said we're running out of cups, so he sends out an event saying, I have a package to get picked up. The carrier responds saying, I'm going to go get the package, we're going to do the delivery, and he sends out another event. All these events are generated by different people in slightly different formats, but CloudEvents makes it interoperable, because they can all just look at the metadata to figure out how to receive each event and route it to the right infrastructure, which then processes the business logic and decides what to do next. And obviously something isn't quite going right, because there's a long line with nobody in service. I apologize; this is just demo code. I want to blame the network, but I don't know for sure. Anyway, that was the demo, just trying to quickly show CloudEvents actually in action. Again, you can actually see one of them, so, actually, it's just on the screen. Nope, it's not working, I apologize. Anyway, back to this. All right, so just to wrap it up: some useful links for the serverless working group, our parent project originally, and then some links about the CloudEvents project itself. We have our dedicated webpage, the GitHub repo, links to the SDKs. We do have weekly calls at noon Eastern time every week if you're interested in joining. Everybody's free to join; you don't have to be a regular member, just join and have some fun. And just a reminder, I do have a talk later today at four o'clock to talk a little bit more about the serverless working group. It's a little bit of a repeat, because I do summarize CloudEvents, but in just two slides; the rest of it talks about what we're doing in the serverless working group, in particular a little bit about workflow, if you're interested in that.
And with that, I'm basically done. Are there any questions? Yes. Oh, I'll try to repeat the question. Yeah, yeah. Right, so the question is: in order to get the interop we're looking for, you have to have some interop around the values of the fields that we define. Great question. Unfortunately, that's not something we're going to be tackling yet, because that's really business-processing specific, basically. What we're hoping to do, though, is get popular event producers, take GitHub as an example, to support CloudEvents natively. Because they already send out HTTP events, they'd just add three or four more headers and they're done. That way people can process GitHub events without having to understand that it's actually really a GitHub event. They just say, oh, GitHub events go over here. But to your specific question, I don't know if we can actually standardize too much around that, because every business application's type, for example, is going to look different. And so I'm not quite sure how you can standardize around that. GitHub events are going to say com.github.push; GitLab is going to be com.gitlab.merge-request or something like that, right? So there may not necessarily be much you can do in terms of interoperability there. The closest you can get is possibly to say everybody has to use a reverse-DNS name, something like that, so at least you can maybe figure out whether it's under GitHub, that kind of thing. But even then, I don't know. I just haven't looked at it yet, to be honest. If you have some ideas, I'd love to hear them. But we haven't talked about that yet, to be honest. Maybe a follow-up question. Yeah, there we go. The follow-up question, then, is: in the context of the serverless working group, would you be thinking about standardizing on some of those things? For example, if an event from the load balancer comes in, then I need to scale up or scale down my FaaS service. And that has to be standard across FaaS providers.
I think what's interesting is, while this isn't event specific, we've talked about doing things like trying to standardize function signatures, which is something similar, right? You're trying to standardize what's coming into you, or how the platform talks to the language. And there's a little bit of resistance to that. Not because it's not a good idea, but people thought it was a little bit too soon. So I think the idea you have is actually really good. I would love it, because I agree: it'd be great if, say, in your example, all load balancers emitted the same event, so everybody could react consistently. I would just worry that people would be a little nervous about doing it, because a lot of people think of standardization as handcuffing creativity, to put it that way, right? And we don't want that. So at some point, I would love to be able to do that. And I think if someone wants to propose that for the serverless working group, we can definitely consider it. I'm just giving you fair warning that people are a little nervous about things like that. But I think it's a wonderful idea, yes. Any other questions? Sure, we'll hear it again. Yeah, I just heard that CloudEvents supports the GitHub type, so I'm not sure if CloudEvents supports any other types, or can I define a custom type for the CloudEvent? When you say... Type. Do you mean GitHub supports CloudEvents, or CloudEvents supports GitHub? I see you can define the type in the CloudEvent. Yes. You say it's a GitHub type? Yeah, it can be anything. It can be anything. Yeah. So is it predefined, or... No, the event producer defines what goes in there. CloudEvents does not tell anybody what to do. We just define the metadata names; we don't define the values. So hopefully, if GitHub chooses to support CloudEvents, then they will define that when they send events the type will be com.github.push, com.github.new_issue, or something like that, right?
They will define what that string looks like, not the CloudEvents specification. So the event producer gets to define those values, not us. Okay. Does that answer your question? Yeah, yeah. Okay, thank you. Just to let you know, in the CloudEvents repo, you actually will see a pull request suggesting what some of these look like. We actually do have one for GitHub, one for GitLab, and one for a couple of AWS events, just as a suggestion to get some commonality. Because, for example, a lot of people may want to write adapters, say, to convert a GitHub event into a GitHub CloudEvent. Well, if you have multiple adapters out there, we don't want them all doing different things, filling things in with different values. We want some consistency. So we give some recommendations for what to do. You don't have to follow them, but we do give some recommendations to try to get some level of interoperability. And I actually have been trying to get the GitHub guys and GitLab guys to review it, and so far they've been okay with the direction we're going. So, let's see how it plays out. Yes, sir. I guess we cannot define a destination in the metadata. So I guess CloudEvents is suitable for the subscription model; it is not used for the point-to-point communication model, right? Well, yes and no. You are correct, and it's a good observation, that we do not have the destination in there. And that's for two reasons. One is we didn't want to duplicate what was already in the transport, right? In the HTTP case, it already has the host header, it has the path; we didn't want to duplicate that. The transports surely have destination information. That's one. But two, because often events will get passed around, right? Oftentimes they'll go to a piece of middleware, which looks at it and sends it to something else, or does some processing and sends it to something else, or maybe decides it doesn't want to do anything with it and sends it to the bit bucket, right?
We didn't want to have a hard-coded destination as part of CloudEvents, because that destination may only be good for that one hop, right? It's not going to be good all the way down the chain. So to your question about whether it's just for subscription versus point-to-point: you can use it for either one. We don't stop you from using it point-to-point. You definitely can. It's up to you to decide how you want to use it; we don't say yes or no. As long as you put the CloudEvents metadata in there, it's a CloudEvent. That's all we care about, okay? Any other questions? All right, in that case, I guess we're done a little early. Thank you guys very much.