All right, welcome everybody. Today Clemens and I are going to be talking to you about CloudEvents, in particular the status of the CloudEvents project and where we're headed in the future. Hopefully we'll have time for questions at the end, so let's go ahead and jump right into it, because we have a lot of material here. First of all, a quick agenda: I'll give you a quick update on where we are with the CloudEvents project, and then very quickly jump over to some of the new work that we're doing.

So let's start with CloudEvents itself. I'm not going to go into too deep detail here, but for those of you who don't know what CloudEvents is: it is a specification for defining common metadata for events, and where that metadata appears in the messages that are transporting those events. It seems very simple at a very high level, and it is, but we're really doing this mainly to aid in the delivery of events from point A to point B. This is not about defining yet another common event format or anything like that; I'll show you an example in a minute. This is simply about aiding in the delivery of events across middleware to their final destination, and most importantly, about enabling people to do that without having to understand or parse the business logic of the event itself.

OK, so let's jump into a quick example to show you what that actually means. Let's say you have this event flowing over HTTP. Nothing in here is too special; it's a normal HTTP request. In order to turn this into a CloudEvent, though, you add a couple of extra bits of metadata, as HTTP headers in this case, and you can see it's just four little bits of metadata. These are the only four that are required: the specversion, which tells you the CloudEvents spec version in use; the type of the event, which tells you, for example, whether it's a create versus a delete kind of event; the source, meaning what entity sent out the event; and then just a unique identifier. OK.
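As a rough sketch of the binary mode just described (the `ce-` header names follow the CloudEvents HTTP protocol binding; the specific type, source, and id values are made up for illustration):

```python
# Binary-mode CloudEvent: the business payload stays in the HTTP body,
# and the four required CloudEvents attributes travel as ce-* headers.
REQUIRED = ("ce-specversion", "ce-type", "ce-source", "ce-id")

def make_binary_headers(event_type, source, event_id, specversion="1.0"):
    """Return the minimal header map that turns a plain HTTP message
    into a binary-mode CloudEvent."""
    return {
        "ce-specversion": specversion,       # which CloudEvents spec version
        "ce-type": event_type,               # e.g. create vs. delete
        "ce-source": source,                 # the entity that emitted the event
        "ce-id": event_id,                   # unique id within that source
        "content-type": "application/json",  # the body itself is untouched
    }

def is_cloudevent(headers):
    """Middleware-style check: route on the metadata, never parse the body."""
    return all(h in headers for h in REQUIRED)

headers = make_binary_headers("com.example.object.create", "/my/service", "1234")
```

The point of the sketch is the middleware perspective: a router only needs `is_cloudevent` plus the `ce-type` and `ce-source` values to make a delivery decision.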
Now, obviously, looking at that, it doesn't seem too exciting. But with that basic information, middleware can now route the message appropriately to make its way to its final destination, much in the same way the HTTP headers are doing that for the HTTP layer. As I said, it's actually a very, very simple concept, but with that one little extra bit of information, middleware can now be eventing-agnostic in terms of understanding the business logic. All it needs to do is look for these common bits of metadata to route the event appropriately. It's a very simple concept, but we're hearing lots of kudos from the community about how it's making their lives easier, because they no longer have to specialize their middleware for every type of event that flows through the system. Now, obviously, the final destination still needs to understand the event and its business logic to get its job done, but in terms of routing, this is the bare minimum of information that people need.

Now, this example right here is what we call the binary format: it just adds a couple of HTTP headers, so your original message remains basically unchanged. However, some people wanted to have everything encapsulated inside the body, and for those cases we defined some syntax, for example here in this JSON version, where we put everything into the body itself. But you can see it's the exact same data, right?
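To make the contrast concrete, here is a sketch of the same event in structured mode, where metadata and data are folded into one JSON body and the content type changes to mark the envelope (the attribute names follow the CloudEvents JSON format; the payload is invented for the example):

```python
import json

# Structured mode: the CloudEvents attributes and the business data are
# carried together in one JSON document, and the HTTP content type is
# changed to identify the body as a CloudEvent envelope.
STRUCTURED_CONTENT_TYPE = "application/cloudevents+json"

def to_structured(event_type, source, event_id, data):
    envelope = {
        "specversion": "1.0",
        "type": event_type,
        "source": source,
        "id": event_id,
        "datacontenttype": "application/json",  # describes the data field
        "data": data,                           # the business payload
    }
    return STRUCTURED_CONTENT_TYPE, json.dumps(envelope)

ctype, body = to_structured("com.example.object.create", "/my/service", "1234",
                            {"name": "widget"})
```

Same four required attributes, same payload; only the packaging differs from the binary example.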
You've got the same four pieces of metadata, the content type of the data attribute, and the data attribute itself, which can hold the business logic. It's all the exact same information. Now, the content type at the HTTP header level tells you that this is not just a normal JSON payload, but a CloudEvents JSON payload. That's how you distinguish between the left-hand side, which is just application/json, the binary mode, and the structured mode on the right-hand side, which is application/cloudevents+json.

OK, so that's it at a very, very high level. It's a very simple thing, we're getting lots of kudos about it, and lots of different people are picking it up across the industry. In terms of deliverables, this is the big news: we did go 1.0 fairly recently, so yay for that. In terms of what we're actually producing, we have several specifications, not just the core spec defining what the metadata is, but also how it appears in different formats, right? HTTP versus AMQP, that kind of stuff, and different encodings: I showed you the JSON one, and we also have Avro. We also included a primer, because there are a lot of technical decisions we made which don't really belong in a spec itself, but we want people to be able to understand why we made the decisions we made. So we created the primer as background for people to understand some of those decisions and design choices.

We also have SDKs out there in a whole bunch of different languages, which you can see on the screen. Most of them are very active; in particular the Go, C#, JavaScript, and Java ones are very, very active. So please take a look at those if you get a chance. They're not that complicated; they're mainly there to help you serialize and deserialize these CloudEvents. OK, so what's next for us?
Obviously, more customer feedback, now that it's out there. People tend to wait until things go 1.0 before they adopt them, so we're hoping to get more feedback, and we have been getting a lot of kudos so far, as I said. Beyond that, though, we're not just sitting back and waiting for that feedback. We are starting to look at what additional pain points the community has relative to the eventing space, not just for functions and serverless and things like that, but in general: what are the pain points people are experiencing? OK, and with that, let me turn it over to Clemens, who is going to talk about some of the additional work items we're doing, specifically aimed at addressing some of those pain points.

Yes, and for those we have two areas: discovery and the subscription API, and the schema registry, and I'm going to discuss both. What's important to note is that in the CloudEvents core specification, or really the set of specifications where we have transport bindings and encodings, we're mostly focusing on the delivery of CloudEvents. But that's just the end of the story, because before you can deliver a CloudEvent, you obviously have to indicate your interest in that CloudEvent, and you also have to find out who's actually publishing that CloudEvent. That's what we're tackling in this next round of specifications.

So the first element is how to discover which CloudEvents are available for subscription. Today, what you typically do is read documentation: you go to the documentation website and you find a list of events that are being raised. For that to be automatable,
we need a way to, you know, learn about services, be able to filter those services based on some criteria, and then learn which events those services expose; or, in reverse, knowing about some events that you can handle, learn which services in your vicinity, or by some other criteria, support those events. So the questions that we have are: who produces events, which events are produced, which subscription options are available, how do I get the events delivered to me, and where and how do I subscribe? Next.

What we've done here is, we're not very prescriptive, and that's a theme in CloudEvents overall. It's a principle that we're not prescriptive about how you should implement your service. There may be some reference implementations of these things down the road, but ultimately what we're defining here are interfaces. So we define, abstractly, a data model that defines, for instance, in this way, what a service is for discovery, and it also defines, leaning on the core CloudEvents specification, what a type is. Then, based on this, we define an HTTP and a gRPC API, which we have today in the drafts, and we might have further protocols, such as AMQP, later. So we define an interface, and when you implement that interface, you have a discovery service.

The notion of a service, which is the central concept inside the discovery service, is very simple.
It's just some software entity that emits events and gets registered in the discovery service. That service, since it emits events, maintains a subscription endpoint, and really what the service description does is enumerate the types of events that are available for subscription, with some further information. Then we have type collections; type collections are really for the reverse lookup of which services are available. This is an interface that can be implemented in one place or in multiple places, and it's also allowed to federate those discovery services. So you can really create a catalog of services and make that catalog available everywhere with the same interface. You can imagine having a local cache that sits somewhere near your consumers and makes those available. And of course, the discovery mechanism also allows the catalog to be adjusted to the circumstances near your endpoints. So if it's required to subscribe via a different subscription manager, and we're going to get to that in a second, to be able to deliver those events to your respective endpoint, then that sort of translation can also be done in that discovery model. It's not expressed explicitly, because the interface is kept very simple, but the flexibility is there to allow it. Next.

Once you have discovered which events are available, you then want to be able to subscribe to them. Again, today, in the base CloudEvents spec, that's something we've made a matter of out-of-band agreements. Some protocols, for instance AMQP or MQTT or Kafka, already have built-in facilities to subscribe.
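As an illustration of the discovery model just described, and not the draft's actual wire format, a federated catalog could look something like this, with each service enumerating its event types and subscription endpoint, and a reverse lookup playing the role of a type collection (all names and fields here are invented for the sketch):

```python
# Illustrative discovery catalog: each registered service enumerates the
# event types it emits and where consumers can subscribe. The field names
# are made up for this sketch, not taken from the draft specification.
CATALOG = [
    {
        "id": "storage-service",
        "subscriptionurl": "https://storage.example.com/subscriptions",
        "types": ["com.example.blob.created", "com.example.blob.deleted"],
    },
    {
        "id": "billing-service",
        "subscriptionurl": "https://billing.example.com/subscriptions",
        "types": ["com.example.invoice.paid"],
    },
]

def services_for_type(event_type):
    """Reverse lookup (the 'type collection' idea): which services
    in the catalog emit this event type?"""
    return [s["id"] for s in CATALOG if event_type in s["types"]]
```

Because it's just data behind an interface, the same catalog could be served centrally, federated, or cached near the consumer.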
So if you're designating as a subscription manager a queue, or rather a topic, inside an event message broker, then it's effectively implied what that subscription protocol is, if you're using AMQP for instance. But other protocols, HTTP for instance, don't have a built-in subscription notion. And even though those subscription patterns are fairly common with webhooks, it's something that we have not seen being sufficiently standardized. So we had to find a way to create a specification that acknowledges the existence of these existing protocols, like AMQP and MQTT, which have built-in subscription notions, and at the same time adds an ability for protocols that don't have that, like HTTP, to also let you perform a subscription gesture. That's why we wrote the subscription API. The subscription API specification acknowledges, or enumerates, the subscription abilities that exist in those other protocols supported by CloudEvents, and then explicitly introduces an API, which can be implemented using HTTP, or gRPC, or any other protocol that specifically needs it, to effectively manage subscriptions. And for that we've introduced the notion of a subscription manager. Next.

So the subscription manager is the one that implements the subscription API, and the subscription manager might act on behalf of itself, meaning it may really be the entity that emits those events, but it may also act on behalf of others. You see that very often in larger setups, where you have very many producers, and those many producers produce events into a middleware of sorts; then, if you are interested in events from a particular publisher or a group of publishers, you subscribe on that middleware on behalf of those producers.
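A toy sketch of the subscription-manager idea, with made-up names rather than the specification's actual API surface: consumers register interest in an event type together with a delivery sink, and the manager, possibly acting on behalf of many producers, either pushes matching events or queues them for pull:

```python
import itertools

# Illustrative subscription manager. Consumers subscribe with an event-type
# filter plus either a sink (push delivery) or pull mode, and the manager
# fans out published events accordingly. Not the spec's real API.
class SubscriptionManager:
    def __init__(self):
        self._ids = itertools.count(1)
        self._subs = {}    # id -> (event_type, sink, mode)
        self._queues = {}  # id -> events buffered for pull delivery

    def subscribe(self, event_type, sink=None, mode="push"):
        sid = next(self._ids)
        self._subs[sid] = (event_type, sink, mode)
        self._queues[sid] = []
        return sid

    def publish(self, event):
        for sid, (etype, sink, mode) in self._subs.items():
            if event["type"] != etype:
                continue
            if mode == "push":
                sink(event)                      # manager-initiated delivery
            else:
                self._queues[sid].append(event)  # held until the consumer pulls

    def pull(self, sid):
        events, self._queues[sid] = self._queues[sid], []
        return events

mgr = SubscriptionManager()
received = []
mgr.subscribe("device.telemetry", sink=received.append)   # push style
pull_id = mgr.subscribe("device.telemetry", mode="pull")  # pull style
mgr.publish({"type": "device.telemetry", "data": {"temp": 21}})
```

The two subscriptions in the usage lines preview the push and pull delivery styles: the first receives the event immediately, the second only when it asks.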
One of the obvious examples here is, of course, IoT. We sometimes have thousands, or hundreds of thousands, of devices pushing into a cloud endpoint, and if you're interested in specific events emitted by those devices, you would not subscribe to every single device; you would really go to the subscription manager, which has the pool of events, to pull out the events that you need.

For the subscription manager, as I said, we're enumerating the existing mechanisms of existing protocols, and we have defined this HTTP API, an API abstraction, to help with the cases where such a mechanism is not available. We also have two delivery styles, push delivery and pull delivery, so we're distinguishing between those two. Typically, for CloudEvents as we've defined it today, delivery is mostly assumed to be push, which means the producer, or the subscription manager acting on its behalf, pushes those events by establishing a connection and sending the events along. But the definition here now also allows for a pull delivery style, where you effectively have the subscription manager maintaining a queue, for instance, on behalf of the producer. So both of those things are possible. Next.

So those were effectively complementing the mechanisms that we have today in CloudEvents by, you know, closing the loop. We have delivery, which is defined now, and now we have discovery and subscription that we're adding to it. A really important further element is the schema registry. Next.

Every CloudEvent can carry a payload with event details.
Mostly that's some form of structured data, and structured data, if you're sending it to another party, will often require that other party to be able to validate whether it is correct, based on syntactic rules that can be expressed in a schema. And then there's often also a need for serialization, where you want an in-memory data structure to be serialized out using an efficient format. Those efficient formats often leave the structural metadata out. You're familiar with what JSON looks like: JSON is very repetitive and puts all the metadata elements and type information into the document itself. There are a number of far more efficient serialization formats which don't do that; they keep that information outside, in schema documents. But then, once you use those, the question is: where do you put those documents?

So the goal of the schema registry is to let you store these documents and access them in a consistent way, so that you can build software elements, a serializer and a validator, that can lean on those schemas, and on hints that come with the event, and can then serialize or deserialize that structured data. And the goal is for this to be project-neutral and vendor-neutral, so that it works for CloudEvents but also works for other messaging and eventing infrastructures, because we often see that things get born as CloudEvents but then get forwarded through other messaging infrastructures as well. So we don't want to constrain this to the CloudEvents case. Also, CloudEvents in the binary format is just using a message payload like any other messaging or eventing use case would, so it simply doesn't make sense to constrain the registry to CloudEvents use cases.

Next. So that's one of the principles that we have, at the bottom, what I just said: it should be scenario-neutral.
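A small, contrived illustration of the serialization point above: JSON repeats every field name on the wire, while a schema-aware binary encoding can send just the values, because both sides share the schema out of band (the record, field order, and integer encoding here are all invented for the sketch):

```python
import json
import struct

# Why schema-externalized formats are compact: JSON carries the field
# names in every message, a binary encoding with a shared schema does not.
record = {"temperature": 21, "humidity": 40, "pressure": 1013}

as_json = json.dumps(record).encode()

# With a shared schema, both parties agree that the fields arrive in this
# order as three little-endian unsigned 16-bit integers: no names on the wire.
SCHEMA_FIELDS = ("temperature", "humidity", "pressure")
as_binary = struct.pack("<3H", *(record[f] for f in SCHEMA_FIELDS))

def decode(blob):
    """Deserialize using the out-of-band schema to restore the field names."""
    return dict(zip(SCHEMA_FIELDS, struct.unpack("<3H", blob)))
```

The binary form is a handful of bytes against several dozen for the JSON form, which is exactly the trade-off that makes a shared place to keep the schema documents necessary.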
It should also be protocol-neutral. So the registry data model is abstractly defined, and the HTTP binding that we have, for storing and retrieving message schemas via HTTP, is well defined right now with an OpenAPI document. But the registry per se, the data model, is abstractly defined and allows protocol bindings, and we certainly anticipate having an AMQP binding for this, and hopefully more bindings. That somewhat depends, because it's a request/response model, on the capabilities of the respective protocols, but gRPC is also certainly in the cards. And of course we want to keep this as simple as possible. We don't want to turn this into a massive metadata store with super-powerful capabilities; there's no goal here to rival the capabilities of Apache Atlas or something like that. Really, you should be able to implement this registry API over a plain file system or a cloud blob store. It's just there to store and manage those schemas in the simplest possible way, while providing the core capabilities we need. Next.

So this complements the event delivery model that I just talked about, by allowing the producers, or someone on behalf of the producers, to manage and validate the schemas, and then to really think about the data field in the CloudEvent and how it can be serialized and deserialized. The model here, what's in green, works for CloudEvents as we have defined it, but also works for other eventing scenarios. So this is for you to visualize what this is about: it's really for serialization and deserialization, or validation, on either side, and it pertains to the data element that sits inside the CloudEvent. Next.

Finally, the structure of this schema registry.
We've structured this such that there is a notion of groups. A schema registry is split up into groups; those groups can be by application or by some other criteria. They're also there as an anchor for access control, so you may want to limit access to schemas by group, because they may carry important secrets and you don't want to make them accessible to everybody. Then, within that, you have schemas, which really are containers for sets of schema documents that represent the same data structures. And then, of course, those schemas evolve, so the documents are really the leaves of this: you have various schema versions, starting with schema version one. If you're adding fields, or making fields obsolete without removing them, then you're still within the same backwards-compatible line of schema generations, and that's where you simply add schema versions. We have some rules for how to add and manage those schemas. It's a very simple structure for managing, effectively, schema documents. Next.

And that's where we are. We will take some live questions at the end of this presentation. If you want to learn more about CloudEvents, go to cloudevents.io. Our specification repository is on GitHub at cloudevents/spec; that's where you'll also find the latest versions of all of these things. We also have weekly calls, Thursdays at 12 p.m. US Eastern time, or 18:00 Central European time, and the dial-in information is in the repo. You can also follow Doug and myself on Twitter, or send us email if you have any further questions.

All right, cool. Thank you, Clemens. All right, thank you everybody. We'll stop recording here and take questions live. Thank you all.