Hi, and welcome, everybody, to "CloudEvents: Looking Beyond Event Delivery." I'm Remy Cachaud, and I work for Nixio, where I'm responsible for the security and compliance program. Like every security person, I like to collect events from all systems: GitHub for commits, single sign-on, VPNs. And all those events come in different formats, with different methods to retrieve them. So while working on this project, I was searching for a group that normalizes events, and I found one here in the CNCF: the CloudEvents group. Even better, they were all nice and welcoming.

So what is CloudEvents? Let's have a closer look. As you will find in our repository, there is a nice quote in the readme: events are everywhere; however, event producers tend to describe events differently. So CloudEvents defines a common set of metadata for events, and where to find that metadata inside the messages. It also defines bindings for different transports, like HTTP or Kafka. You can basically compare CloudEvents to HTTP: just as HTTP rides on TCP or UDP, CloudEvents rides on transports like HTTP or Kafka. In this session, we will see what CloudEvents looks like in detail, what the community provides to you, and a little bit of history on the project. Then we will focus on interoperability with the discovery API, the subscription API, and the schema registry.

So let's see what it looks like. Here on the left, we have an HTTP structured-mode event, where everything in blue is the context and metadata defined by CloudEvents. We see the spec version, 1.0 (we don't put the patch version in this attribute), the type of event, the source, and the ID, which together are the four required attributes, and then a data content type. Finally, inside data, you'll find the actual event sent by the system, with all its specific attributes. As we support different transports, we can also look at the HTTP binary mode, where all the metadata is transferred inside the headers of the HTTP request and the payload contains only the content of the event.

So let's look at the metadata. Four attributes are required and four attributes are optional. Among the required attributes, we have the ID, which is unique per source. The source is a reference to the system; there can be one or several producers behind a source, but since source plus ID must be unique, multiple producers need a way to ensure the ID is unique among them. The spec version (without the patch version) and the type of the event complete the four required attributes. The optional attributes are the data content type, which is basically the MIME type of the data; the data schema, which defines the structure of the content; the subject, where the producer can put whatever identifies what the event is about; and the time.

Let's look at what this could look like if it were a GitHub event. In our case, the spec version would be 1.0. The type, if we create a branch, would be com.github.create.branch. The source would be the URL of our repository, in our case cloudevents/spec. The ID is a unique ID generated by GitHub; in this case, it's just a normal UUID. The data content type would be application/json, and the subject is the commit reference. Finally, in data, you would find all the attributes you would have found in any GitHub event. The sketch below shows how this could look with the Python SDK.
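To make this concrete, here is a minimal sketch using the CloudEvents Python SDK (the cloudevents package). The GitHub-style type, repository source, UUID, subject, and payload are illustrative assumptions, not something GitHub actually emits as CloudEvents today; the SDK calls themselves are real.

```python
# A minimal sketch using the CloudEvents Python SDK (pip install cloudevents).
# The type, source, ID, subject, and payload below are illustrative stand-ins
# for what a GitHub branch-creation event might carry.
from cloudevents.http import CloudEvent, to_binary, to_structured

attributes = {
    "specversion": "1.0",                             # major.minor only, no patch version
    "type": "com.github.create.branch",               # reverse-DNS type of the event
    "source": "https://github.com/cloudevents/spec",  # reference to the producing system
    "id": "8d5bdf57-0f54-4b5c-9c4f-2d9e2f6e7a11",     # unique per source (hypothetical UUID)
    "datacontenttype": "application/json",            # MIME type of the data
    "subject": "refs/heads/main",                     # hypothetical commit reference
}
data = {"ref": "refs/heads/main", "ref_type": "branch"}  # the producer's own payload

event = CloudEvent(attributes, data)

# Structured mode: all metadata travels inside the JSON body.
headers, body = to_structured(event)
print(headers)  # {'content-type': 'application/cloudevents+json'}
print(body)     # b'{"specversion": "1.0", "type": "com.github.create.branch", ...}'

# Binary mode: metadata moves into ce-* HTTP headers; the body is only the data.
headers, body = to_binary(event)
print(headers)  # {'ce-specversion': '1.0', 'ce-type': 'com.github.create.branch', ...}
print(body)     # b'{"ref": "refs/heads/main", "ref_type": "branch"}'
```

Structured mode keeps the whole event self-contained in the body, while binary mode lets HTTP-aware middleware route on the ce-* headers without parsing the payload.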
So what's in the box? What does the community provide? The SDKs include serializers and deserializers for CloudEvents, and at least the HTTP transport. There are SDKs for several languages: C#, Go, Java, JavaScript, PHP, Python, Ruby, and Rust. We also support several transports, like HTTP, AMQP, MQTT, Kafka, and NATS, and you will find all the format definitions, like the Avro definition and others, inside our repository.

A little bit of history: the project started in February 2018, and the first version, v0.1, came in April. In May, we were accepted as a CNCF sandbox project. In December, v0.2 was released, followed in 2019 by v0.3 in June and, finally, 1.0 in October 2019. Since then, we have worked on the interoperability specifications, including discovery, subscription, and the schema registry.

Interoperability is basically about how to consume the events. We already defined what an event looks like and what it contains, but we still need to discover the events that are emitted by our systems, we need a way to subscribe to those events, and then, ideally, one place to publish the schemas that define our events.

So what is the discovery API? The discovery API allows you to understand who produces the events of interest, what types of events they produce, which subscriptions they offer, and how you will be able to subscribe to those events. You can dynamically query the producer and the source to understand the events emitted. Why? Because there was a lack of standardization here. There are webhooks and the like, but each provider basically implements its own methodology, so you always find yourself digging into its documentation, and for each new system you have to dig into the documentation again. The discovery API of CloudEvents aims to solve this with one way of querying for the different events and getting back how to subscribe to them: no more hours and hours of documentation browsing because every system is different. The deliverables are the discovery API specification, which is basically an OpenAPI definition, and the HTTP/JSON mapping that tells you how to subscribe.

The logical follow-up to the discovery API is the subscription API: once you discover which events you want, you need to be able to subscribe to them. The subscription API is basically an OpenAPI definition of how to manage subscriptions to events. In the subscription object, as we see on the right side, you can define the different types you want to subscribe to, and you can also filter through different dialects defined in the specification. The sink is where you want the events to be pushed, along with the protocol and the settings that go with it. This enables automation of subscriptions, so combined with the discovery API, you can completely automate discovering and subscribing to an event and build more automated systems; a sketch of that combined flow follows below.

Finally, the schema registry. As we see here in a sample event, there is a data schema attribute, which defines the structure of what's inside data. That means we will probably need to be able to publish schemas in different formats and different versions. The schema registry aims to define how you publish and consume schemas, publicly or privately, through a registry; a second sketch below shows the idea.
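Since both of these specifications were still drafts at the time, here is only a hedged sketch of what the automated discover-then-subscribe flow could look like. The /services path, the subscriptionurl field, the filter shape, and all URLs are assumptions for illustration, loosely following the draft documents; check the spec repository for the current shapes.

```python
# A hedged sketch of combining the (draft) Discovery and Subscription APIs.
# Endpoint paths, field names, and URLs are assumptions, not the final spec.
import requests

DISCOVERY = "https://events.example.com/services"  # hypothetical discovery endpoint

# 1. Discover: ask the producer (or aggregator) which services emit which types.
services = requests.get(DISCOVERY, timeout=5).json()
target = next(
    s for s in services
    if "com.github.create.branch" in [e["type"] for e in s.get("events", [])]
)

# 2. Subscribe: POST a subscription object to the advertised subscription endpoint.
subscription = {
    "protocol": "HTTP",                           # how events should be delivered
    "sink": "https://consumer.example.com/hook",  # where to push them
    "filters": [                                  # a filter dialect from the spec
        {"exact": {"type": "com.github.create.branch"}}
    ],
}
resp = requests.post(target["subscriptionurl"], json=subscription, timeout=5)
resp.raise_for_status()
print("subscription created:", resp.json().get("id"))
```

And a sketch of the schema registry idea, again with a hypothetical registry URL and a REST layout that is only an assumption about what the draft may standardize:

```python
# A hedged sketch of publishing and resolving a schema; the registry URL and
# the schemagroups/schemas/versions path are assumptions for illustration.
import requests

REGISTRY = "https://schemas.example.com"  # hypothetical registry
SCHEMA_URL = f"{REGISTRY}/schemagroups/github/schemas/create-branch/versions/1"

# Publish version 1 of a JSON Schema describing the branch-creation payload.
schema = {
    "type": "object",
    "properties": {"ref": {"type": "string"}, "ref_type": {"type": "string"}},
    "required": ["ref", "ref_type"],
}
requests.put(SCHEMA_URL, json=schema, timeout=5).raise_for_status()

# A consumer receiving an event whose dataschema attribute points at SCHEMA_URL
# can dereference it to validate or interpret the payload.
fetched = requests.get(SCHEMA_URL, timeout=5).json()
print(fetched["required"])  # ['ref', 'ref_type']
```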
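One design note on the pairing: the dataschema attribute in each event and the registry URL scheme only have to agree on dereferenceable URLs, so producers can version schemas independently of the events that reference them.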
Finally, let's come back to interoperability. With what we saw, we can build different types of topologies. The first is pretty simple: the consumer sits next to the producer, the producer exposes both the discovery and subscription APIs, and the consumer just uses those APIs and retrieves the CloudEvents over the normal network.

Then there is the aggregator topology. We can have a system that aggregates CloudEvents from different producers and exposes the same discovery and subscription APIs, so as a consumer, you can just go through the aggregator and retrieve any event known to it. That's the simple aggregator use case, where everything can still sit on the same network.

The more complex one is the aggregator chain. In that case, aggregator two knows two producers and exposes a subscription and a discovery API. Aggregator one can consume aggregator two, even changing the transport along the way, going from HTTP to Kafka, and it also exposes one subscription and one discovery API. The consumer can then talk to aggregator one and see everything below it. It gets even nicer if we consider that aggregator two, and producers one and two, are in fact in company B, while in company A we have one aggregator that automatically links to all the events from company B. You can then expose events from company B to the internal consumers of company A without them having to know anything about how company B is organized, or even about the connection between aggregator one and aggregator two. This is a little more complex, but I think it's a good proof of the power behind CloudEvents and normalization, since a common way of subscribing to events has high potential to simplify communication between systems and between companies.

So that's it for our global overview of what CloudEvents is. We meet every Thursday at 6 p.m. European time, 12 p.m. New York time, and 9 a.m. California time; I'm in California, so the 9 a.m. is important for me. You can find more information on our website, and the spec repository is at the link below. Don't hesitate to join: everybody is really nice and welcoming. I've been with the group for a year, and I really enjoy all the talks and the knowledge this whole group shares. So thank you very much for listening to the end, and don't hesitate to ask any questions. Thank you.