 All right, hello and welcome. I hope you guys have had a great conference day so far. With me here on stage is Pierangelo Di Pilato. My name is Matthias Wessendorf. We both work at Red Hat, and we are maintainers of and contributors to Knative Eventing, as well as CloudEvents, and you will learn about both in this talk. The title is "Declarative event-driven application patterns with Knative Eventing." Now let's see what we have on the agenda today. First we do a little bit of an introduction to EDA and how that relates to Kubernetes, and what the options are there. Then the biggest part of this presentation is that you will learn about some practical patterns, well-established EDA patterns implemented with Knative Eventing, also making use of the CNCF CloudEvents standard that is part of Knative Eventing. And at the end, hopefully, we will have some time for questions and answers. OK, first let's take a look at the basics of EDA, forgetting Kubernetes for a moment. Event-driven architectures are a well-established thing. It's a paradigm that you find very often in microservice solutions. With events, you can trigger a microservice or your serverless application, and it will then either spin up in a serverless kind of ecosystem whenever it's invoked, or just run there. And usually EDA systems come with a set of typical building blocks. The first one is the event producers; those basically generate the events. In our example here on the slide deck, we have Kafka, MongoDB, or whatever. Now, from the event producer's perspective, there is another interesting component that's called the event router in the abstract EDA principles. The event router is basically used to ingest the events into the system. And to some degree, all EDA-style components have some sort of filtering for routing. 
 When a certain filter matches, it then forwards the events to their particular consumers. And the consumers, as the name implies, receive the events and consume them. You can leverage the incoming event payload, process it, and even return another event, and then other microservices could be triggered. The whole benefit of EDA-style applications is that you definitely have a loose coupling of your microservices, and you can focus on interoperability: you have one team that owns one service, and another service, in a different language, owned by a different team. They are loosely coupled there. So now let's take a look at Knative Eventing. Knative Eventing is a Kubernetes-native infrastructure piece that ships a lot of these building blocks from the established EDA patterns, all provided there for you. Under the covers, Knative Eventing uses the CloudEvents standard as the format to exchange events between the components. Are you guys familiar with the CNCF CloudEvents specification? Just a handful. OK, the benefit of CloudEvents is that it is not really a new API. For instance, talking about Kafka or HTTP, the body of the CloudEvent is the payload, and CloudEvents, in addition, introduce metadata, like specific headers that are defined. So CloudEvents on their own are interoperable and backwards compatible. If you have an HTTP CloudEvent, it's just some extra HTTP headers on your request; same with Kafka: with the Kafka protocol binding in the specification, the Kafka record value is the data, and Kafka headers are the representation of this metadata. So we use CloudEvents behind the scenes. Our event producers are called sources. Those are declarative CRDs. We also have the option that you can run your own container. At the end of the day, these are adapters into third-party systems. 
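As an illustration of how declarative such a source is, here is a minimal sketch of a PingSource that emits a CloudEvent on a schedule into a broker; the broker name, schedule, and payload are made up for the example:

```yaml
# Hypothetical example: a PingSource that emits a JSON CloudEvent
# every minute into a broker named "default".
apiVersion: sources.knative.dev/v1
kind: PingSource
metadata:
  name: hello-ping
spec:
  schedule: "*/1 * * * *"            # standard cron syntax
  contentType: "application/json"
  data: '{"message": "Hello, Paris!"}'
  sink:
    ref:
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: default
```

The `sink` field is a generic reference, so the same source could point at any addressable consumer instead of a broker.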
 And then they read the native format and ingest it into the broker API that we have. So the broker acts as the router for events, and it has a trigger API. The trigger does the matching: you define criteria on CloudEvent attributes or metadata that you have, like "I want to only process events that come from source X or Y or Z." And whenever the trigger applies and matches, the event goes to the sinks. A sink can be any application that works as a standard deployment with a regular Kubernetes service; that works fine here. We also have building blocks in Knative Eventing that offer pre-configurable sinks. For instance, we have a KafkaSink. When you use that, the incoming HTTP CloudEvent request is persisted as a native Kafka record, with the proper header sections, as I mentioned before, in a pre-configured topic that you can declaratively set inside the manifest for the KafkaSink. Now we will learn a little bit about patterns. All right, this is the list that we want to show you. Some are simpler, and we want to provide some insights into them, and some are more advanced. Hopefully you will learn something new. So the first one is the dead letter sink and retries. This is a pattern that you usually use because a consumer could fail, and failure is always an option. You want to provide an additional consumer that is going to receive an event when the original consumer fails. And the important part is that you need to think about how many retries to configure, so that the system retries the request that failed to be sent to the consumer. The particular configuration and numbers depend, of course, on the service that is receiving events, and on how long, for example, that service can stay down if something fails. 
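To make this concrete, a minimal sketch of a Trigger that combines a filter with such a delivery configuration could look roughly like this; the attribute value, service, and broker names are hypothetical:

```yaml
# Hypothetical sketch: a Trigger that only matches events from one
# source, retries failed deliveries with exponential backoff, and
# falls back to a dead letter sink (here another broker) when all
# retries are exhausted.
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: order-processor-trigger
spec:
  broker: default
  filter:
    attributes:
      source: /orders               # only events with this source attribute
  subscriber:
    ref:
      apiVersion: v1
      kind: Service                 # a plain Kubernetes Service works here
      name: order-processor
  delivery:
    retry: 5                        # number of retries before giving up
    backoffPolicy: exponential      # or "linear"
    backoffDelay: PT0.5S            # ISO 8601 duration for the base delay
    deadLetterSink:
      ref:
        apiVersion: eventing.knative.dev/v1
        kind: Broker
        name: dead-letters
```

How aggressive to make `retry` and `backoffDelay` depends, as said, on how long the receiving service may realistically stay down.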
 In this case, we have the retry number, the backoff policy and backoff delay, as well as the dead letter sink, which is a reference to, in this case, another broker, but you can reference any other sort of sink or event consumer that Matthias was showing earlier. What's also nice about Knative Eventing is that when the request fails to be consumed by a consumer, we add additional CloudEvents attributes to the event itself, which in this case are knativeerrorcode, knativeerrordata, and knativeerrordest, so that eventually you can get the reason why that event was not received by the consumer and hopefully debug the application or change the configuration. In this case, the knativeerrordata is a base64 value, because the CloudEvents specification can only contain a subset of characters as an attribute value, so base64 just works in this case. And that's it for the first pattern, so we can check out the next one. Thank you. So the next one is a very old pattern. It's actually first mentioned in the book by Gregor Hohpe, Enterprise Integration Patterns; that's an almost 21-year-old book. And the benefit of this pattern is avoiding large payloads, which in the ecosystem of the cloud also means you will reduce your costs. The nice part, as the link underneath shows, is that this pattern is actually implemented and specified with the CloudEvents specification itself. What that means: you see the attribute dataref; that's an optional extension attribute that you can have, and you point to a URL. In this case it's a web or HTTP URL, but you can also have things that have contractual meaning within your own organization. 
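For instance, a claim-check style CloudEvent carrying only a reference might look like this, written here in the CloudEvents JSON format; the type, id, and storage URL are invented for the example:

```json
{
  "specversion": "1.0",
  "type": "com.example.video.uploaded",
  "source": "/uploads",
  "id": "abc-123",
  "dataref": "https://storage.example.com/videos/abc-123"
}
```

There is no `data` member at all: the consumer follows `dataref` to fetch the actual payload from the external store.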
 The benefit really here is also that if you just do not ship the data around, you may save some money, but you can also have some sort of fine-grained access control, because when you only send a reference, you ensure the consumer has to actually get access to the system and read the data. So this is kind of a protection mechanism built into the pattern as well. It's not just for cost savings: every consumer kind of downloads the material, the data, behind the firewall, so to say, and can only access the data when they really have permission to do so. This is a code snippet for a producer that sets the reference. Today, every library that's CloudEvents compliant should have some "set extension" call where you give the name of the extension and then the value, so it will end up in an HTTP header, for instance, or in a Kafka header, wherever you use it. In the Java CloudEvents SDK we have a setter for this dataref already, and for the Golang SDK there is a proposed API along the lines of "add dataref extension." The real difference is that there you omit the actual name of the attribute; that's done for you by the library. And the consumption on the other side is the same: you basically either get the value of that header for the dataref, or you have some getter mechanism for the dataref extension attribute. Next one: event transformation. The use case for this one is usually when we want to provide a different form of the original event, transforming it into that different form. For example, in this case we have an order event using the v1 schema, and we want to transform it to a v2 schema; this is what the CloudEvents in JSON look like. With Knative Eventing, we can specify a trigger that filters, for example, on events with the v1 data schema and points to a function, and the function looks like this one. 
 You pretty much get an event, you transform it to a v2 and set the data schema to v2 as well, and you return it. That's all you need to do, and we handle the rest basically, combined also with retries; we also handle delivery failures. OK, that's the next one: sequence. The next pattern, sequence, is actually a declarative component that we have in Knative Eventing as a provided type. It is kind of based on the enterprise integration patterns, like orchestration and choreography. It also may be that some of you are familiar with AWS Step Functions. What the sequence really allows you is an in-order execution of a number of services. On the right-hand side here, we have an overview of some functions or services for ordering and delivery. First of all, the sequence is like the umbrella here. When it receives the event, it gives it to the order service. The dead letter sink and delivery guarantees that Pierangelo mentioned in the first pattern also apply here. If, for instance, the event could not be delivered to that order service, you can configure fallback mechanisms, like a dead letter sink, basically, here. When that is all cool and the event goes back to the sequence, the orchestration process then goes on and gives it to the next service, in this case the delivery one. On this slide, you see the manifest, the YAML, for this one, and I want to point out a particular detail here. The first reference is the first step; they are executed in order. You see here the API version: I have put serving.knative.dev/v1 here, so you can use Knative Serving CRDs there. However, Knative Eventing, as I mentioned in the beginning, is not required to run with Knative Serving; it just works with any kind of vanilla, standard Kubernetes v1 services. So it can be your regular application that's here. And then finally, the reply is basically the final consumer of the whole chain of executed functions or services. 
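A minimal Sequence manifest along those lines might look like this sketch; the step and broker names are invented for illustration, and plain Kubernetes v1 Services are used as steps, as just mentioned:

```yaml
# Hypothetical sketch: a Sequence that executes two services in order
# and replies to a broker as the final consumer of the chain.
apiVersion: flows.knative.dev/v1
kind: Sequence
metadata:
  name: order-pipeline
spec:
  steps:
    - ref:                          # first step, executed first
        apiVersion: v1
        kind: Service
        name: order-service
    - ref:                          # second step, receives the reply of the first
        apiVersion: v1
        kind: Service
        name: delivery-service
  reply:                            # where the final event ends up
    ref:
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: default
```

Each step must reply with an event for the chain to continue; the `reply` destination could just as well be any other addressable sink.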
 The nice thing here as well is that you can not just reference any other service here, you can also send the event back to another sequence. So you can go from one sequence to another to another, et cetera. That's entirely up to you. So here's a little demo. We have one source that is generating an event; it's a regular text message that says "Hello Paris," and it uses an HTTP call to the URL of the sequence. The sequence CRD, similar to the previous one where you saw the manifest, has two functions in in-order execution, and eventually it goes to a sink. The first function that's being seen here is a simple event transformer, similar to what Pierangelo showed before. We see what we received: we receive the message saying "Hello Paris," and it does a transformation; it appends something there. In the second function, you see that whatever the first one was producing is received here, and it appends another bit, like "handled by the second." And then the final sink just gets the entire CloudEvent, because it was just forwarded there. So you get the transformed message here at the sink. The last one, this probably needs some talk, but we are gonna try our best. The outbox pattern is a pattern that is usually, or at least should be, used when a service writes some records to a database and then eventually sends an event to a broker or something else. Usually it's very useful when you have multiple steps and you talk with different systems that are outside your control. And what's kind of clear from this diagram is that these two operations are not transactional, and therefore if, for example, the send fails for some reason, the record is still written to the database. And of course, here we have only two operations, but it can be an arbitrary number of operations. So this pattern introduces a different database design, in a way. We have two tables; for example, in this case we have an order table and an outbox table. 
 And in this case, the solution is to write in a transactional way both to the order table and to the outbox table, and to attach a change data capture system; one very popular one is Debezium. And Debezium is also well integrated with Knative Eventing: you can just point it to a broker, and it's going to send events from your database to a broker. It also ensures that the ordering of events is the correct one, as the service is writing them, and it also simplifies what is called event sourcing on the consumer side. Hopefully this gave you an overview, and if you want to learn more about Knative Eventing, CloudEvents, or anything about event-driven applications or building them: we are going to stay by the Knative kiosk at the Red Hat booth, and these are a few resources, the Knative website and a few blog posts, as well as GitHub repos. And yeah, if you have any questions, please ask them. One note: I saw a bunch of you taking pictures of the diagrams and patterns. The slides are already uploaded. However, we did some tweaks before this; after this, we will update them, but they are already roughly there. So you can already find them on the Sched website and so on. There's the first question. Thank you. One question I have in general on these patterns. The idea is fire and forget: anybody just emits this event and something will happen; that's probably the idea. But how do you suggest handling it when, for example, the sender sends a malformed event, for example, we specified a schema on the payload and the schema is not correct, it's missing fields. How is the sender supposed to at least know something went wrong? So, the question was asked with the mic, so it was understood. In theory, the CloudEvent itself also has a reference for a data schema. So if you send in improper data, the receiving endpoint should, in theory, do the validation there. However, that's currently not yet there, but that would be the theory for this. 
 Yeah, it would receive a return error code then, because Knative Eventing is HTTP. When the sender sends the wrong message, it would receive back the error code from the HTTP responder; these are HTTP applications at the end of the day. When you send an event, you send an HTTP request against the Knative components. The broker would not accept the request if the schema... It should be, yes. Yeah. Another question. Around the schema validation itself, is that actually coming in? Because we've been looking at using CloudEvents and what have you, and schema validation of the payload itself is kind of one of the biggest headaches and challenges that we've still got. That's right. And the schema validation, so to say, is currently also not really available in that sense; it's a general theory. So this is currently not possible, but that is something that we have on our agenda that we want to do. You can also end up in a nightmare situation when you have to do all of this, because you would potentially also need a schema registry, et cetera. Yeah, yeah. Exactly. Yeah. One more. Is the broker available if you've got VMs outside of Kubernetes? We've got kind of a hybrid approach: we've got legacy or monoliths and we've got microservices. Can we get access to the broker? Yeah, the broker in the cluster is an instance that has a reachable HTTP URL, so you can also expose that. For instance, if you run on OpenShift or something like that, you have a mechanism where there's a Kubernetes service, and you have a URL path that points to the name of the broker, and you can definitely expose that service to the outside world. So you can install a Knative broker implementation on your cluster, expose it at one endpoint, and you can do a curl from your machine against it with a public URL. That just works fine. Yes. Thank you. Hi. 
 May I ask, regarding observability tooling for error handling in applications, is there anything different inside these patterns, or is it all the same? Sorry, what was the question? Observability tooling, like telemetry or logs for error handling in these patterns. Yeah. What's the question? What can you do for error handling? Such as when the message broker breaks, or the latency is too heavy and we need to improve. You mean when the broker itself has problems processing? It would go to a not-ready state in Kubernetes. Basically, a broker is a CRD, so every broker that you have in the system is a CR. It is constantly reconciled, and you would see the state of the broker, whether it's ready or not. What we were showing here is more application specifics: when you send something and the broker is not able to redistribute it, it would eventually send it to a dead letter sink, and it would, in addition, add the extra metadata that Pierangelo showed. So then, when you fetch events from the dead letter sink, you have these extra headers and you can see why it was not delivered. But if your broker, for instance, has problems because Kafka is going down, the broker as a CR is no longer in the ready state; it would reconcile to false, and you would see that like any other CRD that you have in Kubernetes. If they are no longer ready, you definitely have mechanics to see that something is wrong there, yes. But that's more an operational aspect, not just the application specifics. Okay, cool. There's one more in the back. Okay, so my question is maybe not exactly about the patterns, but when I see flows of the data and the sequences, I wonder how Knative Eventing can help me visualize or monitor the flow of the data, to have a high-level overview of what the data is, how the data is flowing through the system, and where the breaking points are. 
 Do you want to take that, Pierangelo? Yeah, he has good insights on tracing and monitoring; I'll give the mic to him. So in terms of visualization, what we provide is mostly this: every time you go through a broker, sequence, or channel, we of course support tracing spans, and what we suggest for visualization is sending all these tracing spans to any sort of integration that supports the Zipkin format, usually OpenTelemetry or Jaeger, for example, and visualizing the flow in the system there. That's, I think, the easiest. But of course, you can also technically use... We haven't really built that; we don't have a UI, but technically, for example, in OpenShift we have a part where we visualize these CRs in a topology view, pretty much, but it's not in the Kubernetes space, because technically, as a community, we don't have expertise in UI development, so maybe join us. Any more questions? Okay, thank you so much. And the Knative booth is open tomorrow morning, 9 o'clock, something like that, 10 o'clock, so a bunch of us will be hanging out there through the course of the week. So show up there, ask more questions. Thanks again for attending.