Okay, we're going to get started. Hello everyone, thank you so much for joining us for today's CNCF webinar, Event-Driven Architecture with Knative Events. I'm Jerry Fallon and I will be moderating today's webinar, and we would like to welcome our presenters today, Nicholas Lopez, Senior Software Engineer at Google, and Brian Zimmerman, Product Manager at Google. We just have a few housekeeping items before we get started. During the webinar, you are not able to talk as an attendee. There is a Q&A box at the bottom of your screen, so please feel free to drop your questions in there and we will get to as many as we can at the end. This is an official webinar of the CNCF and as such is subject to the CNCF Code of Conduct. So please do not add anything to the chat or questions that is in violation of the Code of Conduct. Please be respectful of your fellow participants and presenters, and please also note that the recording and slides will be posted later today to the CNCF webinar page at cncf.io/webinars. And with that, I will hand it over to Nicholas and Brian for today's presentation. Awesome, thank you very much for the great introduction. My name is Brian Zimmerman and I'm a Product Manager here at Google. I focus on the serverless part of our offering and specifically on our event-based products. Hi everyone, I'm Nick Lopez. I'm a Senior Software Engineer here at Google and I work with the team that builds Knative Eventing and other related products. Great, so today we're gonna be talking about Knative Eventing. We'll start out with some basic concepts. And first of all, we need to talk about the history of how we got to this place when it comes to the need for event-driven architecture. So we'll start by talking about the rise of microservices, then we'll talk about the advantages and the place that event-driven architecture has in customer ecosystems. Then we'll introduce Knative Eventing, its core concepts and components.
Finally, we'll have a demo that actually goes through this in action. And then we'll talk about the vibrant community that is behind Knative and how you can get involved. So to start, we'll talk about the rise of microservices. Going back in time, most applications were built as a monolith. And when I say monolith in this case, I'm referring to an application where there's a single application layer that contains everything integrated into that one platform, everything required for the application. So take the example of an e-commerce store. All of my services, whether they be processing payments, serving up the website, managing my customers, or sending recommendations, all of that is done by that one piece of code, or large piece of code, and not separated into individual services. There are some inherent issues in this particular pattern. Most notably, this can scale in only one direction: vertically. You can create bigger machines with a little work; you can add more machines, but you can't scale out individual components of the application. Secondly, the application can be very overwhelming to build, deploy, and maintain as it grows. You could imagine, as you add more and more features, functionality, and teams to your application or your application ecosystem, things can become unwieldy in terms of impact from one system to another. This makes deployment, management, and coordination very, very difficult over time. And similarly, it can become a nightmare to change anything. Any change that one team makes, by virtue of it being part of that same system, could affect the whole or any of the other related teams. So this can become a nightmare to manage and to mitigate the risk of change. The solution to this is microservices, which I'm sure everybody here is familiar with. And talking about the migration path, people typically don't flip a switch and go from a monolith to microservices.
They typically will migrate in a staged way, starting by separating out a couple of key components that may benefit most into their own services, attached to the core monolith. This will happen over time, bit by bit, until you reach the end state, where you have all of your services in a microservice architecture. In its final form, this appears to be a spider web of complex interactions among many microservices. This provides separation of concerns with loosely coupled services, which promotes agility, makes it easier to maintain, and, perhaps most importantly, creates the ability to scale not just vertically, but also horizontally. That way you can meet the demand and ensure the best customer experience, however that demand pattern appears. Services can be scaled independently to meet that demand, without impact to the rest of the application. So this is a massively improved pattern, but there are some inherent challenges. First of all, as I mentioned, that spider web of point-to-point interactions can be very complex. Despite the loose coupling, changing any one service can have significant upstream or downstream impact on other services. You can see that adding or removing services would actually require changing the other services that either expect input from or are sending input to them. If your teams operate in a truly walled-garden, distributed model, predicting impact may be very difficult and require a lot of coordination, which can be complex. So what's the answer to the problems that microservices create? Don't get me wrong, microservices are a great application pattern that solves all of the problems we talked about when it comes to monoliths, and even more. Event-driven architecture is an attempt to solve some of those issues with complex, spider-web microservice applications. To understand this, first let's talk about what an event is. Here are some key concepts to get there. First, we start with an occurrence.
An occurrence is something that happens within your application or environment that could warrant an action. The event is simply a statement of fact. It contains information about the occurrence as well as context, such as where the occurrence originated. Because the event represents fact, it does not need to include any information about the destination. The producer thus has no expectation of how the event is to be consumed. This kind of pattern is key to the power of this type of architecture. The event then triggers an action, and we'll describe how this works later, specifically with Knative Eventing. In simple terms, building in an event-driven way ensures that you are reacting to the facts in your environment or facts about your application, rather than having to construct every point-to-point interaction as needed. Let's consider an example. A new user is added to my e-commerce store. In a microservice application, I would maybe have a new user service or something responsible for that; it would then have to write to the database and, once complete, signal the email service that would send the welcome email, and likely a host of other onboarding activities that would be required when that new user is added to the system. Now let's consider that I've entered a new market and I need to check for compliance against embargoed countries or user lists. This is an incremental addition to my application, and in a microservice architecture it would require you to change that existing application. I would need to update the new user service, potentially other services that depend on the completion of that compliance check, and then build the compliance service as well, and then update them all together, because there is the requirement of interaction, so I need to manage that deployment. You can imagine how this is very complex. In an event-driven pattern, I only need to build my new service and respond to the new user event.
This makes build and deployment much simpler, and I don't have to manage the concerns of the other services. We'll talk in a moment about how this works specifically with Knative Eventing. So in an event-driven pattern, there are three key concepts: producers, the intermediary, and the consumers. Your services don't communicate directly with each other. Instead, they communicate with an event intermediary, and we'll discuss what that looks like in Knative Eventing specifically. That intermediary will then communicate directly with the consumer applications. In some cases, there could be several layers of intermediaries, but in general, for our purposes, we'll talk about just one layer. We'll discuss this in a moment, but it should be noted that the producer does not need to be a service within your application that you create. Instead, it could be a source that you import to react to the occurrence and deliver the corresponding event to the intermediary. So a service running in your Kubernetes environment can be a producer or a consumer, or actually both. In fact, for this pattern to replace the spiderweb of concerns you saw earlier, services may likely have to be both producers and consumers. What are the advantages of this type of system? This creates a fully decoupled architecture. It's no longer required to update upstream or downstream services, as they are now fully decoupled, having true separation of concerns. This also leads to high scalability, and, perhaps most importantly, it can extend organically. We discussed in the earlier example having to add that new compliance service. In this type of pattern, we wouldn't have to affect any of the other components of our application; in a microservices pattern, you of course would have to. So in a lot of ways, when you are intending to extend your application, which most of us are, this model offers the superior ability to do that organically, without having to worry, as in the past, about what a change could break.
So let's see how Knative can fit into this. Knative Eventing is our intermediary of choice. It is a set of composable primitives to enable late binding of producers and consumers. For an intermediary to work, there needs to be some standardization; otherwise you actually still have to build a shared understanding into your code, for example, what the messages will look like and how they'll be delivered. To address this, Knative utilizes the CloudEvents format as the message envelope. This is of course a CNCF project. The producers in this case are referred to as Knative sources. A source could be one of your services, by means of raising what we refer to as a custom event delivered directly to the intermediary. However, in Knative, there is also a vibrant community of sources that allow you to react to facts within or outside your application without the need to directly communicate with them or write them yourselves. For example, there exists today a Knative source for GitHub, a Kafka source, and many more. The consumer is any service that can receive the event. And the broker represents the event mesh. Events are sent to the broker and then sent along to any specific consumer. The trigger is the entity that defines the subscription to a particular event by said consumer and thus directs the filtering appropriately. Another key concept to talk about is the event and source registry. This is a key component of the developer experience. If every developer has to learn from square one what events are available and what sources are available, it makes things a lot more difficult. So there exists a pattern of an event and source registry where you can learn what can be added and what events can be reacted upon. There are more primitives to consider, such as sequences, channels, and flows, which we'll discuss later on today. Okay, so without further ado, that's enough talking. Let's see this in action.
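To make the broker and trigger concepts concrete before the demo, here is a rough sketch of what those two resources look like in YAML, using the compliance example from earlier. The field names follow the Knative Eventing v1 API, but the event type and service names are hypothetical, made up purely for illustration:

```yaml
# A Broker in a namespace: the "black box" intermediary that
# events are sent into.
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: default
  namespace: demo
---
# A Trigger subscribes a consumer to events matching a filter.
# The type and service name below are hypothetical examples.
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: new-user-trigger
  namespace: demo
spec:
  broker: default
  filter:
    attributes:
      type: com.example.user.created   # hypothetical event type
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: compliance-service          # hypothetical consumer
```

Note that nothing here names the producer: the new user service emits its event to the broker and the compliance service is wired in purely by configuration, which is the late binding Brian describes.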
Next, I'm gonna hand things over to Nick, who will show you how this works in real life, showing you an actual application of the concepts we talked about. Then afterwards, we're gonna deep dive into some of the things you saw to discuss the concepts in more detail. Over to you, Nick. Thanks, Brian. So what I'm gonna be showing you today is an image processing pipeline application. We're gonna be seeing some of the concepts that Brian just introduced a few slides ago, and we're gonna actually see how we can connect the services via Knative Eventing. What we're looking at here is a setup in which we have a GKE cluster, a Kubernetes Engine cluster, running. It's an image processing pipeline, as I was mentioning. The purpose of this pipeline is that users on the left-hand side can drop images into what we call here the images input, this Cloud Storage bucket. And what we will want to do is to connect this via a source, which we will be looking at in detail in a minute, which will trigger an event that will be received by our filter service. Our filter service will be in charge of filtering out any images that we don't wanna process in our system if they have content that is undesired, and it will be using the Vision API to do this. After this, the filter will be producing an event as well. It will be a custom event, which will be sent to the broker once again. The resizer and the labeler services will be receiving that event, and they will be in charge of changing the size of the image and identifying tags for the image, respectively. And finally, we have a watermarker service in the middle, which will be receiving an event produced by the resizer service. This service will be adding a watermark to our images. And all of the resizer, watermarker, and labeler services will be dropping the results of their processing into a Cloud Storage bucket as well, which we have called here the images output, and which would be consumed by end users of this image processing pipeline.
So we already have some of these pieces set up in our currently running cluster. What we will be doing specifically during our demo is creating all of these arrows that you see here. Specifically, we're gonna be creating the arrows to connect, via events, the images input with the services in our GKE cluster, and to connect the services within this GKE cluster to each other. So we'll be setting up all of these in the next few slides. The first one that we're gonna be setting up specifically is a cloud storage source. We'll look at exactly what it looks like in a YAML file in a little bit in the demo. This cloud storage source will describe that whenever an object gets stored in our input bucket, an event will be sent to the broker in our cluster. After that, we'll be setting up a trigger. The trigger will allow us to route the event from the broker to the filter service. And right after that, we're gonna be setting up some custom triggers for the events that are produced by these services. The triggers will be configured so that the resizer, labeler, and watermarker services consume the events that they're interested in consuming. Okay, so let's go ahead and jump to my screen, Brian, if you don't mind, and I'll show this in action. Okay, can you guys see my terminal? Yes, thank you. So the first thing we're gonna check is that we have our cluster running, and what we're gonna do is check that we have our eventing pods running. The bringing up of the cluster and the starting up of eventing takes a couple of minutes, so we did that before the demo to make this a little bit faster. And what we see here is that we have a controller and a webhook running in a couple of namespaces corresponding to Knative Eventing. And we're also gonna be listing the services that we're gonna be connecting in our example. So we have the filter, the labeler, the resizer, and the watermarker.
These are Knative services. We could list them with the kn command, but we could also list them with our kubectl command. Additionally, what we have is a couple of buckets: one where we're gonna be leaving our input images, which we call the images input bucket, and one where the services are gonna be dropping the output images. So these buckets are currently empty. And the first thing that we're gonna do now is bring up our broker. We're gonna use a gcloud command to bring up our broker, but behind this gcloud command is a simple YAML that is just bringing up a default broker. And here we're just checking the status of our broker; it's ready to be used. The next thing that we're gonna do is create our cloud storage source, the first arrow that we saw in the slides a little bit ago. So we're gonna be just showing the contents of that file. It's a simple YAML for a cloud storage source type of resource. It's pointing to a bucket, which is our input bucket, and its sink is pointing to our broker that we just created a couple of seconds ago. Our next YAML that we're gonna be applying is for the filter trigger, the first trigger that I showed you. Again, this is a resource here for a trigger. We see that it has a filter on a specific type of event, which is a cloud storage object finalized. And the subscriber for that type of event is the filter service. So the cloud storage source that we just created a little bit ago will be creating events of this type, which will then be routed by the broker to our filter service. Now we're gonna be applying another trigger for the labeler. This is the service that's in charge of receiving the image that has been filtered. The filter here is for that specific type of event that our filter service is gonna be producing, and it's pointing to the subscriber being the labeler service.
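The two resources just described might look roughly like the sketch below. The exact API group and version of the storage source depend on the knative-gcp release installed, and the bucket name and event type string here are illustrative, not taken from the demo files:

```yaml
# Sketch of a cloud storage source pointing its sink at the broker.
# API group/version and field names are illustrative; check your
# installed knative-gcp version for the exact schema.
apiVersion: events.cloud.google.com/v1
kind: CloudStorageSource
metadata:
  name: images-input-source
spec:
  bucket: images-input            # input bucket name assumed
  sink:
    ref:
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: default
---
# Sketch of the filter trigger: route "object finalized" events
# from the broker to the filter service.
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: filter-trigger
spec:
  broker: default
  filter:
    attributes:
      # "object finalized" as described in the demo; the exact type
      # string depends on the source version.
      type: com.google.cloud.storage.object.finalize
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: filter
```

The remaining triggers in the demo follow the same shape, varying only the filtered event type and the subscriber service.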
We're gonna be applying a very similar trigger for the resizer. It has the same type of filter, in that it filters the same type of events, file uploaded, but in this case our subscriber is our resizer service. So here we can see how, with the same type of events produced by the filter service, we can have two other services downstream that are gonna be connected without either having any knowledge of the other one. And the last one, let me just go ahead and clear this for a second. Our last trigger that we're gonna be creating is for the watermarker service. Apologies for the background noise. So this is our last trigger. This one is gonna be filtering events of type file resized. This corresponds to events produced by our resizer service. And our subscriber in this case is our watermarker service. Okay. So at this point, we have all of our services running and we have connected them effectively with our storage source and our triggers. So we're gonna run a kubectl command to get our storage sources and another one to get our triggers, just to make sure that everything is running as expected. So we see our storage source here and it's ready. And we see our triggers that have been set up for our four services, which are also ready and running. Okay. So now this should be ready to actually execute our image processing pipeline. At this point, what we're gonna be doing is copying a couple of files that I have prepared here to our images input bucket. And at this point, our pipeline should be processing. So one of the features that Knative Eventing has is that it also produces traces. Traces can be consumed via Zipkin or via Stackdriver. What we're seeing here is what the traces look like in Stackdriver. Let me go ahead and refresh this window to see if we can see the traces for the events that just occurred a little bit ago for these two images that we uploaded to our bucket. Okay.
So here are two dots that correspond to the events that we just sent. We can go ahead and check one of these out. Let's go ahead and expand it. And we can see here the trace of our event as it passed through the broker, as it then went to our filter service, as it then went through our trigger for the resizer, and then it went through our trigger for the watermarker and was successfully processed. We can check timing details at each interface as well. I'm not gonna go into many more details here. Let's go ahead and first check out the output of this processing, which should be ready. So I'm just listing out the files in our images output bucket. And we see that we had three files produced for the Paris file that I uploaded and three more files for the River file that I uploaded, corresponding to the resized file, the file with the labels (which is the text file), and our watermarked file. So we can also go ahead and check out here, for instance, one of these label files, which got produced. It has all the labels identified from that image. And we can also check out in Cloud Storage what that resized and watermarked file looks like. So this is the result of our file being resized and watermarked. I also have another tab here with what that initial file looked like. Okay, this object is a little bit large, but we can have a preview here. So the image is actually larger than what it's showing us here. So this was our file that got processed and got all those labels and resized and watermarked. Okay, let me just skip here to this last slide, just to recap for a second before I turn it back to Brian. What we saw here is this image processing pipeline. We had a Google Kubernetes Engine cluster running, with some Knative services already running. All of these services were disconnected. We had pre-created our storage buckets, but these buckets did not trigger anything when any action occurred there.
And we created a storage source and a series of triggers to connect the services, and we actually saw this processing happen. And we also saw for a couple of minutes there what the tracing of one of those events looks like in Stackdriver. Cool, so I think with that I'm gonna stop sharing my screen. I'm gonna turn it back to you, Brian. Thanks Nick, that was a great demo. Thank you. All right, so let's go over, at this point, a few concepts that you saw at a deeper level. But first of all, let's just review what you saw here today. So in that demo, you saw how you can use Knative to set up an event-driven architecture in less than 15 minutes. What you saw here was an application with no inter-service communication, where we set up the triggers in real time before your eyes. And the result was an application that was fully decoupled, with true separation of concerns. Observability was easy to set up, and it was easy to understand what was happening with your application, as you saw in the tracing as an example. You'll notice that we didn't have to update a single line of code or wire up the application during this process. Obviously your consumer applications do have to be coded to consume the CloudEvents format, but luckily, with the CloudEvents SDKs and libraries, that's also easy to do and fairly standardized. Now, all of the configuration you saw wasn't in your code (other than the consumer, that is); it was all done via configuring the intermediary. So next we're gonna talk about some Knative core concepts that you saw today. The first and most notable is the source. The source is a component that generates or imports events from external sources. The main purpose of that source is to produce the event in CloudEvents format. There are lots of different types of sources that come in different packages. What they all have in common is that they take care of generating the event so the consumer is fully decoupled from the producer.
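Brian notes above that the only coding a consumer needs is understanding the CloudEvents envelope. As an illustration of how small that surface is, here is a minimal sketch of reading a binary-mode HTTP CloudEvent, where the context attributes arrive as `ce-*` headers. This is not the official SDK (in practice you would use something like `from_http` from the CloudEvents Python SDK); it just shows that the consumer needs no knowledge of the producer or the transport:

```python
# Minimal sketch of parsing a binary-mode CloudEvent from HTTP
# headers. Hypothetical stand-in for the official CloudEvents SDK.

REQUIRED = ("id", "source", "type", "specversion")

def parse_binary_cloudevent(headers: dict, body: bytes) -> dict:
    """Extract CloudEvent context attributes from ce-* headers."""
    # Binary content mode puts each context attribute in a
    # "ce-<attribute>" header; strip the prefix and lowercase.
    attrs = {
        k[3:].lower(): v
        for k, v in headers.items()
        if k.lower().startswith("ce-")
    }
    missing = [a for a in REQUIRED if a not in attrs]
    if missing:
        raise ValueError(f"not a CloudEvent, missing: {missing}")
    attrs["data"] = body
    return attrs

if __name__ == "__main__":
    # Hypothetical event, shaped like what a storage source might emit.
    event = parse_binary_cloudevent(
        {
            "Ce-Id": "1234",
            "Ce-Source": "//storage.example/images-input",
            "Ce-Type": "com.example.object.finalized",
            "Ce-Specversion": "1.0",
            "Content-Type": "application/json",
        },
        b'{"name": "paris.jpg"}',
    )
    print(event["type"])  # com.example.object.finalized
```

A consumer like the filter service only ever dispatches on `type` and unpacks `data`; everything about delivery, retries, and routing lives in the broker and trigger configuration.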
Sources are the components that observe and react to the facts in the environment or application and deliver the appropriate event to the broker. In GCP, for example, and this is what you saw in the demo, there are many built-in sources related to activity that happens in Google Cloud, such as the source related to the storage bucket. This is an example of a vendor-specific source: you didn't have to figure out how to understand and observe the occurrence; that source was built for you in that vendor-specific way. There are also a number of community sources, such as GitHub, Kafka, the API server source that responds to Kubernetes activity, and much more. And what's really exciting here is that as the community grows and that list of sources improves, we all benefit from the rapidly expanding set of ways to generate consistent and actionable events. We'll talk later about how to get involved with the community, but producing your own sources is a really good way to do that, especially if the problem you're solving is likely to be faced by others. If we all work together and produce that rich catalog of sources, everybody benefits. So let's talk about how the sources are implemented. Knative is designed to be as native to Kubernetes as possible, hence the name Knative. In that spirit, we use the standard way of extending Kubernetes, the custom resource definition, or CRD. CRDs define new resource types, similar to classes in a programming language. These are then instantiated to create custom objects, or COs. This is similar to an instance in a programming language. Broadly speaking, there are two high-level types of sources: push-based and pull-based. Push-based sources are where an upstream producer pushes an event directly into an address that must be exposed to the producer.
So for example, if you have a public URL exposed and GitHub was to push events into that particular push-based source, that source would then convert them to CloudEvents format and deliver them to the appropriate broker, whose trigger would then deliver them to the appropriate consumer. These are really easy to scale; they can leverage Knative Serving for scale-up and scale-down. But the disadvantage, of course, is that you have to expose a public URL. And that's not the only type of source, because for a lot of enterprises, that's not gonna be possible. Pull-based sources are the solution for that. In a pull-based source, there is something always running to poll for changes, and there's no need to have network access into the source. You just have to have network access out from the source to the producer, so no endpoint needs to be exposed publicly at all. There are some complexities as far as scaling, which is handled depending on the implementation that you're using, but these basically allow you to keep your private networks private, and they take care of the polling and then, of course, delivering to the broker and then ultimately the consumer. The next key components we wanna talk about are the broker and the trigger. These two pieces together make up the event intermediary you saw on previous slides. You can treat the broker as a black box that you throw events into. And you can think of the trigger as the object that specifies when an event is delivered to the individual consumer, a subscription, so to speak. The event sink can be any addressable, a Knative Service endpoint, et cetera. In our example, we were exclusively using Knative Services from Knative Serving. That's the least amount of friction to get this all started. In fact, Serving and Eventing work really, really well together. There's a default broker; however, there are alternate implementations, such as the GCP broker and the Kafka broker, which may be more useful depending on your environment.
And let's go through a couple of those examples. So here is a brief flow of how things go through the Kafka broker, as an example. In this case, there's the publisher, which will send the request into the multi-tenant ingress. From there, there are individual Kafka topics created for each of these individual events. Those are sent to the multi-tenant dispatcher, which subscribes to those topics, filters them appropriately, and sends them via the configured trigger to the appropriate consumer. The GCP broker, as an example, works very similarly. The notable difference here is that the topics are Pub/Sub topics that handle the messaging rather than Kafka topics, but ultimately it's very similar. The other fundamental difference here is what happens in the error case. When things aren't able to be delivered, they're sent to a failure topic, and then there is a multi-tenant retry service that will watch for those and ensure that they're re-delivered to the consumer, so that you have at-least-once delivery. A couple other pieces to talk about. I mentioned this earlier: there are additional primitives that are interesting to understand for those looking to extend this model into more complex use cases. Those are messaging, meaning channels and subscriptions, and flows, meaning sequences and parallels. So messaging: I mentioned earlier the channel, that's an abstraction of a message transport that takes care of things like message persistence. The subscription allows listening to messages on a particular channel, which allows message delivery. There are many ways to implement this; you saw the example of Kafka, and GCP using Pub/Sub. There's also AWS SQS, NATS, et cetera. Flows is another interesting concept. These allow sequences of events, either in series or in parallel.
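As a rough sketch of the flows idea just mentioned, a Sequence resource chains steps so each service's output is passed to the next. The channel template, step names, and reply target below are illustrative, not from the demo:

```yaml
# Sketch of a Sequence: steps run in series, with each step's reply
# feeding the next step over a channel. Service names are hypothetical.
apiVersion: flows.knative.dev/v1
kind: Sequence
metadata:
  name: image-pipeline
spec:
  channelTemplate:
    apiVersion: messaging.knative.dev/v1
    kind: InMemoryChannel   # swap for a Kafka or Pub/Sub channel
  steps:
    - ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: resize         # hypothetical first step
    - ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: watermark      # hypothetical second step
  reply:
    ref:
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: default          # final result goes back to the broker
```

A Parallel resource has a similar shape but fans the event out to branches instead of chaining them.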
For an example, in a sequence, you could actually have a single event that represents the entire job, yet multiple services may have different roles in that particular job. They can act on the event and mutate its details to be passed down to the next service, allowing you to have that complete unit of work. For those who are GCP users, you can utilize Knative by means of Events for Cloud Run for Anthos, which has recently been released to beta. This product builds on top of the Knative primitives that you saw today to offer the same experience, but tailored to GCP. In particular, there's a large number of Google-provided sources and easy setup. So if you're interested in trying out Knative and you're on GCP, give it a shot. You can find our quick start under the documentation at cloud.google.com, and we have a link to it later on in the presentation. So the last thing to mention is the Knative community. There is a vibrant community of developers across the globe that are supporting Knative. We talked about Eventing today; however, Knative also contains Serving. And in fact, if you use them both, it adds additional simplicity, and they work really, really well together. There are over 10 active working groups, 450 contributors, and 15-plus active repositories, with over seven Knative-based offerings from vendors such as Google, IBM, Red Hat, VMware, et cetera. Okay, so what's next? I'm sure that you've left this presentation really interested in how Knative Eventing works, and perhaps Knative more broadly. You can learn more at knative.dev/docs, link provided here. This contains everything you need to know on how to get started with Knative.
If you're a GCP customer and want to do a quick try of our implementation, which will get you familiar with those key concepts so you can extend and play from there, here's a link to our quick start documentation, which can be found at cloud.google.com slash doc slash event slash anthos slash quick start, link provided here. And then, to get involved, visit us on GitHub. There you can find out things like how this is implemented and start contributing, which is really exciting. Now, a couple notes on some other open source events that are happening, provided by Google Open Source Live. On November 5th, we have Go Day, from nine to 11 Pacific time. In this session, Go experts will share updates on everything from Go basics to package discovery and editing tools. You'll hear from our partner Khan Academy, who will walk through an interesting use case about how the organization is using Go to save time and money. On December 3rd, there is Kubernetes Day. In this event, Kubernetes experts at Google will cover the life of the Kubernetes API, admission webhooks, how apply works, and the distributed key-value store etcd. So we hope to see you at those events as well. They're sure to be interesting. Again, thank you for your time. This was great. You've been a great audience. Really appreciate it. If you have questions, we want to hear from you. Feel free to reach out to me directly, as I would love to engage with you. I'd be happy to answer any questions that you have about how you could apply this technology, and if you want to understand more about the examples and samples we showed you today, we're happy to provide them. And most importantly, I'd love to hear about your use cases and scenarios. What are you looking for Knative Eventing to solve? Do you have a problem and you're not quite sure how to solve it? Do you have a problem that we're not well positioned to solve yet? Do you have applications for this technology that you're excited about?
No matter what, I would love to hear from you. Your feedback really helps us understand where to focus our efforts, and it's great to engage with the community. So Brian, we already have a few questions here in the Q&A. I can read them out loud and we can go through them, if you think that's a good plan. So the first question is: great demo of Knative Eventing with GCP. Just to double check that nothing in the demo setup is tied to specific GCP capabilities and it's possible to implement on other clouds using upstream K8s. Thanks. So let me answer this first, Nick, and you can add additional color if I missed anything. So yes, everything you saw here can be implemented on other clouds. In fact, most of what Nick showed today was not using the GCP gcloud commands; it was using kubectl commands. The only exception to that is the source. The GCS bucket was a vendor-provided source. You would have to have an equivalent Knative source written for AWS, or one that you wrote yourself, for querying another storage location. So that would be the only thing you'd have to swap out. But I could imagine that other clouds may have something similar, and you could write a source to swap that in, and everything else is all using Knative primitives. The implementation we showed was also using the GCP broker, but nothing substantive about the demo you saw required the GCP broker. You could be using a different broker as well. There might just be some idiosyncrasies about things like the tracing support. I don't have anything to add to that. That's a good answer, Brian. Let me go ahead and read the next one, from Rahesh. Does Knative have any advantage over Kafka? So again, let me start here, Nick, and then you can add more. I would like to phrase your question slightly differently, because the broker utilizes a message queue, and there is a Kafka broker, right? So in that sense, it can be using Kafka, right?
So what does it provide to extend Kafka? And it's all about having that decoupled system where you don't have to worry about how to communicate with Kafka. All of the messages are sent and translated via the source into the CloudEvents format and then consumed in the CloudEvents format, so that your consumer applications, your containers, only need to know how to use the CloudEvents SDK, the unmarshaling library, in order to understand the details of that event. So it's all somewhat standardized, and the individual developers of services don't have to have an understanding or appreciation for how that message is delivered nearly as much. And that's a very high level answer, but the Coles Notes version is that it's not about whether it has an advantage over Kafka; it's what it adds on top of maintaining your own Kafka implementation. And the answer is that the components are maintained for you. So there are also a couple of Kafka-related questions there, from Manish and Jing Dong. The first one is: is Knative a replacement for other pub/sub tools like Kafka, RabbitMQ, et cetera? And the other one, which I think you partially answered just a bit ago: is it possible to elaborate on use cases between using Kafka and Knative? I think you're muted, Brian, right now. Oh yeah, I think those are similar to the first question, so I'll answer them in the same way. Pub/Sub, for instance, is what's used by the GCP broker; Kafka is what's used by the Kafka broker. So it's not a replacement thereof, it's an extension to them. Thanks, Brian, let's read the next one. Could you speak a bit about durability and message processing guarantees supported by the broker components? So I don't have specific answers on that one. In general, I think it depends on which broker component, right? So in the GCP broker component, I showed in the diagram that there is an error retry topic; that's not the technical name, but that's the idea.
And there's also, you can utilize a dead letter queue to react to failures. But I think the idiosyncrasies are gonna depend on the broker that you're using. What I would suggest is, why don't you reach out to me offline, so Brian Zimmerman, Brian with a Y, at google.com. And actually, if you're still seeing my screen, it's there on the screen. But reach out to me and I will get you a much more well-formed answer, very likely pointing to the documentation online that will specifically answer your question. Thanks, Brian. Let's go to the next one. So does Knative require a broker like Kafka to work, or is it bundled with its own broker? It does require some type of messaging system, right? It doesn't have its own messaging system; it uses the channel, basically. So there's a Kafka channel, you can use a Pub/Sub channel, and those are implemented in the different brokers like the GCP broker or the Kafka broker. There's also an in-memory channel, which just keeps the events in memory, so that you don't have to use a different type of channel. It is not recommended for production use, but if what you're getting at is, I'm trying to develop on my workstation and I don't wanna stand up all these extra components just to get my container written, that's where the in-memory channel can help. But as far as production implementations, to ensure delivery guarantees, it's not recommended to use in-memory; you should use another channel like Kafka, Pub/Sub, et cetera. Thanks, Brian. Last question here in our Q&A. Does Knative support AMQP 1.0? That's one I'll have to get back to you on. So send me an email, Brian Zimmerman at google.com, and I will be happy to answer that offline. Unless, Nick, you have an answer off the top of your head. I don't have an answer off the top of my head, no, sorry. I'll answer similarly to the next one. How does Knative compare with other event-driven frameworks such as Vert.x? I'm not familiar either with Vert.x, yeah. Yeah, but reach out to me.
I'd love to have a conversation about the advantages or disadvantages of Vert.x that you see, and to have a good discussion of how Knative Eventing does or doesn't compare. More questions coming in. How can Knative be extended to support serverless event-driven code processing in general? I think GCP has the capability already. I'm just trying to think through what specifically the use case is that you're interested in. Again, I'd be happy and love to talk to you about this offline in a deeper way. At a high level, the approach that we're taking in GCP is that a lot of things that are happening in the environment will generate events, and the vendor-specific sources that we're providing are able to pull from those events. So for instance, I showed on one of the slides that there are 60-plus services that are integrated through Cloud Audit Logs, or things like BigQuery or Firestore, et cetera. And so what you can do is, as you're writing to a BigQuery instance, for example, that can generate an event that triggers your Knative service in GKE, for example. So I'm answering very GCP-specific here, and that's just because I have more familiarity there, but that approach can be extended to any kind of implementation. If the sources are there to latch your workload's action onto something that happened in the environment, whether it be on-premise through sources you build yourself or whether it be in other clouds or other implementations, I think the model I described is gonna be consistent. What matters there is the sources, because that's where you can latch on to things that are happening. I don't know if I fully answered your question, but definitely let's connect offline. Love to hear from you. Thanks, Brian. We have one more question there. It sounds like Knative works as an orchestrator for long-running transactions, saga, that span across multiple microservices. Is this the correct understanding?
The saga pattern is a very specific thing, so I don't wanna oversimplify it. Basically, Knative Eventing does function as an orchestration feature. I think this is one best discussed offline, if you don't mind reaching out. I don't think there's anything fundamentally wrong in what you're saying; however, I think that there are a lot of very specific details there. Thanks, Brian. One more question in there. Can you please give an overview of how Knative Eventing fits into the overall serverless Knative framework? Yeah, so it's just part of it, right? Knative has two components, Serving and Eventing. Knative Serving is how those services are served: how to control the ingress, the scaling, the revisions, the deployment, and the management of services running as serverless workloads on top of Kubernetes. Knative Eventing is just the extension of that. In a serverless world, if the only way to execute a service is by reaching out to its endpoint directly, that limits the decoupling you can achieve. Hence why Knative Eventing was created as that right-hand companion to Knative Serving. So it really just goes hand in hand. Thanks, Brian. We don't have any more questions in the Q&A right now. Nick, is there anything that was mentioned there that you wanna add more to or extend out of the questions that were answered, or anything else you'd like to convey? I don't think I have anything else to add to what you said. It was pretty comprehensive, thanks. And I will say, I know I've said this a few times, but feel free to reach out. Often it's hard to understand the full nuance of a question in 15 seconds, and I'd love to give you a lot more time to discuss things openly, bringing in the developer subject matter experts as required. So yeah, definitely feel free to reach out for follow-ups on any of these questions or for anything else that comes up. We'd love to hear from you.
And thank you everyone for joining us today. We're coming to the end, so if there are any other questions, we're happy to answer them. If not, we'll hand things back over to our moderator. Thanks everyone, and thanks, Jerry. Thank you all so much for a wonderful presentation, and thank you all for attending today. As I said before, the recording and slides will be available on the CNCF webinar page at cncf.io slash webinars. Everyone take care and stay safe, and we will see you at the next CNCF webinar.