Hi, everyone. My name is Nikhil Barthwal, and I'm going to be talking about Knative. I'll go into the details of it, of course, but in short, Knative is a Kubernetes-based framework for managing serverless workloads. The way I have structured this talk, about half of it is theory: I'll go through my slides and explain what Knative is all about. The remaining half is a demo. The demo is from the GitHub link that you see below. The demo itself is quite big, so I'm only going to cover parts of it, but you have the link, so feel free to play around with it, and you have my contact information, so should you need to reach me, feel free to ping me anytime.

So let's get started with an introduction to Knative. Like I mentioned, Knative basically gives you building blocks for serverless on top of Kubernetes. Now, Kubernetes is essentially a platform for building platforms. It's not the endgame, but it's a great place to start. Think of it as layers: you have Kubernetes as the base platform, you have Knative on top, and you have other layers on top of Knative. When I show the entire technology stack, you will actually see that. Essentially, what Knative provides is a set of extension components on top of Kubernetes for serverless computing.

Kubernetes is the de facto platform: by far the most popular open-source container orchestration system out there. A typical diagram of Kubernetes shows the pods and the containers they run, but essentially what it gives you is a declarative definition of how your infrastructure should look. You declare the desired state of the system, and Kubernetes tries to maintain that state. If the system deviates, it takes corrective actions to bring it back to the desired state. It's an orchestration system, and it handles a lot of things for you: scheduling, scaling, lifecycle, naming, discovery; you have the full list on the slide. It's open source and quite popular, and I'm sure you've heard the name. And there's a large ecosystem around Kubernetes: Google has GKE, Amazon supports it with EKS (Elastic Kubernetes Service), and Azure has AKS (Azure Kubernetes Service).

Now let's switch to serverless. What do we mean by serverless computing? From an operational point of view, serverless brings you automated infrastructure management with no infra to manage yourself: it manages security, it manages the infrastructure. But the unique characteristic of serverless is that you pay for use. Typically, in normal container-based systems, you're paying for capacity; in serverless, you're paying for use. So if your container, or your function for that matter, is not being used, it gets switched off. Autoscaling is handled for you, scaling from zero to infinity or to whatever you configure it to. So you're only paying for what you're using.
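To make the declarative model from a moment ago concrete, here is a minimal sketch of a Kubernetes Deployment (my illustration, not part of the talk's demo; the name `hello-deploy` and the image are placeholders):

```sh
# Declare a desired state: three replicas of a container.
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deploy
spec:
  replicas: 3                  # desired state: three pods
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: nginx:1.25    # any container image works here
EOF
# If a pod dies or is deleted, Kubernetes recreates it to restore the declared state.
```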
And from a programming point of view, it's service-based, and it's an event-driven platform. Service-based means you have these computational units; I'm going to use the term "service" in a fairly broad way. If you're using something like AWS Lambda, a service means a function. In container-based serverless systems, which is what Knative is, a service means a container. And it's event-driven: events are fired, and your services respond to those events.

Something very particular about Knative is that it's portable. A common criticism of serverless systems is vendor lock-in. The reason is that all the infrastructure is handled for you, so you don't get to choose anything and you have no control over the underlying system; you're tied to it. With Knative, you're not tied to the underlying vendor. You're tied to Kubernetes, and Kubernetes is open source, so you're not really vendor-locked. Any cloud vendor or distribution that provides Kubernetes, which is practically everybody, can run Knative on top. That portability aspect is very particular to Knative and serverless containers.

Talking about containers: containers are an industry standard. You have pre-built images, and you can pick any programming language, any library, even plain binaries. And if you look at the adoption trend, it's still increasing.

Now let's switch gears and talk about the Knative project. The Knative project gives you a set of components for a serverless system. Prior to version 0.8 there were three components: Serving, Eventing, and Build. From version 0.8 on there are two, which is why you see Build crossed out on the slide. Serving is the base component that handles the deployment and scaling of your services. Eventing is the framework that lets you create events to which your services respond. Build was originally how you deployed the system, going from source code to containers. That has been deprecated, and the officially supported replacement is Tekton. Tekton is a separate project; it works with Knative, but it's a completely separate project, so we're not going to talk about it too much. Essentially, it's a Kubernetes-based CI/CD system.

Like I said, Knative is open source, and it gives you the basic ingredients for serverless computing that fit modern development patterns. Knative started as a joint project, a consortium between Google, IBM, Red Hat, SAP, Pivotal, and so on, and it incorporates the learnings from all of those partners. The website is knative.dev, so let's actually check out knative.dev. You have all the information there: you see the two main components, Serving and Eventing, there's a Slack channel, and you can go to the GitHub repository.
On GitHub you can see the knative organization; that's the repository, with eventing and serving as the two main components, the docs, and so on. There's a lot of good information on the Knative project there.

Now, what's the motivation for Knative? Developers want serverless systems. The reason they want serverless is that they want to run their code; they don't want to handle infrastructure. And developers tend to be very opinionated: they have their favorite programming languages and dependencies, and they're passionate about them. So they want the flexibility to run any language or framework they choose on the platform, and that's what containers bring you. That's why developers love serverless: they don't have to manage the infrastructure. Operators, on the other hand, love Kubernetes, because Kubernetes is a great orchestration system: you declare the state, and everything is handled for you. But it's not the right abstraction for developers. Knative sits between the two. It's built on top of Kubernetes, so operators love that; they handle the base cluster, and developers build their code on top of Knative. It's the glue between the developer view and the operator view. That's the motivation.

So, like I said, there are two points of view. Let's look at the developer view of Knative first. Developers want to write code. What they don't want is to handle infrastructure: they don't want to wire up logging and monitoring, build all the images, and so on. "I just have my Dockerfile with the steps for building my container; everything else should be taken care of. I don't want to handle orchestration myself." That's what developers want, and that's the Knative view for developers.

Going to the Knative view for operators: Knative handles the infrastructure complexity for you, so you don't have to worry about too much. You just declare the state you want, and Knative takes care of it. As I've mentioned repeatedly, it's a universal system supported by all major providers, which enables portability across multiple providers, and across on-prem and cloud, by the way. It's also a very extensible platform with a clear separation of concerns, and it has a well-documented open-source API. Like I said, it's a platform for building platforms, so by definition it has to be very extensible. So there's the portability associated with Kubernetes, because it's offered by all cloud providers, and then Knative is built on top of Kubernetes.

So let's look at the Knative stack. You start with the underlying platform, which is Kubernetes. Then you have a service mesh. The default that comes with Knative, and the one we're going to use for the demo, is Istio; that's why you see Istio there, but you could choose Gloo, Ambassador, or another service mesh if you want. A rough install sketch follows.
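For reference, standing up that stack looks roughly like the sketch below. This is hedged: the release version is a placeholder and the exact file names can change between releases, so check the install instructions at knative.dev before using it.

```sh
# Hedged sketch; pick a real release tag from https://knative.dev/docs/install
KNATIVE_VERSION=knative-v1.12.0
# Knative Serving CRDs and core components
kubectl apply -f https://github.com/knative/serving/releases/download/${KNATIVE_VERSION}/serving-crds.yaml
kubectl apply -f https://github.com/knative/serving/releases/download/${KNATIVE_VERSION}/serving-core.yaml
# Istio networking layer, the default used in this talk's demo
kubectl apply -f https://github.com/knative/net-istio/releases/download/${KNATIVE_VERSION}/net-istio.yaml
```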
On top of the mesh sit the two core components; I'll say two, because the more recent versions only have two: Serving and Eventing. Tekton is a separate project by itself, so you can use it with Knative, but as I mentioned, it's not really part of the Knative project. So Serving and Eventing are the base layer of Knative. On top of that, you have different products built on Knative: Google Cloud Run, and Cloud Run on GKE, now renamed Cloud Run for Anthos. IBM and Red Hat, being partners, have their own products: IBM Cloud Kubernetes Service and OpenShift. There's also TriggerMesh, SAP Kyma, and so on.

The slide shows the serverless portfolio on Google Cloud, but I'm going to focus mostly on Knative, because I don't want to make this talk very specific to Google Cloud; I want to keep it broad. Knative is the open-source piece, so it's broadly applicable for any vendor. But you do have what you can think of as a managed Knative service. That's not technically correct, because the underlying code of Cloud Run and Cloud Run on GKE is a little different from Knative, but what Cloud Run and Cloud Run on GKE give you is the same Knative capability you'd run on-prem on your own clusters, managed for you. They're both very similar to each other. In Cloud Run, everything is fully managed, so you don't really have control of the cluster. With Cloud Run on GKE, or Cloud Run for Anthos, it runs on your GKE cluster. The reason you'd sometimes want that is that a lot of companies already have a lot built on GKE, so they want to continue using the platform they're already invested in. Also, with Cloud Run on GKE you have access to the underlying cluster, so you have more control. Of course, more control means you have to manage that infrastructure, so there are upsides and downsides. If you want fully managed, "I just want to run my code and not worry about anything," that's Cloud Run.

Now, the API. Knative is a product in itself, but it can also be thought of as a reference API, because across this whole portfolio it's the same API. The advantage that gives you is that you can take a workload running on Knative on-prem, take it to the cloud and run it on Cloud Run, and bring it back. So it gives you the possibility of a hybrid cloud: part of your infrastructure locally, part in the cloud, moving workloads across seamlessly. And if you're running Knative on AKS or on other cloud vendors, you can take the workloads there too and move them across on-prem and different vendors. That possibility also exists.

So let's go into the details of the Knative components now, starting with Serving. What is Serving? Serving is the component that handles your services: the deployment of containers and their scaling. On the slides I say zero to n, but mind you, this is all configurable, because one common criticism of serverless is the cold start problem, which I'll come back to in a moment. First, here's what the basic Serving object looks like.
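Here is a hedged sketch of that basic Serving object, a Knative Service. The names and image path are placeholders in the spirit of the demo repo, not its exact contents:

```sh
kubectl apply -f - <<EOF
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-python              # placeholder service name
spec:
  template:
    spec:
      containers:
        - image: gcr.io/PROJECT_ID/helloworld-python   # placeholder registry path
          env:
            - name: TARGET             # env var the hello-world code reads
              value: "V1"
EOF
```

From this single object, Serving creates a Configuration, a Revision, a Route, and the autoscaled deployment underneath.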
About cold starts: sometimes you don't want a service instance to go down to zero, because it takes time to come back up. To prevent that, you might configure the scaling range as one to infinity, or one to whatever. You probably want an upper limit too, because you want some cost control; I don't want tens of thousands of containers running that I then have to pay for. So in practice people set lower and upper limits, but theoretically you can scale from zero to infinity.

Serving has built-in configuration and revision management, and it also enables traffic splitting between revisions. Why is that important? Because sometimes you want gradual deployment: I release a new version and I'm not sure it works, so I send 10% of traffic to it while 90% goes to the one that works, and gradually I move the traffic to 100%. So you can split traffic and do all kinds of A/B experiments. In my demo, depending on how much time I have, I'll show traffic splitting between different revisions.

Knative by design is a set of loosely coupled components, so you can pick and choose the components you want and replace the components you don't. For example, if you don't want the bundled logging or monitoring, you can take that component out and replace it with your own. It gives you that flexibility; it's a pluggable system. The same goes for the autoscaler: you can configure it, or swap it out for your own custom implementation, depending on what you want. Being open source, the system is very open by definition, and you can mix and match things the way you want. Of course, mixing and matching means more operational work, but that's the price you pay for flexibility.

Now, the Knative Serving primitives. I'll show in the demo how traffic splitting and revisions work, but essentially: you have a Configuration, which is the desired state of your application and roughly follows the twelve-factor methodology. Each deployment becomes a separate Revision, a point-in-time snapshot of your code and configuration. And a Route maps traffic to those revisions. Like I mentioned before, you can split traffic between multiple revisions; I'll try to show that in my demo.

Moving on, let's talk about the eventing framework, Knative Eventing. By definition, serverless systems are loosely coupled, event-driven architectures: you have events, you respond to those events, and you bind event producers to the different services. You have a broker, you have channels (again, the demo will make this clearer), you have different event types, and you define custom event pipelines to connect your multiple services. Events come in, and you respond. There are all sorts of event sources: the Google messaging bus, Pub/Sub; a Kubernetes event source; GitHub; an Amazon SQS source; and even Kafka, a very popular message broker that is also open source. Here's a simple diagram of the flow.
You have the different event sources and a broker, and then you have services with triggers associated with them. A service subscribes via a trigger, and the trigger collects messages from the broker, filters them, and delivers them to the service. Now, this is a simplified picture. You can actually build much more complicated setups, in the sense that you can have channels that multiple services subscribe to (the demo has that; depending on time, I don't know how far I'll get, but I can show some of how channels work), and you can have services calling each other. So you can build a very flexible event platform with this eventing framework.

For event sources, you have Apache Kafka, AWS SQS, CronJob, Pub/Sub, GitHub, just about everyone; there's a full list here. You can also write your own custom event sources. Hopefully you shouldn't have to, because all the common event sources are already covered, but you can if you choose to.

So what are the eventing use cases? You can have a cron job; a cron-style event source is the one I actually use in my demo, for example to write weekly reports. You can process IoT events, consume Pub/Sub or Kafka, and so on. You can even have a workflow triggered by GitHub. There are a lot of examples of how eventing can be used; the link below gives you some of the good use cases.

Now let's talk about Build. I'm not going to say much about it, because the original Build component has been deprecated: before 0.8 it was Build, after 0.8 it's Tekton. Build was how you would go from your source code to containers, and you could define build pipelines, but that's all deprecated now and has been replaced by Tekton. I'll cover Tekton very briefly. I won't have it as part of the demo, because it's not strictly a Knative component anymore; it's a separate project of its own. What it is, basically, is a Kubernetes-style CI/CD system. The common criticism of something like traditional Jenkins is that you have agents, and every time a build request comes in, those agents serve the request. The problem is that the agents are static: you have a fixed number, you can't easily or automatically scale them up and down, and if you're not using the system, you're still paying for them. With Kubernetes-style CI/CD, you can scale those agents up and down as you want: you have a controller, or master, and then you can spin up any number of workers and scale them with your needs. That's what Tekton brings to the table. Jenkins also has Jenkins X, which shares similar goals with Tekton; quite a bit of the code between the two is actually common. Jenkins X basically brings that Kubernetes-style CI/CD on top of traditional Jenkins.

A word on the Knative community. These are some stats, and they're actually old stats, so hopefully things are even better now. We have a lot of contributors and a lot of contributing companies, and there's a Slack channel; I think I have a link to it, and I welcome you to join and contribute. The Knative community is pretty big today. What the project gives you is autoscaled, managed workloads, and benefits like one-step deployment and so on.
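Before we get to the demo, here is a hedged sketch of what that broker-and-trigger wiring looks like as YAML. The filter attribute and the `event-display` service name are my assumptions, not the demo's exact files:

```sh
kubectl apply -f - <<EOF
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: default
---
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: display-trigger
spec:
  broker: default
  filter:
    attributes:
      type: dev.knative.sources.ping   # deliver only this event type (assumed)
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display              # placeholder subscriber service
EOF
```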
So let's move to the demo now. I'll show some of the aspects of Knative; I have about 25 minutes remaining, which is good. The demo I'm going to use is on my GitHub; the link was also on my slides, the first link. It has different examples for setup, serving, eventing, and so on. Now, I've already set up my cluster, and in the interest of time I won't repeat the setup steps, but there's a setup folder that walks through what you need to do, with scripts that run a bunch of commands. If you want to see how setup is done, you can open, say, the cluster-creation script, and it shows all the details of how I set up my cluster. For now, I have my cluster and a VM with everything set up, and I'm going to take some examples from serving and some from eventing. I'll skip the build examples, and I'll also skip the Google Cloud examples, because I want to focus on the open-source part; not everybody is on Google Cloud, and I want you to have information you can take back and use on your job even if you're not using GCP. So I'm going to focus mostly on these two parts: serving and eventing.

Okay, let's go to a simple example: hello world, the starting point for everything. I have a bunch of simple services here, and I'll show what they are. By the way, serving has a lot of samples; you can go to the Knative samples and find wonderful examples in whatever programming language you want to use. But we'll create one simple hello-world service. I have it in two languages, C# and Python; I think I'll use Python here. It's a small script, pretty simple: it has a variable defined for TARGET, and all it does is read it and return a simple string like "Hello World: V1". And there's an associated Dockerfile that shows how to build this code, which is what we're going to do.

Okay, let me go to my terminal. Hmm, something is wrong with the instance. Sorry; I think the VM got reset and everything was deleted. I'll switch back to my local terminal in that case, since I have everything set up there, and I'll zoom in on the code. So let's go to the serving examples; I can do everything from here. Hello world: we have a simple Python service, helloworld-python, and that's its Dockerfile. What I'm going to do is build this image. I'm using Google Container Registry as my container registry, but if you have Docker Hub or something else, you can use that; the build step looks roughly like this.
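A hedged sketch of the build-and-push step; the registry path and project ID are placeholders, and the demo's own scripts may use a different tool such as Cloud Build:

```sh
# Build the sample image from its Dockerfile, then push it to a registry.
docker build -t gcr.io/PROJECT_ID/helloworld-python .
docker push gcr.io/PROJECT_ID/helloworld-python
```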
It doesn't have to be Google; it can be any container registry. So I'm going to build this image with my project ID... okay, I'm sorry, there was some reset done and the images are gone. I'll try to reinstall; if it works, good, and if it doesn't, I'll just walk through the example. I wish I could have shown it live; I had everything set up in my VM and it all got deleted somehow. I could set it up again, but that would take time, so let me walk through the demo rather than run it. The demo is long and I wouldn't be able to complete it anyway.

So: you build this image and you push it, and once you push it, it shows up here in the container registry. Then I apply the service YAML file. Let's look at that file: it says, take this container (this points at the registry location in my GCP project) and define an environment variable called TARGET with the value V1. When you apply it, you can see all of the pods running and a status. To test the service, you get the external IP (that's the command for the external IP) and then you make a request to the service: you curl it, using xip.io, which is a service that turns an IP address into a usable hostname, and it displays "Hello V1". V1 comes from the environment variable.

Now let's go to a little more complicated example: I can change the configuration. Here I'm talking about multiple services; it's actually the same service with the same Docker image, but the variable value will be V2, not V1. I go back and deploy the second service with the same kind of YAML file we saw. The pods come up, and if I curl them, it says "Hello V2". I could change the container image too, but I'll just use the same one, and I could have a V3 as well; you can change it whichever way you want.

Next, this is an example of traffic splitting. In traffic splitting, you have these different revisions, and you define the split. This is the part I want to highlight; let me zoom in so it's better visible. The route says: current revision 100%, latest 0%. I can apply that and have all of these services running. The sketch below shows where the next steps take this.
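Here is a hedged sketch of the two Serving knobs these steps exercise: a 50/50 traffic split between two revisions, and autoscaler bounds. The revision names are illustrative, and the annotation spellings have varied slightly across Knative versions:

```sh
kubectl apply -f - <<EOF
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-python
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/min-scale: "1"  # keep one instance warm (avoids cold starts)
        autoscaling.knative.dev/max-scale: "5"  # upper bound for cost control
        autoscaling.knative.dev/target: "1"     # concurrent requests per instance
    spec:
      containers:
        - image: gcr.io/PROJECT_ID/helloworld-python
          env:
            - name: TARGET
              value: "V2"
  traffic:
    - revisionName: helloworld-python-00001     # illustrative revision names
      percent: 50
    - revisionName: helloworld-python-00002
      percent: 50
EOF
```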
When I deploy the new version, you see V1 and V2, and statistically I can split the traffic 50/50. So now I'm splitting the traffic 50/50: every revision has a different name, I give each one 50, and I apply the change to split the traffic. If I then run requests multiple times, statistically I should get roughly a 50/50 distribution. So that's the traffic-splitting example.

I can also configure the autoscaling. In the autoscaling configuration I can define some parameters; I have Fortio as a load-test tool to drive traffic, and the service will actually scale down to zero. The way you configure it is by defining min-scale and max-scale. If I look at the service YAML file here, it defines a target of one, and it says min-scale is one and max-scale is five. So my autoscaling will have at most five instances and at least one. This matters for preventing the cold start problem. Usually, in a big system, you have multiple tiers: tier one is where the customers interact, and then you have tier two and tier three behind it. You typically don't want a cold start problem on your tier-one services, because that directly impacts your customers: the service is unavailable or slow to respond while it boots up, and the customers notice. So tier-one services generally should not autoscale to zero; they should generally keep at least one instance to prevent cold starts. The tier-two and tier-three services can scale zero to five without a problem. So that's your autoscaler: with this configuration you can scale up to five, or down to one, or to zero if you allow it.

Next is the Cloud Run deployment. I could deploy to Cloud Run; these are the three services in my Cloud Run portfolio. Cloud Run as of now is only available in certain regions; it's released, but it's not available everywhere yet. So I activate my gcloud configuration, push the container to the registry, set the project, tag it, and deploy. Let me see if I can at least do a demo on Cloud Run; that should be easier, whereas reinstalling Knative would be a more difficult task. My instance got rebooted, and all my files and the entire cluster setup went away, and I don't think I'll have time to fix this. Okay, so I'll walk through deploying to Google: you have a choice, Cloud Run or Cloud Run on Anthos, where you pick your cluster. You deploy, it gets deployed to the region, and then, great: it's deployed, serving 100% of traffic. You can then access Cloud Run from the console: you go to the navigation panel, go to Compute, and pick Cloud Run, and you see the service instances and can manage them.
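For reference, the same deployment from the command line looks roughly like this (a hedged sketch; the service name, image path, and region are placeholders):

```sh
# Deploy a container image to fully managed Cloud Run.
gcloud run deploy helloworld \
  --image gcr.io/PROJECT_ID/helloworld-python \
  --region us-central1 \
  --allow-unauthenticated
```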
Could I do it from here? I don't have "allow unauthenticated" set, and I don't have an image in my registry right now, but you would see all the revisions here, and once you click in, you get the YAML; this is the Knative-compatible YAML API that you have through Cloud Run. You can test the service in the console, in the sense that you can do simple testing, update the TARGET variable, and so on, and then you deploy to Cloud Run. (The gRPC sample works the same way.)

Now, eventing. You basically have a broker, a trigger, and a service. It's open source, so, sorry for the demo mess-up, but you can clone the repository and play with it yourself; I'll just walk through it. You have commands to check that the eventing pods are running. You can check the broker in the namespace with a get-broker command; you have the default broker. Then you have event-display, a Python sample for displaying events: you put a message in, and it just prints the message. You build and push it, and you create the Knative Service for it; the YAML names the service and the container. Then, once you also have the trigger, you install a curl pod, and when you curl the broker and post a message, you can see it arrive: if you look at the logs of the event-display container, you see the message there. So that's basic eventing.

Let's go back to the docs. You have simple delivery: a source and a service, each with its YAML file. You apply them, verify the pods are running, and once they are, you see the "hello world from cron" messages. That's simple delivery. Then you can go to more complex delivery, where you have the concept of a channel. A channel is a persistence-and-forwarding layer that can have multiple subscriptions; it's not exactly like Kafka, but there are similarities. Multiple services can subscribe to a channel you define. There are a number of channel implementations; if we look at the available channels, there's the Pub/Sub channel, the Kafka channel, and the in-memory channel. We'll use in-memory for the demo. You create the channel, you have the source code for your different services and subscriptions, and once you apply all the subscriptions, you see the messages coming from the containers. That's eventing with channels. And then you have complex delivery with reply: you have a source, a channel, and services, and a service can reply via a subscription to another channel, from which a third service gets the message. If you follow those instructions, you see the hello messages come through, and then you have the reply. That's the chained-reply example.
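A hedged sketch of that channel-based wiring. The demo used the cron-style source of the time; in current Knative the equivalent is PingSource, and all names here are placeholders:

```sh
kubectl apply -f - <<EOF
apiVersion: messaging.knative.dev/v1
kind: InMemoryChannel
metadata:
  name: demo-channel
---
apiVersion: sources.knative.dev/v1
kind: PingSource                        # fires an event on a cron schedule
metadata:
  name: cron-source
spec:
  schedule: "*/1 * * * *"               # every minute
  data: '{"message": "Hello world from cron!"}'
  sink:
    ref:
      apiVersion: messaging.knative.dev/v1
      kind: InMemoryChannel
      name: demo-channel
---
apiVersion: messaging.knative.dev/v1
kind: Subscription                      # delivers channel messages to a service
metadata:
  name: display-subscription
spec:
  channel:
    apiVersion: messaging.knative.dev/v1
    kind: InMemoryChannel
    name: demo-channel
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display
EOF
```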
There's also an example of broker and trigger: you have a broker, you have different triggers; you install the broker, you have the different sources, you install the triggers, and then you get the messages. And with that, I'm pretty much running out of time, so feel free to play around with the demo; you have all the instructions you need there. If you don't have a Google Cloud account, you can sign up for a trial account; they give you something like $300 of credit, and it will serve you well for a couple of weeks, so you can play around with the demo without paying anything, FYI. My contact information is here; if you have any problems, feel free to contact me. Thank you very much, and I'm open to questions. Thank you.