The way I have structured this talk: I'll be talking for about half the time, and for the remaining half I have a demo. I'm only going to show part of the demo because I have limited time, but the demo and everything related to it is on GitHub, so I'll share the link, and feel free to play around with it.

So let's get started with the serverless model. What do we mean by "serverless"? There are actually two dimensions to it: the operational model and the programming model.

The operational model is the infrastructure point of view, the perspective of whoever maintains the infrastructure. What serverless really offers is no infrastructure management: you don't have to manage it yourself. It obviously still has to be managed, but it's handled by the system. That's good in that you don't have to do it, but it's also a limitation: if you want to optimize things yourself, you can't really, because the system is taking care of it. Second, because the entire infrastructure is handled by the system, security is largely managed for you too. And the third and biggest benefit of serverless is that you pay for usage. In typical VM- or container-based systems, you're actually paying for capacity: once you deploy your virtual machine to the cloud, the machine is running whether there's load or not, so you're paying for it even when you're not using it. With serverless, we're talking about auto-scaling: if your service isn't being used, it scales down to zero, and if it is, it scales up to a limit you define, or to whatever limit is required. So theoretically you have zero-to-infinite scaling. Obviously there are practical limits on how far you can scale, and sometimes you'll want to cap the autoscaler at a certain limit, because if you scale too much, you pay the vendor too much.

That's the operational model. Moving to the programming model: the way serverless works is that you have a set of services. Some people like to call them functions, some call them services; it's basically the same thing. If you're coming from AWS Lambda, they call it a function. Azure has multiple offerings, Azure Container Instances and Azure Functions; Azure Functions calls the service a function, whereas in the container offering it's probably just called a container. Essentially, a service is a computational unit, and services communicate with each other via events, so you have an event-driven architecture. Events go to a service, and the service responds to them. If the service is scaled down to zero and an event comes in, it wakes up, processes the event, returns a result if it has to, and then scales back down to zero. When it's at zero, you don't pay for anything, because no instances are running. And if a lot of events come in and a single instance can't handle them, it automatically scales up. That's how the auto-scaling works. It's an event-driven platform.
And the third characteristic, which is specific to serverless containers, the domain I'm actually talking about, is that it's open: you don't have any kind of vendor lock-in. Vendor lock-in is a common criticism of the typical serverless model, and the reason is that when your infrastructure is handled by the vendor, you depend on the vendor for everything, and vendors don't have identical offerings. They're similar, but not identical: the way AWS Lambda works is somewhat different from the way Azure Functions works, which is somewhat different from Google Cloud Functions. So there's an inherent vendor lock-in when you use typical serverless. Here, we're talking about a more open model, and I'll explain why it's open and why there's no vendor lock-in.

Before I go into that explanation, let me take a step back and talk about containers. Think of a Docker container as a kind of mini virtual machine. It's not really a virtual machine; it's more like a stripped-down one. And the beauty of containers is the extensive set of base images: there's a container image for just about any programming language, any binary, any library. I'll show a little of this in my demo. You can pack your computational unit into a container and deploy it anywhere. Docker containers are an industry standard, so no matter which cloud vendor you go to, or even your own on-premise system, containers are supported. That's why the model I'm talking about is open: it's based on containers. These are serverless containers, so your services are packed into containers, the containers are vendor-agnostic, and you can deploy them to any cloud or any on-premise infrastructure.

Now, there's another reason containers are very popular: a container packs your business logic and the runtime environment into a single unit. Your code runs in some environment, and differences in environment can make the code behave differently. That's a real problem: you develop your code, you test it locally, everything works fine, great. Then you deploy it to the cloud and suddenly things start falling apart, and you have no idea what happened. Well, what happened was that the environment where you tested was different from the environment where you're running, so the code behaves differently. These are very problematic situations. The good thing about containers is that the environment is packed with your code, so the way the system behaves on my local machine is the way it's going to behave anywhere I run it, because the entire runtime, all the base libraries and all the dependencies, is packed into one unit. Containers are, like I mentioned, an industry standard; you can look at the Google Trends curve, and they're used practically everywhere.

So we have containers, these units of computation into which we pack our services, and we have deployment. The question is: how do we deploy containers? How are these services orchestrated? That's a whole separate problem.
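Before getting to orchestration, here is the "environment packed with your code" idea as a minimal sketch of a container definition; the base image, file names, and entry point are placeholders, not the ones from my demo:

```dockerfile
# The runtime (base image), the dependencies, and the code all travel
# together as one shippable unit: same behavior locally and in any cloud.
FROM python:3.9-slim                  # runtime environment, pinned
COPY requirements.txt .
RUN pip install -r requirements.txt   # dependencies baked into the image
COPY app.py .
CMD ["python", "app.py"]              # hypothetical entry point
```

Build it once with docker build, and the resulting image runs identically on a laptop, on-premise, or on any cloud.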
Now, there are several solutions, open source and commercial, that deal with this, and the one that is by far the most popular is Kubernetes. Kubernetes is an open source project; you can see the stats on contributions and users, and this slide is a little old, so I'd expect those numbers to have gone up. It's just about pervasive: not everybody, but most people who orchestrate containers use Kubernetes, which is by far the most popular open container orchestration system. There are other orchestrators too: there's Docker Swarm, and there's Mesosphere, which competes with it. I think Mesosphere came out of Twitter: they had their internal system, and I don't know whether they open sourced it or reinvented it, but I think the person who built it then started his own company. And Kubernetes comes from Google. In fact, Google has its own internal container orchestration system called Borg, and Kubernetes is an open source counterpart, although it does not share the same code base as Borg: Borg is written in C++, while Kubernetes was written from the ground up in Go, and all the Google-proprietary parts were removed. So Kubernetes is completely open source, and I think it's now stewarded by the CNCF.

What Kubernetes basically does is let you describe the state of your infrastructure; it's a declarative system. When you're running services at scale, things happen that are beyond your control: machines crash, outages happen, and so on. So you describe the ideal state of your system, and Kubernetes will try to maintain that state. Every time there's a deviation, for any reason whatsoever, maybe an outage happened or some container crashed, it will shut the broken piece down and restart another container. It's managing and orchestrating all those containers. Think of it as an operating system for the data center; that's the right mental model. And there are a bunch of things Kubernetes does: scheduling, lifecycle management, scaling, load balancing, logging, monitoring, all sorts of things. But basically, Kubernetes is a platform for orchestration.

Now, Kubernetes being a platform makes it a great starting point for building other platforms; it's a very open system. People rarely use Kubernetes as it is; they usually build a layer on top of it, which is why people describe Kubernetes as a platform for building platforms. It's almost never the end product, but it's a great starting point, and you can extend it: it has an open API, so you can extend it any way you want. One such extension is Knative. Knative is a completely open source project that provides all the building blocks for serverless: it's a Kubernetes-based serverless programming model. I'll go into the stack and the details of the different Knative components, but that's what it is. Primarily, Knative has two components. Early versions of Knative, prior to 0.8, actually had three: Serving, Eventing, and Build. Build has since been deprecated and moved into its own project called Tekton. I'll briefly talk about Tekton and Build, but I won't cover them in depth because they're not really part of Knative anymore. Those are the primary components, and what Knative gives you is all the ingredients you need for serverless computing.
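Before going deeper into Knative, here is the "describe the desired state" idea from a moment ago in plain Kubernetes, a minimal sketch with hypothetical names; you state how many replicas you want, and Kubernetes keeps reality matching the description:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service                 # hypothetical name
spec:
  replicas: 3                      # desired state: three copies, always
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
      - name: my-service
        image: gcr.io/my-project/my-service:v1   # hypothetical image
```

If a node dies and a replica disappears, Kubernetes notices the deviation from the declared state and schedules a replacement on its own.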
Knative itself was developed by a consortium of leading companies, Google, Pivotal, Red Hat, Uber, SAP; you can see the names. And that's the website, knative.dev; I'll probably go to it at some point later on. Knative is a completely open source system, so it implements the learnings of all these industry giants.

Very briefly, Serving is how you actually run your container: all the aspects of auto-scaling, routing, everything, are handled by Serving. It's serving requests; that's where the name comes from. Eventing is the event framework. Remember I said that in serverless computing these services, or containers, talk to each other through events; the framework for capturing events, the channels, the brokers, all of that, is Eventing, and we'll look at it a little later too. And Build, prior to 0.8, was how you took your source code, put it into containers, and deployed it on a cluster; that has now been taken over by Tekton, which we'll talk about.

But let's first look at Knative from an operator's point of view. Knative, like I mentioned, is a platform that runs on top of Kubernetes, and it abstracts away all the operational complexity; the infrastructure just fades into the background. Kubernetes is universally supported by all major cloud vendors: Google has Google Kubernetes Engine, Azure has AKS, the Azure Kubernetes Service, and I think AWS calls theirs EKS, the Elastic Kubernetes Service. And then you have Kubernetes itself, which is open source, so you can deploy it on your own cluster. Like I said, it's a platform for building platforms: it's very extensible and has an open API, and you can go to the GitHub page, where the entire Kubernetes source code is available.

That's the operator's point of view on Knative, which isn't really that interesting, because the interesting part is the developer's point of view. So let's look at that. As developers, what we want is to write code and build our systems. We don't want to handle infrastructure; I don't want to do any kind of deployment, monitoring, or logging. I just want to write the code and have something that automatically orchestrates it, and it works. That's what Knative brings to the table: all the ingredients, all the boring parts you'd otherwise need to do, are done for you. All you have to worry about is writing your own code, and everything else is taken care of. Depending on how much time I have, I can show some parts of Knative in the demo; otherwise, the demo is on GitHub and I'll share the link regardless, so you can definitely play around with it in your spare time.

So this is the Knative stack. It consists of the underlying platform, which is Kubernetes. On top of Kubernetes is the service mesh; Istio is the default, although you can swap Istio for Gloo or Ambassador. And then you have the primitives, Serving and Eventing, with Tekton now sitting separately. That's the base, and on top of it you have several commercial or open source products: Red Hat provides OpenShift, IBM has its cloud Kubernetes offering, and Google has Cloud Run and Cloud Run for Anthos. So today I'll basically be talking mostly about this, this, and this on the slide; that's the part I'm actually going to cover.
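To give a feel for that developer experience, here is roughly all you have to declare for a Knative service, a minimal sketch with a hypothetical name and image; routes, revisions, and scale-to-zero come with it:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                             # hypothetical service name
spec:
  template:
    spec:
      containers:
      - image: gcr.io/my-project/hello:v1   # hypothetical image
```

Apply this one file with kubectl apply -f, and Serving creates the route, a revision, and an autoscaler around your container; no Deployment, Service, or Ingress objects to hand-write.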
Predominantly my talk will be on this side, and a little bit over here. But of course, feel free to explore the competing products too; they're also all great.

So what are the benefits of Knative Serving? Well, serverless gives you automatic scaling, and one great thing about Knative is that it seamlessly scales up and down. The second thing, and depending on time I may show parts of this in the demo, is built-in traffic splitting. Every time you deploy a new version of a service, you create what is called a revision, a point-in-time snapshot of your code, so it kind of follows the immutable-infrastructure paradigm. With multiple revisions, you can split traffic: you can say 10% of my requests should go to the first revision and 90% to the second. Why is this important? Because you want gradual rollouts. If there's a bug or some problem in your new code, you don't want 100% of the traffic hitting it. So you start with 10/90, make it 20/80, and gradually the new code takes all the traffic. That's why built-in traffic splitting is very important; there's a sketch of what it looks like at the end of this section.

It integrates with the network and service mesh; we already saw that the service mesh is the underlying layer. It's transparent to you, so you don't really have to deal with it; it's handled automatically, just know that it exists. And it gives you an object model that's easy to reason about: you can see your services as distinct objects. When you look back at the stack slide and think about it, Knative is really not one system but a set of loosely coupled components, and you can mix and match them any way you want. I can say I want Serving but not Eventing, because I want to use my own event framework. Or I want Serving and Eventing, but I don't want Istio; I can have something else in its place. You have all the flexibility to pick the components you want; that's the way the system is architected, as loosely coupled components. You can even run Serving version one with Eventing version two; they don't have to be the same version. It's a pluggable system. The autoscaler can also be tuned, and subject to the time available, I can show in the demo where that happens.

Now let's get to events. We've talked about the basic unit of computation, Serving; now we're talking about Eventing. You have event-driven services, an event-driven architecture: the system responds to events. What the Eventing framework gives you is a set of event producers and a set of services. If you go to the Knative website, knative.dev, let me zoom in a little, you'll see it talks about Serving, Eventing, and a whole bunch of other things. You can go into the documentation, which covers installing the cluster and the Serving and Eventing components, and somewhere here there's an event registry listing the supported event producers, the event sources. Okay, that's what I was looking for.
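Here is that traffic splitting, together with autoscaler tuning, as a Knative Service manifest; this is a minimal sketch, and the revision name and scale bounds are hypothetical:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "0"   # scale to zero when idle
        autoscaling.knative.dev/maxScale: "10"  # cap how far (and how much) you scale
    spec:
      containers:
      - image: gcr.io/my-project/hello:v2
  traffic:
  - revisionName: hello-v1       # hypothetical earlier revision
    percent: 90
  - latestRevision: true         # the new code gets a 10% canary
    percent: 10
```

Nudge the percentages toward the new revision as confidence grows; that's the gradual rollout story from above.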
So you have different kinds of event sources, like Apache Camel and Kafka, which is a very popular open source option. You can have GitHub webhooks, so you can hook into GitHub workflows. Obviously there are commercial cloud offerings like AWS and Google, but you don't have to use Google to use Knative; FYI, it's a completely open source project. So there are different event bindings available, you can even go multi-cloud, and you can have custom event pipelines.

Now, Build: prior to 0.8 we had a Build component; it has since moved to Tekton, and the Build stuff is deprecated. So in the interest of time I won't talk about it, but essentially what it was is how you go from source code to container images in a repository. What it's replaced by is Tekton, an open API spec for CI/CD pipelines. Tekton itself is open source, so you can look at the source code and the documentation. Very briefly, the concept: Tekton itself is based on Kubernetes. Imagine a traditional CI/CD system like, say, Jenkins: you have a master and some fixed number, n, of build agents. The problem is that your n is fixed. Of course you can scale up and down, but you have to do it manually, not automatically; I think newer versions support some kind of automated scaling, but classically you're always paying for n agents even when you're not using them. Since with Kubernetes and serverless we're moving to a pay-as-you-go model, Tekton takes that philosophy and says: I have a controller, but I don't want a fixed number of agents. Every time I need to process something, I just create a worker, do the processing, and I'm done. It handles all the scaling up and down for you. Of course it's open source and configurable: you can say I want at least one worker, or at most n workers, all those settings. But that's what it's doing: taking the pay-as-you-go philosophy of serverless and bringing it to traditional CI/CD tools like Jenkins or TeamCity or whatever. And what is Knative's value here? One-step deploys, auto-scaling, managing all your workloads, all of that; that's what Knative is really offering.

So now let's talk about Cloud Run. If you look at the stack I was showing, this is Cloud Run and Cloud Run for Anthos; it used to be called Cloud Run for GKE, and they renamed it to Cloud Run for Anthos. So this is the part that's built on top. The slide is slightly incorrect, and I'll explain why: it gives the impression that this product literally uses that Knative component underneath, which is technically not true, because internally it's implemented in a different way. But let's get back to Cloud Run. In a nutshell, Cloud Run is a managed Knative offering. With Knative, everything is done for you, but you still have a cluster that you have to manage; that cluster is not automatically managed. Knative assumes you have a Kubernetes cluster you have full control over, and it runs the serverless workload on top of that cluster. Here's the problem: the whole point of serverless is that I don't want to handle infrastructure, but now you're giving me the headache of maintaining a cluster. That's problem number one. Problem number two: I'm still paying for a cluster even when I'm not using it.
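Hold that thought on Cloud Run for a second; here is the Tekton idea from above as a concrete sketch. A Task is a list of steps, each step is a container, and the whole thing only becomes a pod when a TaskRun asks for it (names and images here are hypothetical):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: build-and-test             # hypothetical task name
spec:
  steps:                           # each step runs in its own container
  - name: test
    image: golang:1.17
    script: |
      echo "fetch sources, run tests, build the image here"
```

There is no standing pool of build agents: creating a TaskRun spins up a pod, the pod does the work and goes away, and you pay for nothing in between. Now, back to Cloud Run and those two problems.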
What I would want is a completely serverless offering where I just don't have to do anything: I don't want to manage the cluster, and I don't want to pay for the cluster when I'm not using it. And that is what Cloud Run brings: it is a managed Knative offering. There are actually two products under Cloud Run: there's Cloud Run, and there's Cloud Run for Anthos, which used to be called Cloud Run for GKE.

Cloud Run: remember I mentioned Borg, Google's internal container orchestration system? Cloud Run runs on Borg, the internal one. You don't have access to the clusters, you don't have to manage the cluster, and you only pay for what you use. Of course, you don't have to manage it, but you also don't get to manage it: even if you want to fine-tune or optimize, you can't. That's the downside of everything being automated. Cloud Run for Anthos, on the other hand, assumes you have a GKE cluster, and it runs on top of it.

Now, why would you want two separate products? Why not just have everything fully managed, or everything on a cluster? It turns out that GKE, or Kubernetes in general, is, as I said, a platform for doing many things. A lot of people use Kubernetes not just for serverless but for any number of things, and they don't want a setup where everything else is based on Kubernetes and, on top of that, you hand them a fully managed system that's completely different, so now they have to deal with two systems. What they want is: I have this GKE cluster available to me, everything runs on top of it, so I want my serverless workloads to run on it too; I don't want to deal with two separate systems. So people who are already running workloads in some capacity on GKE would probably go for Cloud Run for Anthos, and people who don't have GKE, or who don't want to maintain any cluster, would go the other way, to the fully managed Cloud Run offering. So that's why there are two. In fact, if you count, there are arguably three: Cloud Run, Cloud Run for Anthos, and Cloud Run fully managed.

And the interesting aspect is the API. Knative is a product, but Knative is also an API, and I'll show in my demo that it's the reference API for all of these products. I can have my workload running on-prem on Knative and move that workload to Cloud Run seamlessly. I can have a hybrid cloud where part of it runs on-prem and part runs in the cloud. That is the ability this gives you: a platform, basically a way to do hybrid cloud or multi-vendor cloud. I can have part of my services running on Azure, part on Google Cloud, part on AWS, and part on-prem, and not only do all of these systems behave coherently with each other, I can move my workload between any vendor or any platform without changing a single line of code. That's the beauty of it. It's portability based on Knative: Knative provides you with the API and the runtime environments so you can run serverless on-prem or in the cloud, on Google Cloud, on Microsoft's cloud, on Amazon's cloud, on IBM's cloud; whichever cloud you're running on doesn't matter. So what Cloud Run gives you is container-to-production in seconds, one experience wherever you want; we talked about hybrid cloud, and it's natively serverless.
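As a sketch of that "one experience wherever you want" claim: the same image can be pointed at either flavor of Cloud Run by changing the platform flag. The service, project, cluster, and region names here are hypothetical:

```sh
# Fully managed Cloud Run: Google runs the infrastructure.
gcloud run deploy my-service \
  --image gcr.io/my-project/my-service:v1 \
  --platform managed --region us-central1

# Cloud Run for Anthos: the same workload lands on your own GKE cluster.
gcloud run deploy my-service \
  --image gcr.io/my-project/my-service:v1 \
  --platform gke --cluster my-cluster --cluster-location us-central1-a
```

The container and the Knative-shaped service definition stay the same; only where it runs changes.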
Now, here I'm actually going to go into a demo; from here on it's mostly demo, and I have about 28 minutes, so I can show you a good part of it. Let's switch over. Here I have things already set up; this bit you can ignore, because I was just playing around with the demo and testing it earlier. Let's go to Cloud Run. This is Cloud Run in my GCP project, and what I'm going to do is deploy a service. This one here is an earlier service that I'll simply delete, because I deployed it while testing, so don't worry about that.

In my demo I have an example, and there was a reason I picked this example: an online service that takes a Microsoft Word doc and converts it into a PDF. This is actually a real use case; a customer wanted it for invoice billing. The reason I picked it is that you cannot do this on any kind of cloud offering where you just upload your code: you can't do it on AWS Lambda, you can't do it on GCF, you can't do it on Azure Functions. Containers give you the ability to deploy a binary; Lambda or GCF or Azure Functions won't do that. You can deploy your code, but you can't deploy your binary.

So let's look at the Docker container. That's the container definition: I take a Python Alpine base image, and I have an app running on the Python Flask framework. You don't have to go into the details of the framework; it's a very simple one. But the part I want you to focus on is this: while building this image, I'm adding LibreOffice, which is like OpenOffice and can read Microsoft formats. So I have this binary installed in the container, and then I'm going to deploy this container.

Now let me look at the code. This is standard Python; much of it is boilerplate. I'm starting a server here. I'm going to take an input, the doc file that gets uploaded, plus some housekeeping depending on the kind of request. And this is the part to focus on: essentially, it calls the binary, the LibreOffice binary, converts the document to PDF, and gives it back. Okay? So it's a simple demo.

Now let's actually go and do it. The first thing I'm going to do is build this container. Remember what I mentioned: I'm going to use Google's remote build service for this, but it doesn't have to be that way; you could build it locally, that's not a problem. So the command is gcloud builds submit, and I tag the image for my registry: gcr.io, the Google Container Registry, then my project name, cloud-run-demo, and my image name; it's a long name, so I just want to make sure everything is correct. I actually misspelled the flag as "tar" instead of "tag" at first, and it did tell me I was doing something wrong; okay, fixed. So now I'm building the container, using the remote service; I could have built it locally.
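The heart of that code, the part that calls the binary, looks roughly like this. It's a sketch with assumed names and routes; the real version is in the linked GitHub repo:

```python
# Rough sketch of the doc-to-PDF endpoint (names are assumptions).
import os
import subprocess
import tempfile

from flask import Flask, request, send_file

app = Flask(__name__)

@app.route("/", methods=["POST"])
def convert():
    # Save the uploaded .docx into a scratch directory.
    upload = request.files["file"]
    workdir = tempfile.mkdtemp()
    src = os.path.join(workdir, upload.filename)
    upload.save(src)

    # Shell out to the LibreOffice binary baked into the container
    # (the launcher may be "soffice" depending on the image).
    subprocess.run(
        ["libreoffice", "--headless", "--convert-to", "pdf",
         "--outdir", workdir, src],
        check=True,
    )

    # LibreOffice writes <name>.pdf next to the input; send it back.
    pdf = os.path.splitext(src)[0] + ".pdf"
    return send_file(pdf, mimetype="application/pdf")

if __name__ == "__main__":
    # Cloud Run injects the PORT environment variable.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```

The whole point is that subprocess call: a Lambda-style "upload your code" platform has nowhere to put that binary, while the container carries it along.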
And once this container is built, it's going to go to my container registry, and then I can take it and deploy it anywhere I want, in any cloud or on any local premises, because, like I mentioned, containers are an industry standard. Now, it's building this as layers, and it actually takes a rather long time to build. Usually it doesn't, but in this particular case the problem is that I'm installing this entire thing, think of it as Microsoft Office for Linux, into the Docker image, and that's what takes time. So I'll wait about 60 seconds; if it finishes in 60 seconds, fine, otherwise I'll just move on. And, okay, it did, pretty fast actually, which is good; I had some layers cached, which is why it was able to do that. Once it's built, it tags the image, and tagging here means it gets pushed to the registry.

And this is my container registry. Okay, so I go to my registry, where is it, yeah, Container Registry. These are all my container images, and you can see down here: I did build this before, about an hour ago, when I was testing my demo to make sure everything works. So this image was uploaded an hour ago, and as the new one builds, in a couple of minutes there will be a new image appearing here. In the interest of time, I'll let it build but not actually wait for it; I'll show the deployment instead.

So I go to Cloud Run and say I want to create a service. I pick a region; I'll just pick the US one, that's fine, it doesn't matter. Here's the choice: this is fully managed, and this is a GKE cluster. If I pick GKE, I have to have or create a cluster, and I don't have one right now, so I'll use the fully managed option. I name the service; I'm just going to call it agile-india. "Allow unauthenticated invocations": do you want anyone out there to call this, or do you want some kind of security built in? It uses Cloud IAM, identity and access management, but let's just say I'm allowing anybody to connect; I'm not worried about security right now. Then it asks: deploy one revision from an existing container, or continuously deploy from a source repository? If you have a repository, every time you push, it deploys a new image. I'm just going to do it one time. So I pick from my demo containers, and this is the container I built an hour ago, because the one I'm building right now is still building; it takes time, we're installing an entire office suite in there. I've selected it, and I click Create.

So now it's deploying my service. Here it is; let me scroll down a little. It shows the service account that was created, so it's handling all the security automatically, the container, the environment variables. And this is the part I wanted to show: the YAML. Remember I said the API for all these products is the same? It's still using the Knative API; this YAML is actual Knative YAML. Which means I can copy it, I won't have time to do this now, but I can copy it into a new file and then run a kubectl command against an existing Knative cluster.
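That round trip, pulling the Knative YAML out of Cloud Run and applying it to your own cluster, would look something like this. Names are hypothetical, and while I believe gcloud's export format emits Knative-compatible YAML, a few Cloud Run-specific annotations may need trimming:

```sh
# Export the running service as Knative YAML from the managed platform...
gcloud run services describe agile-india \
  --platform managed --region us-east1 --format export > service.yaml

# ...and apply that same YAML to any Kubernetes cluster running Knative.
kubectl apply -f service.yaml
```

Same object, two very different places to run it; that's the portability argument in two commands.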
And that would deploy this service, the one that's right now deployed on the cloud, onto your own on-premise cluster. This is the killer feature in today's cloud computing, because people want hybrid computing: I have a service, I want to run it locally, I want to run it on the cloud, I want to run it on-premise, and I can move it anywhere I want.

So right now the service is up, and that's the endpoint. This is the service I built; it converts a doc to PDF. So I'm going to open up Microsoft Word; something's a little off with MS Word on this machine, but there we go: "Hello, India. How are you doing?" Some random text, and let me bump the font size up to 36, something like this. I save this file to my desktop as a test file and close Word. Then I go here, say I want to convert this file, and it uploads. It feeds the file to that LibreOffice binary and says: convert it to PDF. And you'll have a PDF file very soon; and there it is. Now let me download this PDF, let me just download it to the desktop, and you can see the downloaded PDF has appeared. That's the file it just converted from MS Word.

So this is a cloud service that, literally right now, anybody in the world can call at this URL. Of course, you can also map your own custom domain onto the service. But what you have is a service that was deployed in minutes. And that's the API you can use if you want to deploy it on another cloud, anywhere you have a Kubernetes cluster.

There are logs, and there are revisions. Right now we have only one revision; like I mentioned, every time you deploy, you create another revision, so when I deploy a second time, it will create a new one. These are the logs you can get, and there are some statistics it can show. Right now, because we just called the service, the service is running, but if I don't use it, it will actually shut down, because that's what serverless is. So from the logs you won't see anything yet, but if you check back on this link later, you'll see it has all been scaled down, because it's serverless computing: if there are no requests, there are no instances. The stats take a little time to show up, at least an hour or so, so I can't show them now since I just deployed, but if you look over time you can see how many requests you're getting, and you can get all sorts of logs.

Now, meanwhile, this has deployed, so let's go back to our container registry. I don't have a lot of time left; how much time do I have? Okay, about five more minutes. So let me go back to my container registry, and let's see: I just got a new image. Let me zoom in a little. A new image appeared two minutes ago; that older one was the one I deployed the service from, but this one is the image I just built. This is the new image.
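For reference, calling the deployed converter from a terminal would look something like this. The upload field name and path are assumptions (the real ones are in the repo), and the hostname is just the pattern Cloud Run generates for a service:

```sh
# Upload a Word document, get a PDF back (hypothetical field and URL).
curl -F "file=@test.docx" \
  https://agile-india-xxxxxxxx-ue.a.run.app/ -o test.pdf
```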
Now, I deployed the first service through the UI; what I can do instead is deploy using the CLI: gcloud beta run deploy, passing the image. I know what my image is, it's this one, so I'll just copy it and pass it to the deploy command. It asks me the same set of questions: where do I want to deploy, fully managed; my project is already set up; and it asks for a region. The first one I deployed in the US, so this one, let me deploy in Asia, somewhere near India, Asia East or thereabouts. Allow unauthenticated invocations? Absolutely, I want that. And it's deploying it.

So now that's getting deployed. Just FYI, while it's deploying, it's going to take a couple of minutes, I encourage you to look at the Knative website. This demo I've actually open sourced, so let me show you the link; this particular demo is present here. If you go to the Cloud Run demo, I don't know if you can read the link from there, but it will be on my slides anyway. So this is the link; if you want to play with the demo in your spare time, please, you're welcome to.

And I'll show you another one. I won't have time to show the Knative demo itself, because what I wanted to show was how I can take this workload that I deployed on the cloud and move it on-prem. I won't have time for that, but I can definitely walk through a few things. So this is the Knative demo; again, it's open source, so feel free to play around with it. That's the link; I'll include it in the slides. I have a few minutes remaining, so I'll walk through the demo very quickly, and then I'll take questions.

It's self-serve: you can set up your own cluster, and it gives you all the instructions. There are scripts that install the Serving component and the Eventing component; they're all scripts, so you can actually look at the code, and it's essentially a kubectl apply of the latest Eventing release, the 0.18 version or so, probably whatever the latest is. So you build this, and then you can try all sorts of things.

Let's look at traffic splitting, since we talked about it. I can create multiple versions, so I have multiple revisions: a hello-world V4 and a hello-world V1. And I can say: route 50% of the traffic to the first one and 50% to the second. Then if I run it enough times, statistically, if you run it long enough, you should see V1 called 50% of the time and V4 called 50% of the time. There's a Cloud Run version of this demo too, and I kind of almost showed it already, so I won't go into it.

Meanwhile, the new service has deployed. So let me go back to my cloud console; I've seen the container registry, so I'm now going back to Cloud Run. And look at that, it actually created another service. The first one I named agile-india; this is the one I just created, and you can see it's in Asia East. I could have created another revision of the first service instead; had I done that, you would have seen another revision there. Right now, there is only one revision. Anyway, let me move on; I have about five-ish minutes, so I'll talk for two more minutes and then take questions.

Yeah, so, the demo: we can do the same thing for eventing. You have a service, you have a broker, think of it as something like Kafka, and you have a trigger.
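As a sketch of how those pieces wire together: a Trigger subscribes a service to events flowing through a broker, with an optional filter on event attributes. All the names here are hypothetical:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: hello-trigger                     # hypothetical trigger name
spec:
  broker: default                         # the broker events flow through
  filter:
    attributes:
      type: dev.knative.samples.hello     # only pass events of this type
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: hello-display                 # hypothetical consumer service
```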
You can subscribe to events. If you go through this demo, you'll see that you can run the pods, install the brokers, set up a trigger mechanism, and then there's another service whose only job is to say "Hello from Knative." It puts this message, "Hello from Knative," right here; yeah, this is it. And you can see the event being triggered: the event goes to the broker and then to this service, which we deployed as part of the demo. The trigger invokes the service because the service subscribed; it sends the message to the service, the service gets invoked, and it responds.

There's also more complex delivery with replies, and you can have different kinds of channels. Those of you familiar with Kafka will already know the concept of topics. Sometimes you want to fan out: I want this event to go not just to one service but to a number of services, all the services interested in a certain topic. So you have multiple channels: I send an event to a channel, multiple services take out a subscription, and anybody with a subscription can just listen to that event. These are the commands you run, and if you follow the demo, you'll actually see in the logs how the event flows, "Hello from Knative": this is your producer, and this is your consumer. You have replies too. Like I said, it's a big demo, and I definitely don't have time to show it all, but feel free to explore; the link is there, and it's all open source. And you can do all sorts of fun things: integrate it with the Vision API, integrate it with voice. In one of the demos, I think, you feed it an image of a cat or a dog and it identifies which it is. You can build translation, hello world from English to Spanish. You can do all sorts of things, and there are good examples there. And last but not least, I'd encourage you to go back to Knative itself. It's completely open source; this is the GitHub repository. Eventing and Serving are the two main repositories, but there's a whole bunch of them you could explore.

Yeah, we have reached the end of the allocated time, but we can take another few minutes, probably. I'll keep the answers brief, but if somebody wants to talk more, feel free to contact me afterwards.

What is my view on open source serverless tools? That's hard to answer, because there is no one tool; there are a bunch of tools. So if the question is more specifically about one serverless tool, OpenFaaS: I think OpenFaaS is, in general, a pretty interesting paradigm. To my understanding, OpenFaaS comes from IBM, but I could be wrong. And I don't really have a strong opinion; I'm personally a big fan of Kubernetes, I want everything Kubernetes-based, and I'm not sure how Kubernetes-based OpenFaaS is. I've never used OpenFaaS, so I don't actually know what it really is; my understanding was that it's something IBM open sourced. So I don't know much about OpenFaaS.

Thanks, Nikhil. That was a wonderful session. Thank you so much. Thank you.