Hi everyone, my name is Nikhil Barthwal, and I'm going to be talking about Knative, which is a Kubernetes-based framework for managing serverless workloads. The way I have structured this talk is that I have some slides and I have a demo, in a slot of about 15 minutes. A little more than half goes to the slides, and then I'll do the demo. The demo itself is quite big, so I won't be able to cover all of it, just parts, but you have the link here, so you're welcome to clone the repository and play with it in your own spare time. So, without further delay, let's get started.

Let me start with an introduction to Knative. What is Knative? As mentioned, Knative is a set of Kubernetes-based, open source building blocks for serverless. What do we mean by that? Kubernetes, and I won't go into its details, is a platform to build platforms: it's a great starting point, but it's never the end goal. Knative is a serverless platform built on top of Kubernetes. That's a simplified view; in a bit I'll show you the actual technology stack, what sits in between, what sits on top, and so on. Knative provides the basic building blocks for serverless, so the repetitive stuff that you'd otherwise have to build again and again is available to you in packaged form. We'll see what those building blocks are in a short while.

So, let's start with Kubernetes. Kubernetes is a big topic, but in a very small nutshell, it's a declarative way to describe your system: you describe your system's desired state, and Kubernetes is a platform that makes sure that state is achieved and, whenever the system deviates from it, restores that state. It's very popular, and I could talk for hours and hours about Kubernetes, but I'll spare you that for now. Essentially, Kubernetes handles a bunch of things underneath: scheduling, scaling, load balancing, and such. Think of it as an operating system for your data center, an operating system for your cluster: you have a bunch of machines, you have pods, you have containers running in those pods, and Kubernetes handles all of that underlying stuff.

And it's available for a wide variety of platforms. When I say de facto, it's probably by far the most popular container orchestration system, it has a huge ecosystem, and it's fully open source. You have basic Kubernetes, and then you have these flavored versions: Azure has Azure Kubernetes Service, Google has GKE, Google Kubernetes Engine, and Amazon, I believe, has EKS, Elastic Kubernetes Service. So the vendors have their own slightly vendor-specific managed offerings, or you can use plain vanilla Kubernetes. Like I said, it's open source, it's widely available on all major cloud platforms, and you can run it in your own private data center, everything. So it's the very popular, de facto system to run your workloads.

Now that we've talked about Kubernetes, let's switch gears and talk a little bit about the serverless model. What do we mean by serverless? Let's actually define it. In Kubernetes, there are two distinct categories of users.
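Before we look at those two categories, let me make that "desired state" idea concrete. Here's a minimal sketch of the kind of manifest you hand to Kubernetes; the names and image are illustrative placeholders, not something from the demo:

```yaml
# A minimal Deployment: you declare the desired state (three replicas of
# this image), and Kubernetes works to achieve and maintain that state.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app                # illustrative name
spec:
  replicas: 3                    # desired state: three running copies
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: gcr.io/example/hello:latest   # placeholder image
        ports:
        - containerPort: 8080
```

If one of those pods dies, Kubernetes notices the deviation from the declared state and starts a replacement. That reconciliation loop is the whole idea in one file.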
There are the operators, basically DevOps, SREs, or whatever you like to call them, who handle the infrastructure. And then there are developers, who are developing the applications and running them on top of it. So let me differentiate these two categories and talk about them separately.

First, the operational model of serverless. For operators, what serverless gives you is no infra management: the infrastructure is handled automatically. It autoscales, with all the nice properties of serverless; we'll talk about what those properties are. Security is automatically managed for you, so you don't have to worry about it. And by far the best thing about serverless: you pay for usage, you don't pay for capacity. In a typical container-based service, you host a container, and even if nobody is using it, you're still paying for it; you're paying for the capacity you've allocated on the cloud, not the actual usage. Here, you actually pay for usage, and what that means is automatic scaling up and down. We'll have some examples, and depending on time, I might have something in the demo to show what that means.

Then there's the programming model of serverless. Serverless is basically an event-driven platform: every time the state of the system changes, it generates an event, and you have services that respond to those events. And lastly, it's portable. Here I want to expand a little bit on what I mean by portable. A common criticism of serverless computing is that you get a degree of vendor lock-in. If you're using AWS Lambda and you want to port that system to, say, Azure Functions or Google Cloud Functions, it's not a simple one-to-one mapping; you might have to re-implement parts of it. The thing about serverless is that it handles all the infrastructure automatically for you, which is great, but understand that because all that infrastructure is handled for you, whatever programming you do on that serverless platform is tied very closely to that underlying infrastructure. So when you move from AWS to Google Cloud, or vice versa, because your underlying infrastructure has changed, your programming model actually changes a little bit too; Google has its own way of doing things, Amazon has its own, and they're different platforms.

So one of the common criticisms of serverless is that it locks you into a vendor, but the advantage of Knative is that it doesn't. Now, it does lock you to Kubernetes, yes; it has to lock to something. But it does not lock you to a particular vendor. The good thing about Kubernetes, like I mentioned, is that it's widely available on all major cloud platforms. So you don't really have to worry, "oh my god, I'm moving to this cloud, do they have Kubernetes?" Well, 99.999% they would; at least all the major ones do.
So once you're locked to Kubernetes, which is a de facto platform available pretty much everywhere, you have a system that is portable. And this is very unique to Knative: it is not true for AWS Lambda, and it is not true for Azure Functions. That's why I wanted to expand a little bit on portability.

Okay, moving on. We've talked about systems built from services; where are those services hosted? It turns out they're hosted in containers. Again, I don't want to talk too much about containers; it's a widely available technology, and I believe most of you are familiar with it anyway. But in a nutshell, containers are a way of packaging your code. Think of one as a mini virtual machine. You have base images, like images for ASP.NET, images for Python, a whole variety of base images, and then, for pretty much any stack, you have a Dockerfile, you build the container, package it, and you can deploy it.

The advantage of a container is that it's a self-contained system, and that's important, especially for portability reasons. What happens is, let's say I develop an application, I test it on my local machine, everything works, I deploy it to the cloud, and it doesn't work. Why? Because your application runs in an environment, and the environment changed. There may be parts of the application that depend on the environment, and when the environment changed, the application behaved in a different, often unexpected, way. This is a common problem in general. But the good thing about a container is that it's a self-contained unit. If I develop a container locally, test it, and then put it in the cloud, the entire environment needed for that application goes with it. That gives me a lot more confidence that whatever happens in my local testing will probably behave the same way in the cloud.

The second thing is that containers, and we'll talk about this in the next slide, are an industry standard; there's a clear trend in how container usage keeps growing. So, like Kubernetes, practically every cloud gives you some way of running a container. Again, I'm not locked into a particular vendor. I develop this application, package it in a container, test it on my local machine, and it works great. This application plus its environment, the entire thing, goes to, say, Google Cloud and runs there. For some reason I decide to move to Amazon? I take this container and move it to AWS. Taking it to Azure? Same thing. It's a self-contained unit you can treat as one piece and take anywhere, and the entire runtime of your application is packaged with it and travels together, so you have no unexpected surprises. That's the big thing about containers, and it ties back to the portability aspect of serverless.

Now, in the context of Knative and of this talk, when I say services, I'm talking about services packaged in containers. The common terminology in serverless is functions, but just FYI, I'm going to use functions and services interchangeably. So whenever you hear me say function or service, think of them as the same thing.
Serverless traditionally started as a set of functions responding to events, but it's basically the same thing: a set of services, a set of functions. So what you have here are serverless containers. If you're using AWS Lambda, you have functions; replace those functions with containers, and you have containers that behave in a serverless way. They get autoscaled, they scale up and scale down, and if nothing is using them, they scale down to zero, or to some minimum number of instances that you keep running; we'll talk about that.

Anyway, let's get back to Knative. We've talked about the underlying technologies, essentially Kubernetes and containers, the key things we need for Knative. Knative is basically a set of two, previously three, components: Serving, Eventing, and Build. If you notice, I've crossed out Build, and the reason is that Build was present in earlier versions of Knative but has now been deprecated and replaced by Tekton, which is a different project altogether.

In a nutshell, Serving is the component that routes your traffic; all the routing, and the scaling of your containers up and down, including scale down to zero, is handled by the Serving component. Eventing is a framework for managing events; remember, it's an event-driven model, and you have events that systems respond to. And Build was a way to get your code deployed into these containers. Build has been deprecated in favor of Tekton, a whole different project that I don't want to digress into. But Tekton is another interesting, completely open source project; you can search the web and you'll find a lot of links to it. Essentially, it's a serverless build system. Think of it as builds done in containers, pay as you go: when you're not using it, it scales down your build agents. Traditionally, with common CI systems like Jenkins, you have a Jenkins master and then you have agents, and the agents run all the time whether you use them or not, which is a waste of resources. So what Jenkins X and Tekton and systems like that do is say: fine, we'll switch the agents on when you need them and shut them down afterwards. It's serverless computing applied to CI/CD builds. That's what the Tekton project is.

Coming back to Knative: Knative is all the ingredients for serverless computing, and it serves modern development patterns. The way Knative started was as a sort of industry consortium: Google, Pivotal, Red Hat, SAP, and IBM to start with, plus a lot of other partners. All these big companies came together, brought interesting learnings from their practical experience, and all of that went into Knative. So it's actually well tested, and it is pretty awesome; we'll see in the demo how it works. The website is knative.dev. Let me very briefly go to the website; the Chrome browser is open: knative.dev. So that's the website.
And like I mentioned, it's an open source project, so you can go to the GitHub organization, and you have eventing, you have docs, you have the major components. Okay, let's move back to the slides.

So that's Knative. What was the motivation for Knative? Essentially, there's always a little bit of tension between operators and developers, between SREs and developers. This exists everywhere; it's not specific to Knative. The reason is that for developers, the motivation is to build new features: they want to ship code, they want new capabilities. Great. SREs, or operators, basically want stability. They don't want too much churn, because when you introduce too many changes, you destabilize the system a little bit; you introduce the possibility of something breaking. So to keep the system stable, the operator always wants you to release code slowly, in parts, with partial rollouts and so on, whereas the developer always wants to release code as fast as possible and bring in the new features. There's a natural conflict.

Side-stepping a little: if you look at the literature and read the SRE books, they talk about error budgets, and error budgets were created precisely to resolve this natural tension between SREs and developers. You're allowed a certain amount of downtime. From the SRE point of view, that's the maximum downtime they'll tolerate; for developers, that's the budget they have for releasing code, because every release automatically introduces some downtime: you're switching off one service and putting another in its place. So the error budget acts as a boundary between what is tolerable and what is not.

Coming back to Knative, you have the same tension here. Developers want to code. Developers tend to be very, very picky about their languages, tools, frameworks, and IDEs, so they want to use their favorite languages and dependencies, and they don't want to handle infrastructure. Operators, on the other hand, don't want to deal with hardware; that's one of the reasons they love Kubernetes, it's a great orchestration platform, and they want everything to be done automatically by it. But Kubernetes is not the right abstraction for developers; developers don't want to deal with operations.

So Knative essentially sits in between. It's on top of Kubernetes, which keeps the operators happy: underneath, it's still plain Kubernetes handling all the infrastructure. But it abstracts Kubernetes away for developers: all you have to worry about is getting your containers deployed. Like I said, containers are available for your language, your dependencies, whatever frameworks you use. So for developers, Knative means you write code, and you don't have to worry about building the Docker image, uploading it to the registry, deploying the service, or all the setup, logging, and monitoring: all the boring but repetitive tasks. That's Knative for developers. For operators, what Knative brings is that all the operational complexity is abstracted out.
And that is handled by Kubernetes, because that's what Kubernetes does. And, as I keep repeating, Kubernetes is universally supported by all cloud providers, so it gives you portability. Kubernetes is also a very extensible platform; I used that quote, "Kubernetes is a platform to build platforms." It's a great starting point, but never the end goal in itself. It has APIs with clear separation of concerns, so you can build more platforms on top of it. That's one of the reasons why operators love Kubernetes. We've talked about the portability aspect too: Kubernetes is offered by virtually every cloud service provider, Knative essentially codifies serverless on Kubernetes, and it has a broad user community; it's a very popular open source project with a lot of contributors, and you can look at the GitHub site and see for yourself.

So now let's look at the stack. So far I've been talking about platforms and components; here is the detail. You have Kubernetes, which is the underlying platform. On top of it, you have a service mesh that handles and routes traffic: Istio, which is the default and the one we're going to use for the demo, or Gloo, or Ambassador. And then you have the Knative primitives: Serving, Eventing, and Build, which I've crossed out because, as mentioned, it's deprecated. That's the Knative stack, and on top of Knative you can have several other platforms. You can have Google Cloud Run, which is basically a managed Knative offering; the two are very similar, except one runs on your Kubernetes cluster and one runs on Borg, which is Google's internal cluster, so you don't have to worry about anything, everything is automated and abstracted away. Red Hat has its own product, IBM has its own product, and so on.

Talking specifically about Google Cloud, you have three different offerings around Knative, and they're all compatible with each other. You have Knative, which is open source. Then you have Cloud Run, and Cloud Run on GKE, which has been renamed Cloud Run for Anthos. Cloud Run is a fully managed serverless container offering that runs on Google's internal cluster, which they call Borg; Kubernetes is, in some sense, the open source version of Borg. As for Cloud Run on GKE, or Cloud Run for Anthos: a lot of customers already have their Kubernetes clusters running, so they don't want a completely different product, they want something built on top of what they have. And then you have plain Knative. They all use the same underlying model, and what that means is I can take code that runs on Knative and run it on Cloud Run, or move it to Cloud Run for Anthos, seamlessly. What that gives you is the possibility of hybrid cloud, which is very popular in the industry. With these different offerings, you truly get a hybrid cloud where I can run part of my workload in my data center, part of it in the cloud, and move workloads seamlessly across.

So let's talk about Knative Serving. Some of these things might not make too much sense right now, but when I get into the demo and show how things work, a lot of it will become clear.
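One thing worth seeing before the details: the core object Knative Serving gives you is a single Service resource. Here's roughly the smallest possible one, as a sketch; the image is a placeholder, and the apiVersion may differ on older Knative releases:

```yaml
# A minimal Knative Service. From this one resource, Knative Serving
# derives the Configuration, the immutable Revisions, and the Route.
apiVersion: serving.knative.dev/v1   # older releases used v1alpha1/v1beta1
kind: Service
metadata:
  name: helloworld                   # illustrative name
spec:
  template:
    spec:
      containers:
      - image: gcr.io/example/helloworld:latest   # placeholder image
```

Applying that one file gets you a URL, autoscaling (including scale to zero), and revision tracking; every edit to spec.template stamps out a new immutable revision.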
So, what is Serving? Serving is basically the way your traffic is handled. It does all the autoscaling; the default is 0 to N, or 0 to infinity if you want to put it that way. That's the default, but you can change it: I can say I want a minimum number of instances, for example at least three, because one of the common problems with serverless is the cold start problem. To avoid cold starts, you sometimes want certain instances almost always running. Of course, that comes with the drawback that you're paying for them, but that's a choice you have to make. So Serving handles rapid deployment of serverless containers, and it handles configuration and revision management: every time you deploy a new version, you create a new revision. It's like immutable infrastructure; each revision has its own configuration, and you can even split traffic. What that gives you is the possibility of things like gradual rollout: you release a new version to 10% of your customers, and if everything works fine, you increase it to 20, 30, and so on.

Knative is deliberately designed as a set of loosely coupled components, not as one monolithic system. The reason is that everybody's requirements are different. Being a very open system, you have all of these components and you can mix and match them any way you want; you can plug parts in and out. For example, the autoscaler I mentioned can be tuned or swapped for custom code; the demo has some parts on that, and depending on time, I may or may not show it. You can connect your own logging and monitoring. Everything is very pluggable: you have a bunch of components, and you pick the ones you want to use and leave out the ones you don't.

Anyway, Serving is a high-level abstraction for your application. You have a configuration, and each deployment is a revision, so you have a record in history of multiple revisions. You have your service deployed in containers, and you can route traffic; by default, whenever you deploy a new revision, 100% of traffic goes to the latest revision, but this can be changed, and I'll try to show all of this in the demo where it will make more sense. Essentially, you can split the traffic between revisions.

So now let's talk about Eventing. What is Eventing? Much the same idea: Eventing is basically a framework to handle events, loosely coupled, because the whole of serverless computing is essentially responding to events; that's the architecture. What Eventing does is declaratively bind event producers and Knative services. You have a producer, you have a channel, you have services, with autoscaling, and it comes with a lot of event sources. It has an event source for GCP Pub/Sub, which is GCP's messaging service, kind of like Kafka for GCP; you also have Kafka and Kubernetes event sources. In fact, if I go to the website, you'll find a complete list of all of these event sources. There's a whole lot of common event sources, and of course you can write your own; nothing stops you from doing that. But let's see if we can get that list: knative.dev, the Knative Eventing component, the eventing functionality. Yeah.
So here you have the event source documentation. Wonderful, that's what I was looking for. There's an incomplete and always changing list of different event sources: AWS SQS, Apache Camel, a whole bunch of stuff. Most likely this should suffice; if not, you're always welcome to write your own, and it's always great to open source it to help other people.

Anyway, the Knative Eventing framework. As I mentioned: there are sources, which publish events, there's a broker, there are triggers, and services subscribe through them. Depending on time, I'll try to show some parts of the eventing framework so it becomes a little clearer, but that's basically how Eventing works. We've already seen the list of sources; again, it's an always evolving list.

Eventing use cases: why would you use Eventing? You can have a cron job importer to run weekly reports. IoT, which is becoming very popular. Pub/Sub-style messaging: you can connect to almost any Google Cloud service, or, if you're using AWS, you can use the SQS source, which is somewhat the same idea, and connect to different AWS services. So there are a lot of use cases.

Now let's move to Build. I won't go into details, for the reason that it's deprecated, so I'll be very, very brief. Before 0.8, we had Knative Build; it was replaced with Tekton. It was basically a way to go from your source code to container images, with build templates and service accounts to handle all that building and deployment. It has been replaced by Tekton, and Tekton by itself is a big project, so I won't go into its details, but essentially it has tasks and task runs; it's basically a Kubernetes-based CI/CD platform.

A word on the Knative community: it's a big community with regular releases, and you can see all the details at knative.dev. There's a lot of value in Knative: one-step deploy, autoscaling, managed workloads.

Now, at this point, I want to switch to the demo. The demo is on my GitHub; the link is mentioned right at the top, and if you go through my slides, it's there too. I'll be happy to share the slides. The demo is a big, full-day-workshop kind of demo, so I obviously won't go into all of it; I'll only cover parts, mostly Serving, because that demonstrates the core ideas. The slides are also in the repository, along with setup instructions. Very briefly on setup: there are scripts you can run that create the GKE cluster and everything else for you. In the interest of time I'll skip the setup part, since it takes a couple of minutes and I want to get to the other parts; I already have things set up here.

So let's go to the first demo. Let's start with Hello World serving. Okay, how much time do I have? About 15 more minutes, so I can cover a few things here.
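Since eventing may get squeezed out by the clock, here's the wiring I just described, sketched as YAML. Treat all the names as illustrative, and note that the exact kind and apiVersion depend on your Knative release; newer versions call the cron-style source PingSource, while releases of this era called it CronJobSource:

```yaml
# A cron-style source that emits an event on a schedule and sinks it
# straight into a Knative Service (names are hypothetical).
apiVersion: sources.knative.dev/v1
kind: PingSource
metadata:
  name: ping-every-minute
spec:
  schedule: "*/1 * * * *"            # standard cron syntax: every minute
  data: '{"message": "Hello Eventing!"}'
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display            # the consuming service
```

Here the sink points straight at a service; in the broker-and-trigger pattern, the source would sink into a Broker instead, and a Trigger would subscribe the service to just the event types it cares about.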
So let's look at the Hello World project. I have the Hello World code in both C# and Python, but I think Python is the better one to demo because everybody understands Python. So let me look at the Python code: helloworld-python. Very simple. I'm using Flask, which is a Python micro framework; it's a very simple thing. You have a Hello World handler: I read a variable TARGET, and it just returns "Hello World" plus the target. A very typical Hello World. My Dockerfile is basic stuff: I take the Python 3.7 base image (as I mentioned, you have these base images), install the framework, copy in the app, and start it.

What I'm now going to do is build this. I've created a project on Google Cloud, and you can see it's nikhil-barthwal-knative; let me zoom in a little so it's clearer. Okay, good, I think this zoom level is good. I'm actually running on a virtual machine, but you don't have to; you can run this on your local machine too. So I'm going to build this project. How? With a simple docker build. GCR is the Google Container Registry, so let's go to GCR in the console: Container Registry. Obviously, I've done a lot of builds before; this is an old image I built almost a month ago, so let me delete it and start fresh. Right now I have no images.

So I'm going to build it; that's the command for the build. Note that your project ID would be different, and this has nothing to do with Google Cloud specifically: you could use Azure Container Registry or any other container registry. Most of this is just open source; I'm using Google Cloud as an example, but it's vendor agnostic. Let me list the images... that's just the tag. Why is it not showing? Anyway, let's now push this image to the container registry.

Hmm, a permissions error: "you may have to authenticate your request." I think I have an authentication problem; I just created a fresh machine, so my apologies for having to deal with this live. Let me reconfigure the authentication method... "to authenticate your request, follow the steps in this one." Okay, I think this is what you need; login is this one. I don't know why it's not working, so I may have to walk over the demo instead; I think I have a setup problem here, and I have about ten-ish minutes left. One last attempt... okay, it works. Lucky me. Let's refresh, and you should see an image here. It's loading up. There you go: I just built an image, it was created 36 minutes ago, and I uploaded it right now. Wonderful, so I have an image now.

And what I'm going to do next is use this file, service_v1.yaml. Let's look at it. What does this YAML file say? Service V1 says: this is a hello world service.
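Based on what's on screen, the file plausibly looks roughly like this; treat it as a sketch, since the project ID placeholder and exact names are mine rather than copied from the repo:

```yaml
# service_v1.yaml (reconstructed sketch): deploys the image we just
# pushed and sets the TARGET variable the Flask app reads.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld
spec:
  template:
    spec:
      containers:
      - image: gcr.io/<your-project-id>/helloworld-python
        env:
        - name: TARGET
          value: "V1"
```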
It points to the image in my container registry, it deploys it, and it defines this variable called TARGET. Now I'm going to apply it. Let me hide this... okay, that's good. The service has been deployed. Now let's look: hello-world was just created and it's still deploying; it says "creating container, seven seconds ago." It takes a little while to deploy, so I'm not going to worry about it. Let's look at the Istio part of it: the gateway is there, and what I'm going to do is export the external IP of the ingress gateway.

So we had hello V1; let's look again at what we did. We had this TARGET variable defined as V1, and if you remember, in our code the response was "Hello" plus the target. So now I have a service running, I have the IP address, and I just ping it. It says "Hello V1." So first of all, good: we have the basic service set up.

Now, one interesting thing I should show: let's look at the pods that are running. This is the Istio stuff running, and this is our pod running. If I wait a little bit, you'll see that the status here, the one I'm highlighting as Running, will actually stop. And the reason it stops is because, hey, it's serverless. We deployed, it created this container, we pinged it, it was running; then it waits a little while, and when it doesn't get any requests, it scales down to zero. Let's see if it scales down; it takes a little bit, hold on. Let's look at it again. There you go: it was running, I sent a request, I waited a little, it didn't get any more requests, and now the status says Terminating. It's autoscaling. This is where the serverless behavior kicks in: it was running, it didn't get any traffic, it autoscaled down to zero, and now it's terminating.

Meanwhile, we're going to deploy a new revision. Let's compare the two: in the first revision we had service V1, where the TARGET variable was defined as V1. Now we deploy another revision of the same service, but with the target changed. First we ping it again: "Hello World V1." You noticed a bit of a delay there; that's the cold start problem, right? The container had been terminated, then it switched back on, and now it's running. Now I deploy... okay, configured. Wonderful. Let's have a look at what's happening. (Why can't I hide this window? I'll at least try to move it out of my way.) Okay: the status is Running, and it has two pods, two versions of hello world: hello-world V1 and hello-world V2. I deployed the new revision, V2, it spun up the new version, and now traffic goes to the new version. By default, every time you deploy a new revision, 100% of the traffic goes to that new revision.
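That default behavior is driven by the traffic block of the Service spec, which is exactly what we're about to override. Here's a sketch of the kind of 50/50 split that's coming up; the revision names are illustrative, not the exact ones from the demo:

```yaml
# Splitting traffic explicitly between two named revisions instead of
# sending 100% to the latest one (revision names are illustrative).
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld
spec:
  template:
    spec:
      containers:
      - image: gcr.io/<your-project-id>/helloworld-python
        env:
        - name: TARGET
          value: "V4"
  traffic:
  - revisionName: helloworld-v1
    percent: 50
  - revisionName: helloworld-v4
    percent: 50
```

Setting percent to 10 on the candidate revision instead would give you the gradual rollout pattern from the serving slides.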
Now we're going to change the configuration a little. Actually, let me show traffic splitting directly; I think traffic splitting is interesting. Let's look at this traffic-splitting version of service_v1.yaml and see what it is. You have the same hello world example, with the TARGET value, and you have a traffic section: the current revision, hello-world V1, gets 100%, and the latest revision gets 0%. (I have to redo this; I messed up my project change.) So let's ping it now. It says V2, because that was the version running. Now I apply this: I've deployed the new configuration, but the latest revision gets 0% of the traffic and the current one gets 100%. It's still showing V2 for a moment, because that's what we had deployed... okay, now it has changed. Good. See: current is V1, and that's getting 100% of the traffic. It was V2, and we changed it back to V1.

Now, let me look at the time. I don't have a lot of it, so I'll take a few shortcuts. What I'm going to do is a 50/50 traffic split between V1 and V4. Let me look at service_v4.yaml. This is service_v4.yaml; again, I need to change it to my project. Okay, so far so good. Alrighty, give it about 30 seconds; I think that should be sufficient. Again, traffic is at 100%. Now let's do the split. We have deployed V4, but we're still directing all traffic to V1, which is why you're seeing V1. Now we actually do the split: see, the current revision name, V1, gets 50%, and the candidate revision, V4, gets 50%; we tag them and say 50/50. So we split the traffic and apply this file. (The reason I made a mistake earlier was that I had misspelled my container; my apologies.) Okay, we've done the split.

I'll give it a little time; cold start again for V4. Let's see which pods are running; we should have plenty of them. Okay: V1 is being terminated, V4 is now running, configuration V4. So we have V1 and V4, two different revisions. Let's see what happens now. (Why didn't it work earlier? Oh, because it was probably still in the process of deployment; the container was terminating.) Now have a look at what I'm seeing: I run the same command, and sometimes it says V4 and sometimes it says V1. And the reason is the traffic split. I split the traffic 50/50, so 50% of requests go to V1 and 50% go to V4. Statistically, if I ran this command 100 times, I'd see V1 about 50 times and V4 about 50 times; of course, the actual numbers would vary a little. So let me actually run the command 100 times. There you go: here you see the traffic splitting.

Now, I have very little time remaining, so I won't go into the details of this next part. Let me close this and get back to the demo repository. The demo is big, and I obviously don't have time to show all of it, so I'll just walk over some parts. Here it also shows autoscaling; like I said, you can tune the autoscaler.
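The knob for tuning that lives in annotations on the revision template. A sketch, with illustrative values; the annotation keys are the standard Knative autoscaling ones:

```yaml
# Keep at least one instance warm (avoiding cold starts) and cap the
# scale-out, via autoscaling annotations on the revision template.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "1"    # never scale below one pod
        autoscaling.knative.dev/maxScale: "10"   # never scale above ten pods
    spec:
      containers:
      - image: gcr.io/<your-project-id>/helloworld-python
```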
You can set minimum instances, as in the minScale sketch above, so you avoid cold start problems: say, okay, keep at least one instance. Then there's the eventing hello world, which shows the eventing framework and how you create events from different sources. And let's look at a slightly more complex example, where you have a source, channels, and different subscriptions to different services. You apply these channels; there's an in-memory channel that I'm using, and a cron job source that just pings. If you look at the eventing example, I actually use a cron source event generator. Where is it? Yeah, under Knative eventing... look at the eventing folder in the talks. There you go: a simple framework. This is the service, and this is the source: you have a cron-based ping generator, running on a schedule.

So, like I said, please feel free to play with the demo. I'll wind up by giving you the links: there's a Slack channel, this is my Twitter handle, and you have my contact information, my email, my webpage. If you have any questions now, please ask; if they come later, you have my contact information, so please feel free to connect with me anytime. Thank you very much, and I hope this was useful. Thank you.