Hi everyone, I'm Rajashree, and the title of my talk is Webhooks for Automated Updates.

Just a little introduction: I graduated last May from NC State and then moved to California to work as a software engineer. So it's been a year and a half of me working in container orchestration and Golang, and I'm really liking it. I got introduced to Kubernetes through one of my projects at work just a few months back, so I still consider myself fairly new to it. This is also my first ever talk at a tech conference, and at the end I'll try to leave a few minutes for any questions you may have. My Twitter handle is rajashree_28 and my GitHub handle is mrajashree. I work at a very cool startup called Rancher Labs. Rancher is a complete container management platform; some of you might have heard of it before. We are fully open source, so you can check out our code on GitHub under the rancher organization.

The goal of today's talk is to set up a continuous deployment pipeline to your Kubernetes cluster. Why do we need this? Say you're running the v1 version of one of your apps on your cluster. Then you make some changes to your code, and now it's time to update your app. Typically that involves a few steps: you get your code pushed and merged; once that's done, you build a new version of your code; once that build succeeds, you go ahead and update your app in the cluster. Now, if this process of updating your app is a frequent one, doing all these steps manually gets really time-consuming. And we don't need to do that, because we can automate all these steps with a pipeline that looks something like this.

This is how it will work. You make code changes and push them to, say, GitHub. That push should by itself trigger the build of a new image tag on some image registry. The image registry should then be able to notify us with all the information about the newly built tag. Notifications about events like this are usually handled by webhooks, so we need an image registry that has a webhooks feature. Then we need to develop some piece of code that can actually receive this webhook, retrieve the newly built tag from the webhook payload, and actually perform the update to our app running in the k8s cluster. This piece of code that receives the webhook and updates our app is what I'm going to refer to as the webhook receiver for the rest of the session, and we'll see how we can use it.

Before that, I just want to very quickly go over two Kubernetes resources that I'll be using for this demo. The first one is a pod. A pod is the smallest deployable unit in Kubernetes, and it consists of one or more containers that are tightly coupled. Each pod gets assigned a unique IP address that is shared by the containers within it. But pods are ephemeral, meaning if a pod goes down, or if the host it is scheduled on goes down, k8s won't know that it has to reschedule it again. That's why we don't run our applications directly on a pod in the cluster. We instead use a Deployment, the k8s Deployment resource, to manage the lifecycle of the pods that we want our app to run on. What the Deployment resource does is take as input the desired state of your app, which includes the image to be used for your app and the number of pods you want to run, and then it tries to make the actual state the same as the desired state.
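A minimal Deployment manifest expressing that desired state might look like this (a sketch with placeholder names, not the exact manifest from the demo):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                          # desired number of pods
  selector:
    matchLabels:
      app: my-app
  template:                            # pod template: what each pod runs
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: my-registry/my-app:v1   # image to be used for the app
        ports:
        - containerPort: 9001
```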
But along with the image and number of pods, you can also specify the manner in which you want to update your application using this Deployment resource. Now, in DevOps there are various strategies to update your app, referred to as upgrade or deployment strategies. We need to choose one of these strategies, and that's why we're going to take a look at some of them.

The first one is the blue-green strategy. In this one, if you want to update your app to a newer version, you bring up an identical production environment running the new version of your code, and once this new environment works as expected, you just get rid of the older one. This strategy guarantees zero downtime during the update, and it also provides a way to roll back. Let's see how. In this strategy, only one environment is live and handling all the requests at any time. At first, the blue environment is the live environment. Then we make some code changes, so it's time to update to the newer version, and we bring up the green environment to run the new version. Once the green environment is fully deployed and tested, we can start using it by simply configuring the load balancer to send requests there instead. Now we are running the new version. But if you come across any problems and would like to go back to the older version, you can do that by making the blue environment the live environment again. Once all the problems are fixed and you see that the green environment, with the new code, is working as expected, you can get rid of the blue environment. So now we have finally updated to the new version that was running on the green environment, and we did it with zero downtime. But there is a cost associated with doubling the resources for the duplicate environment, and that is one disadvantage.

The next strategy we'll look at is the recreate strategy. In this one, if you want to update your app to a newer version, any instances running the older version of your code have to be removed first, and only then can you create instances running the newer version. This is because for some apps the old and new versions of the code cannot run at the same time; one example is when you're doing data transformations to support the new version of your code. This is how the strategy works: at first we have three instances, all running version v1, and now it's time to update to v2, the new code. All three instances undergo the update at the same time, so as you can see, there are no instances available to serve any requests, which means this update incurs downtime. Once the update is done, we're on version v2 and the instances are up and running again. With this strategy we didn't have to double any resources, so there was no extra resource utilization, but there was downtime, and we usually don't want that.

The third and last strategy we'll look at is the rolling update strategy. Rolling update, just like the blue-green strategy, guarantees zero downtime, and it does so by only updating a certain percentage of instances at any given time, so there are always a few instances up that are still running the older version of your code and serving requests. For this strategy, your app must be able to support the old and new versions of the code running at the same time.
But unlike the blue-green strategy, rolling update does not require any additional instances for zero downtime, because of how it works. At first we have two instances, both running version v1, and now it's time to update to v2. Instance 2 undergoes the update first; in the meanwhile, instance 1 is still up, running the old version of your code and serving all the requests. Once instance 2 is done updating to v2, instance 1 undergoes the update, and again we have instance 2 up and running, this time with the new version of our code. Once the whole update is done, both instances are processing requests and running version v2. So as you can see, we updated to v2 with no downtime and no extra resource utilization.

So how do we use these strategies in our k8s cluster? The Deployment resource that we looked at earlier lets you specify one of these strategies. Right now it has two options: one for rolling update and one for recreate. For my app, I want to choose the rolling update option. Not only does it guarantee zero downtime, I also just need to specify the rolling update option in my Deployment spec, and Kubernetes will take care of the entire orchestration logic of the update; I won't have to implement any of it myself.

This is how I can specify the rolling update strategy. This is a sample manifest, which I hope everyone at the back can see on the left of the screen. The spec field is what accepts the desired state of your app. Under that you can see strategy, and I have provided the type as RollingUpdate. You can fine-tune the update with these two fields, maxUnavailable and maxSurge. maxUnavailable, as the name suggests, is the maximum number of pods that are allowed to be unavailable during the update, which we want to be, say, one. maxSurge is the maximum number of pods that are allowed to be scheduled above the desired number of pods. So if replicas, the desired number of pods, is three and maxSurge is one, we can have four pods scheduled during the update.

So how do we trigger this rolling update for our app in the cluster? There are manual ways to do it; you can run these commands. The first one is kubectl set image: you just provide the new image that you want your app to update to, and that's it. It will update your app, and it will use the rolling update strategy, because that's what we have specified. The second command is kubectl edit. It opens your Deployment resource in an editor and shows you its internal representation; you can go in there, manually change the image, and that's it, it updates the app. But running these commands by hand every time we want to update the app is exactly what we don't want, because we want to automate our continuous deployment pipeline. So let's automate it: the webhook receiver code that I spoke about earlier, when I was showing the pipeline diagram, is what we're going to use to automate this step as well.
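The strategy portion of that sample manifest would look roughly like this (a reconstruction of what's on screen, not the exact file from the demo):

```yaml
spec:
  replicas: 3             # desired number of pods
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one pod below the desired count during the update
      maxSurge: 1         # at most one pod above it, so 3 + 1 = 4 pods may be scheduled
```

And the manual trigger mentioned above follows the standard kubectl syntax, kubectl set image deployment/<name> <container>=<new-image:tag>.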
So let's go back to our pipeline diagram and get the different components in place. The image registry that I'm using is Docker Hub, because Docker Hub has the automated builds feature, meaning I can integrate it with any of my code repositories, and when some code is pushed, it will trigger the build of a new image tag. Docker Hub also has the webhooks feature that our webhook receiver code needs.

So what exactly will this webhook receiver do, and where will it run? The webhook receiver will consume all the information that is sent to us by the Docker Hub webhook. From that, it will find the image tag that was just built, and then, using this image tag, it will patch our k8s Deployment resource via an API call.

If you all can see the screen: the Docker Hub webhook call is an HTTP POST request with a JSON payload that looks like this. In our webhook receiver code, we need to expose an API endpoint that can accept this POST request and the JSON payload with it. Out of all of this, our webhook receiver needs only two fields to know which image was just built. Under push_data you can see the field tag, and under repository you can see the field repo_name; these two fields combined give us the full image name along with the tag that was just built. Our webhook receiver code will then make a PATCH API request to the Kubernetes Deployment resource, including this image in the patch body. For making this patch request, or any API request to our cluster, we can use any of the existing k8s API clients; there's one client for Golang, one for Python, so you can use whichever you want.

So where will we run this webhook receiver code? It can run anywhere: you can have it running on, say, AWS Lambda, or you can somehow run it within the cluster, or you can create a separate microservice which does this for you. But Rancher already has a framework for such webhook receivers in place. It's a Go microservice, and it provides webhook callback URLs which, when triggered, perform some predefined action. The reason I'm going to use Rancher's existing framework is that it runs outside of the k8s cluster, which means that even if I have multiple k8s clusters running, I can create the callback URLs for each of them using the same microservice.

This is what the framework looks like. In the code, we refer to the receivers as drivers, and every receiver or driver that we add needs to implement the webhook driver interface. These functions are there to make sure that the webhook is fully functional. For example, ValidatePayload takes as input, in our case, which deployment I want to update, and Execute actually makes the API calls to update the deployment after the Docker Hub webhook is received. I have added the new driver for the deployment update, and if you want, you can check out the code; I've given the link to that repository.

So now let's go back to this diagram and see what we have, using the webhook. First of all, the user makes a request to get a callback URL, and our webhook framework, the webhook receiver, returns this generated callback URL. The user then adds it to Docker Hub as a webhook. Going back to the continuous deployment flow: once this initial setup is done, the user makes some code changes and pushes them to GitHub. Because of Docker Hub's automated builds, this triggers the build of a new image tag. Then Docker Hub notifies us using the webhook, the webhook framework in response triggers the update to our deployment, and finally our app gets updated.
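To make the receiver's job concrete, here is a trimmed sketch of such a handler in Go, using the client-go library. The /v1/webhooks path matches the endpoint mentioned later in the demo, but the namespace, deployment name, container name, and kubeconfig path are my assumptions; this is not Rancher's actual framework code:

```go
// Sketch of a webhook receiver: accepts the Docker Hub POST, extracts
// push_data.tag and repository.repo_name, and patches a Deployment.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

// The two fields we need from the Docker Hub payload, e.g.:
// {"push_data": {"tag": "release-v2"}, "repository": {"repo_name": "user/repo"}}
type dockerHubWebhook struct {
	PushData struct {
		Tag string `json:"tag"`
	} `json:"push_data"`
	Repository struct {
		RepoName string `json:"repo_name"`
	} `json:"repository"`
}

func main() {
	// Assumes a kubeconfig at the default path; the Rancher framework
	// handles cluster credentials differently.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	http.HandleFunc("/v1/webhooks", func(w http.ResponseWriter, r *http.Request) {
		var payload dockerHubWebhook
		if err := json.NewDecoder(r.Body).Decode(&payload); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		// repo_name + tag give the full image reference that was just built.
		image := payload.Repository.RepoName + ":" + payload.PushData.Tag

		// Strategic merge patch that changes only the container image;
		// "default", "kube-r-update", and the container name "app" are
		// illustrative placeholders.
		patch := fmt.Sprintf(
			`{"spec":{"template":{"spec":{"containers":[{"name":"app","image":"%s"}]}}}}`,
			image)
		_, err := clientset.AppsV1().Deployments("default").Patch(
			r.Context(), "kube-r-update", types.StrategicMergePatchType,
			[]byte(patch), metav1.PatchOptions{})
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		w.WriteHeader(http.StatusOK)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

The patch touches only the container image; because the Deployment's strategy is RollingUpdate, Kubernetes performs the rest of the rollout on its own.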
So now it's time for a demo. This is my k8s cluster running; let's go to the dashboard. Over here you can see I've already created a deployment called kube-r-update, and I'll show you what that is. Can everyone see the screen? All right, I think I'll just increase the... I'll just use the -o flag to get a detailed description of it. Okay, so if you scroll up, the image that I've provided for my app is mrajashree/kube, and I'll show you what that app is doing. I've exposed port 9001. Under strategy, I've provided the type as RollingUpdate, with maxSurge one and maxUnavailable one, meaning at most one pod will be unavailable and at most four pods can be scheduled during the update.

Now let me show you the pods that are currently running for this deployment. You can see these are the three pods running for it; the status is Running, and they were all created some time back, around three hours ago. So when I actually update my app, new pods should be created in their place, and when we run kubectl get pods again, we'll see that they are quite new.

Let me show you what my app is doing. Along with the deployment, I'd also created a service to expose it. My app is a very simple Golang app: all it does is print a message, and right now it's printing this message, "This is an older release." My goal is to do the initial setup with webhooks, and at the end just change this message in my editor and do a git push, and that should by itself update my app in the k8s cluster.

For that, we need to set up the automated builds in Docker Hub. You can do that by just going to "create an automated build". You can link either your GitHub or Bitbucket account; I have already linked my GitHub account. Over here you can choose any of your existing repositories, and that's exactly what I've done to create this automated build, mrajashree/kube. It's linked to the GitHub repository that holds the app I just showed you. Now, under build settings you can define a certain set of rules which decide how your image tags will be named; I'll get back to these in a second. And over here is the webhooks tab; this is where you'll add your callback URL. So let me create that.

I'm going to provide the name and the namespace of the deployment that I want to update. And this environment ID: within Rancher there are different environments for different clusters, so since I'm using the existing framework, I'm providing the environment ID so it knows which cluster I'm working on. Let me create it. Okay, so I'm in this environment; if I go to the webhooks page, it shows me the webhook I just created, with the name I'd given it, and this is the callback URL. Within the code we call it the trigger URL, because it triggers an action. I'm going to copy this and add it to Docker Hub as my webhook. This /v1/webhooks endpoint corresponds to the Execute function that I showed you in the code snippet, the one that actually triggers the action. Now that is done.

Now this is my app, and I'm going to update the code. Let me just change this message to... okay, so I'm going to push my changes and create a tag for it. Okay, the git push has taken place, and if our pipeline is working correctly, I won't need to do anything else for my app in the k8s cluster to get updated. So let's go to our Docker Hub automated build: under build details you can see that it has already started building a new tag. If I click on it, you can see it is using this Dockerfile. This automated build is only possible if you provide a Dockerfile in your repository. In this one I'm just using the golang:alpine image, since it's relatively light, and I'm copying my code in and building the code inside the Dockerfile itself, because I don't want any compatibility mismatch issues, like an OS X binary running on Linux or so on.
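For concreteness, the demo app and its Dockerfile could be as small as the following sketches; the exact message, routes, file layout, and binary name are my guesses, not the talk's actual code:

```go
// A minimal sketch of the demo app: an HTTP server on port 9001 that
// prints a message. Updating the app means changing this string and pushing.
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "This is an older release") // edited to the new message before git push
	})
	log.Fatal(http.ListenAndServe(":9001", nil))
}
```

```dockerfile
# Sketch of a Dockerfile in the same spirit: build inside golang:alpine so the
# binary matches the container's OS (no OS X/Linux mismatch). Assumes a go.mod
# at the repo root.
FROM golang:alpine
WORKDIR /src
COPY . .
RUN go build -o /kube-app .
EXPOSE 9001
CMD ["/kube-app"]
```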
Also, I'm going to run this command; it just keeps running kubectl get pods every 0.5 seconds, essentially watch -n 0.5 kubectl get pods. The reason I'm doing this is that I'm hoping we'll be able to see the rolling update take place: one pod getting terminated, one pod getting created in its place, and so on. Usually these updates happen very fast within k8s, so I was thinking we might not be able to catch those transitions with this command, but we can. As you can see, this pod which was running earlier is getting terminated now, and in its place two pods are in the ContainerCreating state; that's because, as we saw, at most four pods are allowed to be scheduled, so one of these pods is going to take the place of this lvwg7 pod. I think this goes on for a while: right now we have four pods running, and then two pods are getting terminated. So this process goes on. I'm just going to run my kubectl get pods command again, and as you can see, all three previous pods have been deleted and their place has been taken by these three new pods. All of them are Running now, and they were created just a few seconds back, so they are new pods with the new app.

So this is done; we saw the rolling update actually take place, one pod terminating, one pod being created. And to make sure that our app has actually been updated, I am just going to go to the browser and refresh, and I hope that it prints the new message that I had changed it to. And it does: this is the new message that I had added, "Hello KubeCon people." So our app has been updated without me having to do anything; I just had to do a git push, and that's all. The setup of the pipeline is done, and the demo is also done. So thank you. My Twitter is rajashree_28, and I am on GitHub as well. If you have any questions...

Yeah, Jenkins has continuous integration; it will build your new image and all, and I think, as you were saying, you can also add scripts to do the update. But I was using this webhook receiver because with Jenkins you would need to specify which deployment you are dealing with, and maybe provide your kubeconfig every time the job is built. An easier way to do this is to run a microservice which already accepts your kubeconfig, so you don't have to do anything extra. So you can use Jenkins for building your image. And yeah, I am familiar with Jenkins: you can have automated builds, and you can have webhooks as well that notify you when something has happened. But for the receiver part, I was not sure if you can provide your kubeconfig and execute all those commands there. The idea is just to have this automated deployment pipeline in place; you can replace these components with anything, and that's why at the beginning I had just shown "some image registry" and "something to trigger that update". So this is one way to do it.

So I meant that within the webhook framework there are a few other drivers that do similar actions, and there might be other frameworks too, but the framework that I am using is on GitHub, so maybe you can take a look at that.
The slides? Yeah, they're available there, but I can put them on GitHub as well, I guess.

Oh yeah, sorry, so as I said, the receiver can run anywhere; the reason I put it in Rancher's framework is that we already had that framework in place, so it was convenient.

Right, so over here, if I go to my cluster, I have already set up access control. Well, this cluster is running in Rancher, so it's using Rancher's access control, but I believe, and I'm not so sure, that with RBAC you would be able to handle that. I have created API keys, and while creating the webhook I am passing the API key that was created for my account.

So, other Docker registries: we are adding support for that, and I believe it shouldn't be that difficult. I haven't taken a look at it, but yeah, we were planning on working on that. For the rest of the room: he was saying that they have been using Spinnaker for updating the deployment, but it doesn't support insecure registries, and asking if this can be used for that. Yes.

Yeah, we do have testing stages. Now, the thing is, it's a sample pipeline that I showed, but you can modify it, as I said, by replacing the components or adding new stages for testing and so on.

As for blue-green deployment: this is just using the built-in Kubernetes Deployment resource, and in the documentation I didn't see blue-green as an option, so no.

Oh, I think I forgot to cover that; I was just going to go over this. Yeah, as you can see over here, Docker Hub itself tells us that just leaving this field blank will target all tags. The tag name is just prepended with "release-", and as you can see, the tag that I had pushed was "kubecon", so it tagged it with the "release-" prefix. Yeah, it just uses that rule.

I guess that's it. Thank you.