Good morning. No energy today? Good morning, everyone! Okay, how is it going? I think we are having a lot of technical sessions, and there are a few more in the pipeline that you can enjoy. Okay, now coming to my session: as the title says, I'm going to talk about how you can enhance your Node.js application using Kubernetes and CI/CD. I'm not going to stay in a theoretical context; I will share a case study with you: what the situation with the customer was, what the problem statement was, what the basic requirements on the customer side were, how we overcame them, and what challenges we faced along the way. So, the case study that I'm going to share with you today: we had a customer who was in the business of providing website services for conferences. They were a big organizer of conferences in the medical sector, they operated worldwide, and on average they were handling 100 conferences per year. That is a big number to start with, and they were targeting even more and growing at a large scale. The issue they were facing was that for each conference they hosted, the person or unit for whom they were organizing it had a specific set of requirements. They would say, "the website for the conference you are organizing for us should be designed specifically for us." All the core features would be the same, but one customer would say the payment gateway needs to change, and another would say, "we need this specific functionality to be part of the website when you organize our event." So there were a lot of changes coming in constantly that they had to handle. At a high level, the issue was that they were growing at a large scale.
They also needed to be agile about the core changes and requests coming from the customer side. And they had a team of 12 developers who handled all the work for them: they were responsible for development, for testing, and for managing all the operational work once the code was in the production environment. So they were facing high customer demands, because the requirements kept changing, and every change they made in their code had to be reliable; it could not be that they made some change and their production environment broke. Coming back to the actual requirements: they wanted that whenever they hosted a conference, all the features that specific customer was looking for would be hosted as a separate entity, so that there was clear segregation between all the customers. Another requirement they had was that onboarding should be easy: if I need to host a new customer today, it should take very little time to onboard them. Apart from this, the overall time taken whenever there is a code change in between, for any set of requirements, needed consideration: changes should be deployed quite frequently, and the system should automatically do all the testing, then the deployment, and then the monitoring — end-to-end automated. Apart from this, if there is any kind of issue in the production environment, my system should be capable of rolling back as quickly as possible to a desired state, where I can say, "just roll back to this specific version of my code."
And apart from this, they needed a system through which they could monitor the health of all the tenants they had. It should be end to end: how my testing was done, what the phases of testing were, how many intermediate environments the code was deployed to and what the state of each was, and finally how my code is performing in production — performance in the sense of CPU and memory — and, if there is a specific logging requirement, it should be capable of handling all of that too. So they needed visibility around that. And, as we talked about, they wanted less overhead, in terms of time and complexity, to deploy any tenant. Apart from this, because they were thinking of moving from a monolithic architecture to a microservices-based architecture, they were hoping they could containerize their application and host it that way, so they wanted some services around a Docker container strategy. And, as you can see, their whole portfolio was related to open source: they wanted to use Docker, they wanted to use GitHub, and they were also going for MongoDB. So they wanted to leverage all these existing services, and they wanted a management tool for their deployments — something like CI/CD they could leverage. This was their desired state. When we talked to the customer, they said: although we need to grow our business as soon as possible, we want to keep all the basic open-source tools we are already using; on top of that, make something that is scalable enough, fully automated, and that lets us grow our application and customer base as fast as possible. That was a brief summary of the requirements. Then we talked to the customer further. Again, the business is not what we are going to change, because the business needs to grow.
If the business needs to grow, we can't change anything around the business. The one thing we could change was the code — all the engineering practices we were applying around it. We had our Node.js application, we had our Git repo, and we had MongoDB as the back end. So what could we do around these services to implement automation, so that we could deploy as fast as possible and with a lower error rate? We explored this in two segments: one is the developer team, the other is the operations team. These were the two core pillars of the overall architecture and of the customer's people organization. We identified what automation we needed around the development work and what automation we needed around the operations work, and we came up with the DevOps automation we had to build. On the developer side, first, planning would be done with whatever tool they were already using. Next is how they work with the code: the customer had already indicated they were going to use GitHub, so GitHub would be the repo, and they would commit all the code there with different branches based on the feature, the customer requirement, and the segregation per customer — or, you could say, per tenant. Once they are done with the code, it moves to the build phase; for building they were using Maven. So we drew this architecture and identified which technologies are current in the market, which are most widely used, and which already exist in the customer environment, and whether we could leverage those or needed something more around them.
So at a high level we identified: for code we already have Git, for build we have Maven, for testing they use a specific framework, and on the DevOps side, for deployment, they said they needed Docker. They had already identified that they needed some CI/CD automation and that they wanted to use Docker — but how would they leverage all the Docker-related benefits? We worked with them on whether they were open to open source or wanted to go proprietary, and they said they wanted to explore open source. And the best open-source technology we found for managing their Docker container services was Kubernetes. Once we had identified all the tools and technologies for the customer, the Kubernetes-specific question was: we are done with the coding part, and we will handle code, testing, and deployment with CI/CD — now how are we going to manage the Docker side? So we came up with a Kubernetes CI/CD design, and in it we identified a few segments we had to work on. For example: sometimes, when ticket sales start, your peak memory or CPU utilization will be at the higher end, so your application should be able to auto-scale, right? And when the sales are done, it should automatically shrink back, so there is no additional cost overhead for the customer. That was the core feature: my application should auto-scale out and squeeze back in as required, so that there is no additional cost. Another question: what if one of my application instances goes down — is the system capable of booting it up again? That was another feature we explored with Kubernetes. And the main one: can we have our overall infrastructure as code, so that in the future, if I need to add one more node to balance the load,
can I do this dynamically, through code? Because if I can write it as code, I can automate it, right? Those were the key highlights we identified for any Kubernetes CI/CD pipeline; these were the core features we had to deliver. Then finally we came up with this particular architecture: how our developers are going to operate, then how we are going to deliver the solution, and then how we are going to operate it. At a high level, what we did: we had Git as the source-code repository, and we leveraged Azure DevOps for the pipelines — you can use Jenkins or another solution, whatever your choice is. We did the automation around it: once code is committed to GitHub, my DevOps pipeline should automatically identify that new code has been committed and, based on the configuration, identify which particular customer this code relates to, and build all the infrastructure based on that. So it should automatically build my infrastructure as code; then, once the code is ready, package it in Docker, host it on the Kubernetes environment, and finally move it to production. For the infrastructure-as-code automation, we initially started with ARM templates, because we were using most of the services on Azure, like Docker and Kubernetes. Later on, because the customer needed to be portable enough that, if they are on Azure today, they can move to any other infrastructure — any other cloud player — tomorrow, we built their infrastructure as code on Terraform, so that it is quite easy to port from one cloud provider to another.
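To make the infrastructure-as-code idea concrete, here is a minimal sketch of what a Terraform definition for an AKS cluster can look like. This is illustrative only — the resource names, region, node count, and VM size are my assumptions, not the customer's actual configuration:

```hcl
# Illustrative sketch, not the customer's real configuration.
resource "azurerm_resource_group" "conf" {
  name     = "conference-platform-rg"     # assumed name
  location = "westeurope"                 # assumed region
}

resource "azurerm_kubernetes_cluster" "aks" {
  name                = "conference-aks"
  location            = azurerm_resource_group.conf.location
  resource_group_name = azurerm_resource_group.conf.name
  dns_prefix          = "confaks"

  default_node_pool {
    name       = "default"
    node_count = 3                        # adding a node later is a one-line change
    vm_size    = "Standard_DS2_v2"
  }

  identity {
    type = "SystemAssigned"
  }
}
```

Because the cluster is described declaratively like this, "add one more node to balance the load" becomes a one-line change plus a `terraform apply`, rather than a manual operation — which is exactly the automation benefit described above.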
And finally, for observability and clarity on what is happening behind the scenes and how things are performing, we used Azure Monitor, and we provided a plugin so that in the future, if they want to switch to any other monitoring tool — say an ELK stack, or, within Kubernetes, Prometheus and Grafana — they can easily switch to that. So that is the high-level architecture we defined for them, and now let me show you a quick demo of the pipelines we designed. This is my continuous integration pipeline, in which you can see multiple steps: first it prepares the job, and then it installs Helm — Helm is a package manager used in Kubernetes to do automatic resource provisioning within the cluster. Then we check out the code, create an Azure Container Registry, build the Docker image, and push the Docker image. So all the steps required to host an image in the cloud, we did in the continuous integration part, as the CI pipeline. With the CI pipeline I was able to push code and build the image, and the final Docker image landed in my Azure Container Registry. So as an output I have a Docker image; as a next step I need to host this Docker image on the Kubernetes cluster, and for that we had the Kubernetes deployment pipeline. I'll show you — this is again my build pipeline, and this is, at a high level, what was happening behind the scenes. My CI pipeline actually creates a VM that I have specified needs to be Ubuntu. On that Ubuntu machine it first builds or installs all the required packages: for example, if I need to create a Kubernetes cluster with the Azure CLI, it installs the Azure CLI, and if I say I need to run kubectl commands for Kubernetes, it installs that too. All the processing happens on this particular Ubuntu machine, and then, as you can see, it was installing the Helm package and copying the ARM template.
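The CI stage just described — Ubuntu agent, checkout, Docker build-and-push to the container registry, Helm install — can be sketched in an `azure-pipelines.yml` roughly like this. The service-connection, repository, and file names are placeholders I have assumed, not the actual demo values:

```yaml
# Hedged sketch of the CI stage described above; names are placeholders.
trigger:
  branches:
    include: [ main ]

pool:
  vmImage: 'ubuntu-latest'                  # the Ubuntu build agent mentioned above

steps:
  - checkout: self
  - task: Docker@2                          # build the image and push it to ACR
    inputs:
      containerRegistry: 'my-acr-connection'  # assumed service-connection name
      repository: 'conference/node-app'       # assumed image name
      command: buildAndPush
      Dockerfile: 'application/Dockerfile'
      tags: |
        $(Build.BuildId)
  - task: HelmInstaller@1                   # make Helm available on the agent
    inputs:
      helmVersionToInstall: latest
```

Tagging the image with `$(Build.BuildId)` gives every CI run a unique, traceable image version, which the deployment pipeline can then reference.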
That ARM template — the Azure template — was later replaced by Terraform. After we were done with the CI pipeline, we proceeded with the CD side: how we could have continuous delivery around it. This was my continuous delivery pipeline, in which, for testing, we hosted our application only on the development environment, and these were the high-level steps. We had our application code as a container image — a Docker image — and we had our infrastructure code, the infrastructure as code. As a first step I had to create a Kubernetes cluster. Once the Kubernetes cluster is set up, I need to extract the resource information: which image I need to host on my Kubernetes cluster, what the parameters are to scale it up and scale it down, or whether it is a blue-green kind of deployment. Blue-green deployment here means that when you release a new patch, you first apply it to only 25 percent of the load; if it is successful on 25 percent of the load, you let it roll out to 100 percent of the load, so that there is no failure. Okay, so we did that, and after this we installed all the Helm packages and upgraded them. That was our continuous deployment pipeline. There you can also see all the scripts we had for authentication: it should run in a secure environment, so we had a service principal for that. On these scripts we made sure it is not open for just anyone to run; you need a service principal, so that there is proper authentication and authorization. We provided all that information in the inline scripts you can see there. Once we were done with the service-principal part, later on we applied all the changes, and in the inline configuration you can see the Kubernetes Tiller environment setup — we had the actual YAML files there. In Kubernetes we have YAML files through which we deploy our application, and we deploy the services accordingly.
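The 25-percent-then-100-percent rollout described above (often called a canary rollout) can be approximated in plain Kubernetes by running two Deployments behind one Service and splitting traffic by replica count. This is a sketch under my own assumptions — the names, labels, and image tags are illustrative, not the actual project YAML:

```yaml
# Sketch: both Deployments carry the label "app: conference-site", so the
# Service below splits traffic roughly by replica count — 1 of 4 pods (~25%)
# runs the new version.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: conference-site-stable
spec:
  replicas: 3
  selector:
    matchLabels: { app: conference-site, track: stable }
  template:
    metadata:
      labels: { app: conference-site, track: stable }
    spec:
      containers:
        - name: web
          image: myacr.azurecr.io/conference/node-app:v1   # placeholder image
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: conference-site-canary
spec:
  replicas: 1          # raise this (and lower stable) once the new patch looks healthy
  selector:
    matchLabels: { app: conference-site, track: canary }
  template:
    metadata:
      labels: { app: conference-site, track: canary }
    spec:
      containers:
        - name: web
          image: myacr.azurecr.io/conference/node-app:v2   # placeholder image
---
apiVersion: v1
kind: Service
metadata:
  name: conference-site
spec:
  selector: { app: conference-site }   # matches both tracks
  ports:
    - port: 80
      targetPort: 8080
```

If the canary misbehaves, rolling back is just scaling the canary Deployment to zero — which lines up with the fast-rollback requirement stated earlier.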
So we configured everything in the inline configuration. Once we were done with that, our next step was to run it, and if required we could also run the test cases we had written. Now I'll show you the repository — this is all about the CI/CD pipeline. This is my repository. At a high level, you can see we had the application and the ARM template: the ARM template is your infrastructure as code, and the application is your actual source code. In it you can see there is a Dockerfile. In the Dockerfile I was creating the Docker image for my Node.js application, saying it needs to be exposed on port 8080 and started with npm start. This was the initial stage; as I said earlier, as we moved further we made a lot of changes in this code, and the final version we hosted looked like this. There you can see we have an azure-pipelines.yml. What we did was, instead of keeping everything hard-coded in the CI/CD pipeline — so that in the future, if you need to add another step to your CI or CD pipeline, you can — we created this YAML file and committed it to our GitHub repo as well. Now my CI/CD pipeline picks up this YAML: instead of running the very same CI and CD pipeline again and again, it first checks whether there is any change in the CI/CD pipeline definition itself; if yes, it picks it up from there and then runs it. Apart from this, we created our AKS cluster with Terraform. This is the sample Terraform through which we created all the required resources that were part of the project, and again it is part of the GitHub repo. So if there is any change in the resources, then, based on the Git commit, my CI/CD pipeline and my Kubernetes cluster get to know through GitOps that there is some change in the code; based on the modification, it triggers the whole CI/CD pipeline and hosts everything again. That was another set of requirements we handled.
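The Dockerfile described above — a Node.js image exposed on port 8080 and started with npm start — might look roughly like this sketch. The base-image tag and file layout are my assumptions; only the port and start command come from the talk:

```dockerfile
# Illustrative reconstruction of the Dockerfile described in the talk;
# base image and layout are assumptions, the port and CMD are from the talk.
FROM node:18-alpine
WORKDIR /app

# Install dependencies first so Docker layer caching skips this on code-only changes.
COPY package*.json ./
RUN npm ci --omit=dev        # production dependencies only

COPY . .
EXPOSE 8080                  # the port mentioned in the talk
CMD ["npm", "start"]
```

Copying `package*.json` before the rest of the source is a common optimization: the dependency layer is rebuilt only when the dependencies change, which keeps repeated CI builds fast.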
So once we were done with all the CI/CD and monitoring parts, this was the final output. We actually reduced the provisioning time the customer had earlier from more than a week to 2.5 hours, because everything was automated; someone just needed to check whether everything was successful or not, although we had configured notifications for that as well. And then, for security, alongside the Kubernetes service we used Azure Key Vault: all the secrets, passwords, and RSA keys are stored there, so there is no security risk around them. And because we assumed that when resource consumption goes low the system should be capable of scaling down, we configured everything in the Kubernetes YAML. Overall, we saved around 33% of their run cost. Again, for automation we used ARM templates, and later Terraform templates. So that is the overall architecture we built; but on the Node.js side, these are the best practices we used. In GitHub, when you don't want something to be part of your project, you just use .gitignore. In the same way, because in the testing phase we have dummy data and test cases that are going to run, but we don't want that data to be part of the final production package, we used .npmignore for that. And when you use .npmignore, you need to be conscious about which file takes precedence, .gitignore or .npmignore — those are considerations you need to take care of. If you have a .gitignore in your code, the .npmignore still takes precedence; but if there is no .npmignore, then the .gitignore is what gets used. So those are a few best practices, and apart from this there was one more: everything they were building for the customer was version-managed.
Like, if they are running 10 conferences, they have 10 different tenants, and for each tenant they have a different code base; for maintaining that code base they had specific versioning attached. Now, when you run your CI/CD pipeline, by default it auto-generates the version number for you. What you need to do, when you have your version configured through the npm version command, is configure the same thing in your CI/CD pipeline; otherwise it will generate a new number every time, and you cannot correlate which customer and which build that new number relates to. So you need to come up with your own versioning scheme specific to the requirement. And apart from this, once you are done with your code, always try to have a consolidated commit and push it to origin once. So those were a few best practices around the Node code, and this is how we solved the customer's issues. I think now I can open the floor for questions. Any questions around this?

How was your development environment? Are you using Kubernetes in your development environment?

For the development environment we had normal developer machines, but for testing their code they had Minikube.

Okay, Minikube. Yeah — I ask because we are using Minikube for the development environment rather than full Kubernetes.

That's fine, because in the end the underlying machinery is the same: if everything runs on Minikube, you are good to go for an actual Kubernetes cluster environment.

Okay, thank you. In my architecture I had four slaves running, and there was a master server also set up, for me to access APIs, because the API client I was using restricted me to 100 requests per minute. So I was forced to use an architecture with four slaves and a master server, and we also had a lot of problems when deploying it in a dockerized environment.
We were not actually able to set it up; even now we are still struggling with it. Is there any way you can handle that?

Sorry — you mean to say you had a master and slaves for Kubernetes?

No, I was working in a normal AWS environment. We had four slaves running because we had to send requests to the API from different IP addresses, plus a master server, and we were using a microservice to get the data from the master server. In such an environment we had issues deploying it using Docker: when we tried to dockerize it, the servers would crash out of nowhere, our preview server would also go down, and we were not even able to trace the whole thing, because — I don't know why — the server log didn't record it either. Is there anything that handles the case where multiple servers come into play?

For us this was not an issue, the reason being that when we set up our Kubernetes in AKS — sorry, I'm not sure how AWS handles it — in Azure, the master server is managed automatically, and the end user need not bother about which node is the master and how it manages the slave servers. And for all the API requests coming in, we have Azure Monitor, which has plugins to integrate with Prometheus and Grafana, so you get full traceability: you can see where a request came from and configure your custom metrics around that for monitoring. That was one way; and to route requests internally, as I said, we had a load balancer, and we also used API Management internally. I think in a similar way you must have something in AWS that can route requests based on the requirement — but there should not be a restriction like 100 API requests per second or so.

That was put forward by the provider, not by us.

Okay — by the provider, not by you.
Okay, so if you need to apply a restriction, that is customizable. Across the industry, because there can be hacking or security risks, providers always offer some restriction: you can restrict your requests to, say, 100 requests per second, not to be exceeded. But that is not a hard upper limit; it is a custom limit that you can configure. Thank you.
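A provider-side cap like the 100-requests-per-minute limit discussed in this exchange can also be respected on the client side, which avoids workers overrunning the quota and crashing. This is a minimal token-bucket sketch of my own — it is not part of the system from the talk, and the class name and injectable clock are illustrative choices:

```javascript
// Minimal client-side token bucket (illustrative sketch, not from the talk).
// Caps outgoing calls at `ratePerMinute`, so a provider limit such as
// "100 requests per minute" is respected before a request ever leaves the client.
class TokenBucket {
  constructor(ratePerMinute, now = () => Date.now()) {
    this.capacity = ratePerMinute;          // maximum burst size
    this.tokens = ratePerMinute;            // start with a full bucket
    this.refillPerMs = ratePerMinute / 60000; // tokens added per millisecond
    this.now = now;                         // injectable clock, useful for testing
    this.last = now();
  }

  // Returns true if a request may be sent right now, false if it must wait.
  tryAcquire() {
    const t = this.now();
    this.tokens = Math.min(
      this.capacity,
      this.tokens + (t - this.last) * this.refillPerMs
    );
    this.last = t;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

Each worker (or a shared gateway in front of the master/slave setup described above) would call `tryAcquire()` before hitting the provider's API and queue or retry when it returns false.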