Hello and good afternoon, everybody. Thank you for coming to my session, I really appreciate it, on a Tuesday at five o'clock. I'm sure everybody's tired and wants to go to the happy hour or whatever event is happening, but I promise you this is going to be interesting. I know you've heard about Kubernetes from many speakers today, and this is not going to be the same boring presentation. My presentation, when I practice it, takes about 25 to 30 minutes, which I'm not going to do; that would be torture for you and me. I have a demo that should be about 20 minutes, which will be more interesting. But just to put things into perspective and context, let's go through a few of the slides. My name is Said Agrawal. I'm an expert application engineer, also known as a senior principal software engineer, with Discover Financial Services. And I see a lot of my folks here, so guys, hello, thank you for coming. So let's get into it. The agenda is pretty tight and I know I have 40 minutes, so without any further ado, let's get into it.

Okay, so for the agenda: we're going to do a quick introduction to Kubernetes. There are two slides on the architecture, which is a ships analogy; I think I'm going to skip that because it really gets into the details, but reach out to me and I can share the notes. Instead of that, I'll do a summarized version. Then I want to talk about the key features of Kubernetes, the benefits of Kubernetes, and some best practices to follow when you're managing your microservices. Then: why should you deploy your microservices using a Deployment and not bare pods? Then we do the deployment demo, and the deployment demo is very interesting; I'm going to show you how to use Deployments with different deployment strategies. We'll conclude, and I do want to leave some time for Q&A. Even if we don't have time left, I'm here.
I'm always around, you know, just catch me and we can do that. All right, so Kubernetes, being the buzzword, has made enormous noise in recent years because of the out-of-the-box features it provides to organizations. It has become the de facto tool for container deployments on the cloud for firms around the globe. It has opened a whole new era of innovation on the cloud for businesses today. So Kubernetes is an open-source orchestration platform that automates containerized applications' deployment, scaling, and management. We all know it was originally developed by Google but is now maintained by the CNCF. Kubernetes is extensively used in production environments to handle containers, and we know that as businesses continue to adopt cloud-native architectures, the need for scalable, reliable, and manageable application delivery processes becomes increasingly important. So Kubernetes can be a powerful platform for managing containerized microservices or applications at scale. What does that do? It allows teams to focus more on building and delivering high-quality applications faster, with easy-to-change configurations and versions. And because it can work anywhere, with any container runtime, on various infrastructures with different environments and configurations, you can basically use the same approach whether you're hosting it on your own laptop, in your own on-prem data centers, on a private or hybrid cloud, or with any of the public cloud providers.

So the slide that I want to skip is this one. I like it because, you know, you can understand the architecture of Kubernetes through the analogy of the ships and the port that controls all the ships and all the activities that happen. But instead of that, in the interest of time for the demo, we'll quickly summarize the architecture. I'm sure most of you know it. For simplicity, you know, we have a master node and a few worker nodes; for a highly available Kubernetes cluster that's usually not the case.
You'd have a few masters and thousands of worker nodes. So let's say one master node. The master node, which is also known as the control plane, its main job is to oversee all the operations around the whole cluster, right? Everything has to be up and running, in sync, and all that. It cannot do it alone; it needs these four main components. The first one that you see is the API server, which is the brain, the nervous system of the cluster. Any communication that we as users, or any other app, have with the cluster goes through the API server; it's an API. So if I want to create new pods, I talk to the API. If I want to change some configuration, I talk to the API. The scheduler is the one that says, okay, there's a pod that needs to be deployed on some node. It will schedule it; it will see what the pod needs, what its resource constraints are, whether it needs a special node, things like taints and tolerations. The controller manager is like the control loop. It looks for any nodes that are failing and spins up another node; it looks for any containers that are failing; it makes sure the current state matches the desired state, right? etcd is a highly available, distributed key-value database. All the activities, all the events that happen are logged in there, and Kubernetes uses the functionality of etcd to do checks on the cluster health and the configuration data. It is also used in case of cluster failure: you back it up and restore from there. On the worker node, if you see Docker, it doesn't have to be Docker; it can be any container runtime, because you're running containers, right? You need that software. The kubelet is like an agent that sits on every node and communicates with the API server; it gets instructions: hey, I need you to spin up a pod, right?
And it sends reports periodically to say the health of the node and the health of the containers. kube-proxy is the one that enables communication between different containers. So let's say you have a web app running in one container on node one and a database running on node two; how do they communicate? kube-proxy makes that possible. So that was quick. Let's move on to the next one.

So, the key features. Starting with service discovery: it's the process of figuring out how to connect to a service. Kubernetes service discovery finds services through two approaches: using environment variables, or using DNS-based service discovery to resolve the service name to the service's IP address. Load balancing identifies containers by DNS name or even IP addresses and redistributes traffic from high-load to low-load areas depending on the traffic congestion. Storage orchestration: Kubernetes natively provides solutions to manage storage. This feature allows automatic mounting of any storage type of your choice; it could be local storage, network storage, or a public cloud provider's storage. Secret and configuration management: this is a critical feature of Kubernetes that plays a significant role in enhancing the security of your applications. It provides a secure mechanism for storing confidential information such as passwords, API keys, or other credentials, by encrypting them and controlling access to them via RBAC. Moreover, Kubernetes also offers robust configuration management capabilities, enabling you to manage application configuration data efficiently. So basically you don't have to combine your application code with configuration code; it sits outside of that. Kubernetes, you know, allows you to use ConfigMaps for that. Automatic bin packing: this is one of the significant features of Kubernetes.
This is where Kubernetes helps in automatically placing containers based on their resource requirements, limits, and other constraints, without compromising on availability. So it can very well mix critical and best-effort workloads to drive up utilization and save resources. Self-healing, another feature of Kubernetes; I'd call this feature superhuman. Containers that fail for any reason are automatically restarted. If any node fails, the containers that were running on the failed node are redistributed to other nodes. Kubernetes will automatically stop any unresponsive containers if they do not respond to the user-defined health checks, and it will restrict traffic until the containers are ready. Automatic rollouts and rollbacks, and we'll actually see an example of that: this feature allows teams working on Kubernetes to define the state of deployed containers. They can define how to systematically roll out changes with ease and automatically roll back on failure, or in any case of emergency or alerts. So in order to do all this, in order to manage our containerized microservices or applications, Kubernetes uses a set of abstractions, and the one that you see here is the Deployment, which creates a ReplicaSet and pods, and the Service to reach the pods. Now that we understand what Kubernetes is and how it works...
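[Editor's sketch: the Deployment, ReplicaSet, pods, and Service chain just mentioned can be made concrete with a minimal manifest. All names, the image, and the replica count here are hypothetical, just for illustration.]

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app            # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:                # pod blueprint; the ReplicaSet keeps 3 copies running
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web
        image: nginx:1.25  # hypothetical image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web-app            # routes traffic to any pod matching the selector
spec:
  selector:
    app: web-app
  ports:
  - port: 80
    targetPort: 80
```

Applying this one file gives you the Deployment, the ReplicaSet it creates under the covers, the three pods, and the Service to reach them.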
Let's discuss the benefits of using Kubernetes to manage containerized microservices at scale. Scalability: Kubernetes is built to be highly scalable, which means you can easily handle many containers by deploying and managing them with ease. This makes it the perfect platform for applications that need to rapidly scale up or down in response to changes in demand. Resilience: Kubernetes is built to be very resilient, with features like automatic recovery and rolling updates that ensure your applications stay up and running even if individual containers or nodes fail. Very important, right? Consistency: it provides a consistent environment for managing containerized applications, which means you can deploy and manage them the same way regardless of where you're running them; remember I said it initially, you can do it on your laptop, it's the same consistent environment. Portability: because it's an open-source platform that can run on any infrastructure, whether on-prem or in the cloud, you can easily move your application, lift and shift, between different environments without having to worry about compatibility issues. Automation: Kubernetes automates various tasks involved in deploying and managing containerized applications, including scaling, monitoring, and load balancing. This can substantially decrease the time and effort necessary to manage your applications.
That's very important. Resource optimization: Kubernetes allows you to optimize your resource utilization by automatically distributing workloads throughout your cluster based on available resources. This can assist you in reducing infrastructure cost by ensuring that resources are not being wasted. Developer productivity: all of us are engineers, right? Kubernetes offers developers a uniform and standardized platform for deploying and managing applications. This simplifies the development process, allowing developers like us, or engineers as I call them, to concentrate on writing code instead of worrying about the underlying infrastructure. And last but not least, the ecosystem: we all know Kubernetes boasts a broad and dynamic ecosystem of tools and services that can be utilized to enhance its capabilities. This allows you to seamlessly integrate Kubernetes with other tools and services to create a potent, personalized platform that meets your precise requirements.

All right guys, I'm trying to rush through this so I can get to the demo, which is always the fun part for all of us. So anyway, while Kubernetes provides a powerful platform for managing our containerized microservices at scale, it's important to follow best practices to ensure that your microservices are secure, scalable, and reliable.
So here I've listed some of them. Namespaces: what are namespaces? They provide a way to divide a Kubernetes cluster into smaller virtual clusters, which can be used to organize your microservices based on their function or team. By using namespaces you can limit the visibility and access of resources to specific teams, so whoever doesn't need access will not be given access, and you can also prevent resource name conflicts, because names belong to the namespace. So if you have a pod or deployment with the same name in a different namespace, they won't clash. Use labels to select and manage your microservices. Labels are key-value pairs that can be attached to Kubernetes resources and are used to select and manage those resources. By using labels you can easily group and manage your microservices based on common characteristics; some of them could be a label with the release, which environment, which tier, whether it's a web tier or a database tier or a middle tier. Labels can also be used to control access using RBAC; that's very important. ConfigMaps: we already know ConfigMaps allow you to separate your configuration data from your application code. So if you make a change to the configuration, you don't have to rebuild your pod or redo everything, right? Secrets are pretty much like ConfigMaps, except that they have a purpose: they store sensitive data. Use probes to ensure that your microservices are healthy. This is very important, and you know, not easy to do, but they are important. Probes are Kubernetes resources that can be used to check the health of your microservices. By using probes, you can ensure that your microservices are running and responding to requests. Kubernetes provides you three types of probes: liveness, readiness, and startup. Liveness probes are used to check whether a microservice is still running: hey, are you still alive, right?
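[Editor's sketch: a liveness probe like the one just described might be declared in the container spec like this. The image, the health endpoint path, and the timing values are hypothetical.]

```yaml
# Container spec fragment: restart the container if /healthz stops answering.
containers:
- name: web
  image: nginx:1.25         # hypothetical image
  livenessProbe:
    httpGet:
      path: /healthz        # hypothetical health endpoint
      port: 80
    initialDelaySeconds: 10 # give the app time to start before probing
    periodSeconds: 10       # probe every 10 seconds
    failureThreshold: 3     # restart after 3 consecutive failures
```

Readiness and startup probes use the same `httpGet`/`periodSeconds` shape under the `readinessProbe` and `startupProbe` keys.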
Readiness probes are used to check whether a microservice is ready to receive traffic, and startup probes tell you: hey, I'm done initializing, I'm ready. Use autoscaling to scale. There is a resource called the HPA, the Horizontal Pod Autoscaler, that allows you to automatically scale your microservices based on resource utilization or custom metrics. By using autoscaling you can ensure that your microservices are always running at the right capacity, and you avoid over-provisioning or under-provisioning. Network policies, another very important one. We all know every pod can talk to every other pod in the entire cluster, right? Not a good thing. So that's when we use network policies to secure microservices. They provide a way to define network access controls for your microservices. By using network policies you can restrict access to your microservices based on the namespace, the labels, or other characteristics. What does this do? It can help you prevent unauthorized access or attacks on your microservices.

All right, so why should you use a Deployment to deploy your microservices? Deployments simplify updating hundreds of pods' container images by just updating the container image name declaratively. So basically, you know, in a highly available, real-world situation you don't have one or two replicas, you have hundreds, right?
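[Editor's sketch: that declarative image update is a single field in the Deployment's pod template; the image name here is hypothetical.]

```yaml
# Bump this one field in the Deployment and every replica is rolled for you.
spec:
  template:
    spec:
      containers:
      - name: web
        image: color-web-app:v2   # hypothetical new image tag
```

The imperative equivalent is `kubectl set image deployment/<name> web=color-web-app:v2`; either way, you touch one value instead of hundreds of pods.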
So imagine if you didn't use the Deployment abstraction but bare pods: you'd have to go to every pod's definition and change it. Not a good situation. That's when Deployments help you, and there are different deployment strategies that you can use. The first one that we see here is called rolling update. As in the slide, it will create the new pods first, the second quadrant, and then destroy the old pods, allowing no downtime for users. So let's say you were running four replicas: it's going to create the first replica of the new version, then kill an old one, and so on and so forth. So your users will not suffer any downtime. The second strategy is called recreate. Now this is the complete opposite of rolling. Recreate means it will kill all your pods first and then, you know, bring up the new ones. Please do not do this in production. Imagine if you have thousands of pods, right? I'm going to show you an example: bringing them all down while somebody is on your app, and it's very critical, high availability; you promised an SLA of 99.9999, right? And then you use this: somebody's on your app and the connection is gone. Imagine the kind of reputation you're building, right? This is for experiments; I don't know why it's there, but don't use it in production. Staging, development, experiments: fine. Blue-green: blue-green allows you to have parallel services running at the same time. Blue is the existing running one and green is the new one. The disadvantage is you have double the number of pods; the advantage is you get to test in production. Now, I always say, people say: before we go to production, we have a production-like environment. It's never the same. So if I get an opportunity to test in production, that is awesome, right?
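[Editor's sketch: the rolling update and recreate strategies described above are one stanza in the Deployment spec; the values here are illustrative.]

```yaml
# RollingUpdate: replace pods gradually, no downtime for users.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1         # at most 1 extra pod above the desired count
      maxUnavailable: 0   # never drop below the desired count
---
# Recreate: kill everything first, then start the new version (downtime!).
spec:
  strategy:
    type: Recreate
```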
So this kind of strategy will allow you to do that. Once you see your new features are working as expected, you can route your service to the new ones and then kill the old ones. The fourth one is canary. Canary deployments help you understand how new features will impact the overall system operation while containing the possible spillover to a small group of users. So I'm like, you know, I don't want to do a big bang like in blue-green or whatever; I want to start small. So 10% of my users will see the new features, right? And if the feedback is good, I may open it up to 20%, basically scale it up. But if the trend is downward, forget about it, we'll go back to the old version and redo things, right? Another variation of the canary deployment is the dark, or A/B, deployment. The dark kind of deployment mainly works great for testing features on a front end rather than a back end, as opposed to canary. So dark is also known as A/B. Users are unaware; they are treated as testers for a new feature; they are in the dark about their role as testers. But what it does for us: in parallel we can collect metrics to track the user experience, things like: how are the users interacting with the feature? Are they finding it intuitive and easy to use, or are they just turning it off? So you collect all those things, right? Cool. Everybody ready for the deployment demo? I am. All right, guys. Let's see if my playground is up and running. Come on. I always get scared with this; demo gods, right? Please behave today. So, okay, my playground is running. So just to show you, I have a multi-node playground. I'm going to do a get nodes quickly.
You'll see I have two nodes: a master node, which is the control plane, and node one, which is a worker node. So we're going to deploy some pods here using Deployments with different deployment strategies. For that I prepared a list of commands; that makes it always easy not to fat-finger and look silly. So I have some examples in my private GitHub, and I'm going to run up to here. Basically, you know, I'm just git cloning and going into the folder where the examples are. Then I'm creating a namespace; remember, that's one of the best practices, create a namespace. This one is for the Kubernetes demo, so that's the namespace. Then I have some YAML files, one of which is the rolling deployment. So I create a Deployment, which has a blueprint of the pods, and I'm saying I want 10 replicas, and I'm going to create a Service of type NodePort that can hit those pods, right? And then I just do a get to see how the deployments got created, and so on and so forth. So let's do it. I'm nervous, but let's see. All right. So the pods are getting created; some are up, so we'll just run this again one more time. I could do a watch on it, but watch never comes out, so one more. Yeah, so we got 10 replicas done, right? When I created the Deployment, if you notice, it also created a ReplicaSet under the covers. So if you do kubectl get replicaset, which is rs, there are 10 desired, 10 current, good to go. So now let me see what the app looks like. I've created a Service of type NodePort to hit the pods, and if you see, I exposed this port; with NodePort, you know, you can assign a port number from the 30000 range (30000 to 32767 by default), or one will be assigned to you; in this case I chose it. So to hit that I'm going to go here and say view port 30120. Yeah, so I got my app running, right? So now if I refresh this a few times and watch the last five characters, you will see them changing.
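[Editor's sketch: a NodePort Service like the one used in the demo might look like this. The service name, labels, and ports are assumptions based on the talk, not the speaker's actual files.]

```yaml
apiVersion: v1
kind: Service
metadata:
  name: color-web-app        # hypothetical name based on the demo
spec:
  type: NodePort
  selector:
    app: color-web-app       # must match the labels on the Deployment's pods
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30120          # must fall in the default 30000-32767 range
```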
What is that? Because I have 10 pods, the service is load balancing the load across all the pods, right? Easy peasy, fun, right? So now, weeks go by, two sprints have gone by, a new feature has come out, tested well, and I want to do a rolling update, right? Okay, let me quickly show you what the YAML looks like; this will put things in context. So my file, the one that I then apply, is this rolling update one. If you see the strategy line here, line 11, I'm using a rolling update. This means if I make any change to my Deployment, it's going to use the rolling update strategy. Now, instead of really going and changing the YAML file, I'm using an imperative command: hey, set the image on this deployment, change it to something else, in this case green, and then I'm going to check the status of the rollout. So let's do this, come on. So if you see, the rollout is happening: it's bringing up a new pod, killing an old one, bringing up a new pod. So what does that do? I should have gone to the browser, and if I hit it... so now I got switched to my new version, which is green. Awesome, right? But while that was happening, if I, as a user who doesn't know what's going on, came here, there would be no downtime; I'd slowly be switched over to the new version. All right, and if you look at the pods, the old ones are terminated and the new ones are running, and you can tell them apart by the age: the new ones are newer, those were older, right? So that was the rolling update strategy. Now I want to draw your attention to a different one, which is... okay, before that, rollback. So once you do a few changes to your deployment, and things are happening, but down the road you find that whatever rollout you did didn't go all that well. You can look at the history: in this case the first one was, you know, the initial deployment; the second one we just ran, where we changed the image; and this new one is not working great. So what can I do to roll back?
There's a neat quick command that you can run, and the reason I'm showing you all this is that as an engineer it's very important to really understand, you know, what you are getting into. This really helps you learn the whole technology; that's how I learned it. So what I can do is go here and say: rollout undo the deployment to revision number one, which is: go back to number one. I can pick it by the number. I hit enter, and if I go back and see... what should this color change to, anybody? Keep your fingers crossed. Yeah, I love it. Awesome. All right, so that was the rolling update. Now I want to show you the dangerous one, the one you shouldn't do in production, which is recreate. For that, what I'm going to do is first delete the deployment. When I issue this command it actually kills the deployment, all the pods, and the ReplicaSet, so you don't have to go and kill all of them individually, but I'll keep the service. So let's quickly look at the recreate one. In the recreate YAML everything is the same except this: I'm saying the strategy is going to be Recreate. The initial deployment will just say, okay, Recreate; this is the first one. For anything I do after that, we'll watch what happens. So let's go back to the playground. I'm going to apply this YAML file. Okay, so the deployment is there. Good. I always like to do a quick check, like a kubectl get, after any change you ever make. So yeah, the pods are running, yep, I've got 10 of them, and this one should still be blue, because I only changed the strategy. Now what we're trying to do here: I do the same thing and change to the new version, which is green. Let me actually do the rollout; go to the playground. So while this is happening, I'm going to go here, right, and hit refresh. Boom, connectivity lost, because I'm doing recreate; I'm killing all the pods.
The new pods are still coming up. Meanwhile, I'm on the site and I'm like, whoa, what happened here? So if you do this in production with thousands of pods: bad experience, right? You don't want to do that to your users. Eventually it will come up, but it may take, I don't know, maybe a minute or two, and I'm sitting here frustrated. Right, so that was recreate. Don't do that in production. We'll just give it a second. Are you guys enjoying this? All right, I'm just buying time for this to come up. All right, so it came up, and you know, I've switched to my new version, but I lost connectivity for a few seconds here and I'm not happy. So, all right, now we're going to kill the deployment again to demonstrate blue-green. Okay, so recreate is done; I'm going to switch back to the rolling update. I'll bring up my original deployment with rolling update, because that is a good one; make sure they're up and running. Okay, cool. Now, okay, so now I'm doing blue-green, right? For blue-green, I'm going to create a new deployment. Let's look at the deployment file quickly for version two. If you look at the rolling update deployment we had, I am changing the image version right here, and I'm creating a new one, because I don't want to disturb the blue, right? Remember, in the picture the blue is running, people are using it in production; this is green. So I want to create a new deployment and a new service to test them out. So I'm going to go to my playground, go back to the commands and say: hey, I'm going to create a new deployment, which I'm calling version two, and the same thing with a new service, because nobody knows about that service except me; I want to test it, right? And now if you run the same get deployments commands and everything else, you're going to see there are two services. Now can somebody tell me how I hit the second service? The first service is still running on 30120. Sorry.
I don't want to disturb that; that's the blue one, people are using it. I mean, I don't want to disturb my users there; they're having fun. But I want to test this, because I've now deployed double the number of pods. So if I go to my playground, you notice something: I gave it a new NodePort, right? So this is the one that I want to use. So what I can do, to test the new one, is come here, go and say, okay, hit this port, and this is green. So I can sit and test it all along; nobody knows about it, and the blue is in production. Happy, you know, doing this. Once I'm satisfied with this, I want to reroute my service to the new one, and the way you do that is, all right, in this case we'll just do a little editing, which is this one. So I have my service definition file here; I'm going to open the rolling update service YAML. So remember I talked about labels and how they group resources. This is how the service works: hey, any pods which have these two labels, that's where I'm going to send the traffic. Right now it's sending to blue, but I'm going to change this to... can anybody tell me the label? Green, okay. All right, so to make the change on the service, I'm going to run kubectl apply -f on the rolling update service YAML. Before I do that: this one, not this one; I don't care about that one, I've tested it. This 30120 should now become green, because I'm routing all my traffic to that, and keeping my fingers crossed, I go here and... hey! So I have directed all my traffic to the green pods. But as a good step you always want to delete the old deployment and the old service, because they don't serve any purpose to you anymore. So you want to do a cleanup: kill that, kill that. So now you have a new deployment with version 2, and the original service is now pointing to that. So that was blue-green. Time check: we've got a few minutes more. I'll do the canary. Is anybody interested in seeing canary? Yes? Yes? All right. All right, I like that.
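[Editor's sketch: the blue-green cutover just shown is a one-line change to the Service's label selector. The names, labels, and ports here are assumptions based on the demo.]

```yaml
# Before the cutover the Service selects the blue pods...
apiVersion: v1
kind: Service
metadata:
  name: color-web-app      # hypothetical name
spec:
  type: NodePort
  selector:
    app: color-web-app
    version: blue          # ...edit this one value to "green" and re-apply,
  ports:                   # and all traffic on 30120 shifts to the green pods
  - port: 80
    nodePort: 30120
```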
So for canary, remember I said we ramp it up slowly. First, because I have this new deployment, I'm going to say: hey, scale it down to nine. So again, imperatively, I can issue this command, and it will say, okay, I'll scale it down to nine. And to check that, let me just copy this. So there are nine pods running and one is terminating. Now I want to bring up the new version. So this one was our green, right? Now let's say our new version is blue, just because I have those two examples; pretend the new version is pink if you like. So I have this kubectl scale; let me see if I have the color web app in there. Let me go back and make sure I have that YAML in my GitHub. Yep, I have it; good enough. So one replica; I'm going to start with one. All I have to do is just kubectl apply that. Okay. All right. So now if you count the number of pods you should have 10. But see what's going on: it's the same service, so if I do a few refreshes you should see a blue sometimes pop up. Come on now. There you go. So I have one blue pod and nine green pods. So now you say, okay, I am satisfied that the 10% is working great; I'm going to go and scale this one down to eight, right? And I'm going to scale up the other, because I have that deployment now. So I'm going to go back to my commands, and I can now use this to scale the new one to two, so the total number of pods stays the same. And if you go to the browser, you should see two blues with different numbers: one... same number... two... yeah, awesome. So that's how you can keep scaling one down and the other up, and that way, you know, you're using a canary, slowly ramping it up. So that was the canary.
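[Editor's sketch: the canary split above can be modeled as two Deployments whose pods share the label the Service selects on, scaled 9 to 1. All names, labels, and images are assumptions based on the demo.]

```yaml
# Stable version: 9 replicas (~90% of traffic)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: color-web-app          # hypothetical name
spec:
  replicas: 9
  selector:
    matchLabels:
      app: color-web-app
      track: stable
  template:
    metadata:
      labels:
        app: color-web-app     # the Service selects on this shared label...
        track: stable
    spec:
      containers:
      - name: web
        image: color-web-app:green   # hypothetical image
---
# Canary version: 1 replica (~10% of traffic)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: color-web-app-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: color-web-app
      track: canary
  template:
    metadata:
      labels:
        app: color-web-app     # ...so it load-balances across both versions
        track: canary
    spec:
      containers:
      - name: web
        image: color-web-app:blue    # hypothetical image
```

Ramping up is then just `kubectl scale` on the two Deployments, e.g. 8 and 2 for a 20% canary.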
I don't have the dark, or A/B, demo, because that applies more to the UI. So that was the end of the demo; I'll go back to my presentation. And, sorry, you're not supposed to see my notes, all good. So in conclusion: Kubernetes is a powerful platform for managing containerized microservices or applications at scale. It provides a rich set of features for orchestrating, scaling, and managing containerized microservices, making it an ideal platform for modern cloud-native applications. Some of the features we discussed were service discovery, load balancing, configuration management, secrets management, automation, resiliency, rollout and rollback, automatic bin packing, self-healing, etc. With Kubernetes you can simplify the deployment and management of containerized microservices, ensuring that they are always available and responsive to incoming traffic. Thank you for your attention, and I hope you found this presentation informative. Thank you, thank you. So now we'll open it up for questions, and hopefully I'll have answers, but I have experts in the house who can help me. So, I need reading glasses to read, but when I look at you guys you're all fuzzy, so it's just weird, right, doing this. I'm going to say a total of four or five years. But you know, when I was introduced to it back at a Microsoft conference, it just went right over my head; it was making no sense. Then I slowly got into it, but the only way you can learn is to get your hands dirty. So, you know, people say, oh, we have a cluster, we're running it in some managed Kubernetes cluster, we have this YAML file, without understanding what a pod or a deployment is, and it's so important to understand that. Thank you, thank you. Yes? [Audience] Managed Kubernetes, or do you roll your own? Oh, I'm going to refer that; I will have to answer that, or... how do we answer that? Do we use a managed offering, like EKS? Okay: there's significant overhead in trying to manage your own Kubernetes clusters.
Yeah, thanks. Oh no, it's good, good questions, but some of them I might rather defer. [Audience] Have you considered using Argo Rollouts for deploying your services? Have you considered using Argo Rollouts? Argo Rollouts... do you use Argo? We do use Argo Workflows, but we have a special set of use cases where we use Argo, and we are relatively new to using it in the app dev space, in the application space. But yeah, it's very much on the table for us. Yeah, thank you. I brought the whole team, so... thank you. [Audience] Do you use Kubernetes directly, or do you use tooling like Helm to manage your applications? Do you know the answer to that one? [Audience] Do you use Kubernetes directly, like plain YAML files, or do you use tooling like Helm to package your application as a chart, so it will deploy all the pods and everything together, or not? I am not sure of the answer on that one. Okay. Yeah. All right, so there are no more questions. Thank you for attending; I really appreciate you guys coming.