Welcome to CloudNative.tv. We are live on Search Magic with Syam, and this is episode number three. Before we start: this is an official livestream of CNCF and as such is subject to the CNCF code of conduct. Please do not add anything to the chat or questions that would be in violation of that code; basically, please be respectful of all your fellow participants and presenters. So, Search Magic is a show where we talk all about the Kubernetes certifications: CKA, CKAD, CKS. We go through some of the concepts, then do some problem solving and see how things work overall. In the first episode, we covered the introduction to the certifications: why they are important, why they play an important role, and why you should care about them. I think that's pretty important. And we covered all the building blocks of the certifications and the preparation material, like where you should be focusing, where you should be getting the learning materials from, and all that. In the next episode, we had a guest from the Linux Foundation training team itself, and we discussed the Kubernetes architecture and orchestration: what do you mean by orchestration, what is the architecture, how does it work, what are the building blocks? The scheduler, the controller manager, etcd. What is the control plane? What is a node? And on the node, you have the kubelet, kube-proxy, the CRI, and CSI. All these things we covered. There was also a deep dive demo on Kubernetes setup using CRI-O. If you have not watched, all of these are available on YouTube as well as on Twitch, so you can have a look. Today, let's continue that journey, and we'll be diving into more of the concepts. Most importantly, the Kubernetes objects: the pods, deployments, replica sets, how they work, and some scenarios from the certification point of view, since obviously that is what we are focusing on.
But I also want you to take away how you can use this in your daily, day-to-day work life as well. So we'll cover the concepts, but we'll cover them in such a way that they help you both in your work environment and from the certification perspective. Okay. So that was the brief overview for Search Magic. Let me share my screen, just a second. In the meanwhile, you can tell us where you are joining in from, and keep sharing. Yes, we also have two giveaways for this session, to make the stream more interactive. Since I'll be presenting, I have to switch back and forth between tabs, and that is the adjustment I'm doing right now; I'll be looking at the comments section as well, to see what's there and how things are working. Awesome. I have my equipment ready now. To make the session interactive: the two folks who are the most interactive will get a swag giveaway of a 50% discount voucher for your certification exam coupons. Let me just get that set up. There we go, okay, we are good now, and I'll share my screen very quickly. Awesome. I hope you are able to see the screen; let's cover some of the concepts. I see a lot of people joining in. Welcome, everyone: Girish, Sanskriti, Vanshek, Yuvan. So glad to have all of you here. Please share this on Twitter as well, because we are just getting started, and we'll cover tons of good material today, to be honest, and we'll do the hands-on as well. So, in Kubernetes, we have Kubernetes objects. We have pods. We have deployments. We have replica sets. We have stateful sets, daemon sets, and basically tons of other Kubernetes objects as well. We also have the concept of CRDs.
So when you have a controller deployed to extend Kubernetes, you can create custom resource objects as well, which behave in a similar manner: you'll have the same four sections, the apiVersion, kind, metadata, and spec. But the built-in objects are the basic building blocks which are very much necessary for getting started, at least. So, the smallest unit, or basically where your application actually runs: your application runs as a container within a pod, so a pod is the smallest unit. If you remember from the previous stream, we have this node which is joined to the control plane. You'll have the control plane, which is the main brain, and then you have the node connected to the control plane. The node obviously has different components: the kubelet, kube-proxy, and your container runtime, which can be Docker, CRI-O, or containerd. We did the CRI-O setup last time. Today I'll not show the setup, but I have a gist which you can use readily; it is containerd based, very simple. And then this is a pod. Let's expand this. A pod basically consists of one or more containers, C1, C2, C3, sharing the same network namespace and the same resources within the same isolation of the pod. They can talk to each other on localhost; all those things are there. And the pod gets its own IP from whichever CIDR range you have given while setting up the cluster, and then all the iptables rules are set up, so pod-to-pod communication and node-to-node communication work. But those are part of the networking section. Today, just to give you a glimpse, let me show you the CKA curriculum; it's loading. Yeah. So we did cover the architecture, installation, and some of those things.
etcd backup and restore we'll cover some other time, but at least we know how to set up a basic kubeadm cluster and all that. Today we'll focus more on the workloads and scheduling part and see how deployments and so on work, how the scheduling takes place, and all those things. With that, let's keep going and move to the section on pods. So we are talking about pods. The first thing that we have is the apiVersion, then we have the kind, then we have the metadata, where you can have the name abc, and then we have the spec section, in which we have the containers, then the image, which is nginx, and then the name of the container, which can be the same as the pod's name or different. Why am I telling you this? Because the three building blocks, the four building blocks to be honest, in any of the YAML files would be: the apiVersion, basically which API group and version the kind of object belongs to. There is v1, v1beta1, v1alpha1; there are alpha features, beta features, and the stable ones. That's how the apiVersion goes, so you need to check that. Then we have the kind: what you actually want to create, or what you actually want to tell the kube-apiserver to create and manage on its own. So if you submit a request to create a deployment using a YAML file, kubectl will convert that into a JSON object and pass it on to the API server, which then does the magic that it does: the scheduler will schedule the pod onto a node based on the request and the limit. We'll also talk about the resource request and resource limit, which is actually a very critical and important piece, and I will tell you exactly where you have to learn this, because the documentation for that is really, really solid.
After that, you have the metadata, where you define the name, the labels, and the annotations. The spec section is really the actual piece where you define the characteristics, like what exactly is the image that you need. It can obviously be nginx, or it can be your custom image; this is the application that you want to run. It can be any of your images that you have dockerized or containerized, just an OCI-compliant image that you can run from here. It can be on GCR, it can be on GitHub's registry, it can be on Docker Hub, or a specific container registry that you have set up; and then the name and all those components. Obviously there are a lot of other things: you can specify the node name, you can specify a node selector, you can specify the resource request and the resource limit for this particular container. There are tons of other things that you would be able to specify as well. So that was the pod. Now let's think from the certification perspective. From the certification perspective, you might be asked to create a pod: for example, a pod with a specific label, or with a specific name, or with a specific name and label in a specific namespace, or maybe a multi-container pod. All of these can be asked in the exam; very, very basic kinds of questions. So let's see that in action as well. I'll switch over to my terminal window; obviously I have to reshare the screen. Now you are able to see the terminal window. So we have kubectl get nodes, awesome. This is a 1.21 cluster with one master and two nodes, and then kubectl get nodes -o wide. Basically, -o wide is the flag with which you can get an additional set of information on any of the resources; you can use it on pods, you can use it on services.
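To make those four building blocks concrete, a minimal pod manifest might look like this; the pod name, label, and image are just placeholders, not the exact ones from the stream:

```yaml
apiVersion: v1          # API group/version the kind belongs to
kind: Pod               # what we want the kube-apiserver to create
metadata:
  name: abc             # any pod name
  labels:
    color: red          # optional label
spec:
  containers:
  - name: nginx         # container name; can match the pod name or not
    image: nginx        # any OCI-compliant image (Docker Hub, GCR, ...)
```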
It gives you additional fields. We'll also talk about how to get specific values, because those are very important from the certification point of view as well; basically, people call it the kubectl cheat sheet. You can see it shows a lot of things: the container runtime, which is containerd here, the OS image, the internal IPs, the kernel version, and the status. All these things are there when you are looking at the node. Now, when you are asked to create a pod in the certification exam, the easiest and fastest way is to create it via the kubectl command line itself, because it saves you time. I have said it multiple times in the certification context as well: you have to take care of the time, because the time runs out very fast. All the questions are scenario-based, so there can be long questions, but you must not get stuck on a particular question without looking at the clock, because that happens all the time. It has happened to me as well: on a particular question, I could waste a big chunk of my time which I could have used for solving other problems, and then come back to that particular question later and solve it. So don't do that. Take care of the time, and if you can't see what's happening, there are different tricks you can keep in mind: if I don't know something, I can do step A, step B, step C. Have a plan; that is important. Aliases are completely your choice. I have never used aliases in any of my certification exams and I have cleared all of them, but a lot of people use them and they help a lot. I'm not saying they don't help, but they are totally optional and completely based on your choice.
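As an example of the aliases mentioned here, a common shell setup looks roughly like this; the exact shortcuts are personal preference, and these particular names are just popular conventions, not anything official:

```shell
# Optional shortcuts, typically added to ~/.bashrc or ~/.zshrc
alias k=kubectl
alias kgp='kubectl get pods'
alias kgn='kubectl get nodes -o wide'
```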
So if you think that kgp is way faster than kubectl get pods, then definitely set up the aliases first and use them. It's pretty simple from that perspective. So let's do kubectl run. Let's say you have to run a pod: we'll run an nginx pod and give it an image, --image=nginx. You can define other attributes like the port and all those things; I'll show you that as well. You can see the pod is created. Now, if you do not specify anything, very simply, it will create that in the default namespace. You can do kubectl get ns; there is a set of namespaces already present, and with kubectl get pods you can see the pod was created in the default namespace. You can see it's in ContainerCreating, which means it is just pulling the image and getting ready. There are some other pods, part of deployments that I had already deployed onto the cluster. Now, another thing: if there is too much you have to customize, like a multi-container pod, that's a very good example. kubectl run example-pod --image=nginx, let's say, because we have to create a multi-container pod with two images, and then -o yaml and --dry-run=client. This gives you a YAML file. Now you can redirect that into multi-mc.yaml; basically, you immediately get a sample of the YAML file. Some of the things you can then do immediately: if you want to run it with a specific label, definitely add that, like color: red. And maybe the container name is important. And then resources: obviously you may have to set some resource limits, and maybe you need to schedule it on a specific node. Actually, I don't remember the name of a specific node, so kubectl get nodes. Maybe you want to schedule it on node three.
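The dry-run trick described here can be sketched as follows; these commands need a live cluster and kubectl configured, and the pod and file names are just the examples used on stream:

```shell
# Generate a pod manifest without creating anything on the cluster
kubectl run example-pod --image=nginx --dry-run=client -o yaml > multi-mc.yaml

# Edit multi-mc.yaml (add a second container, a label, a nodeName, ...)
# then create the pod from the edited file
kubectl apply -f multi-mc.yaml
```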
So let's give the node name, oops. Let's give the node name node3. Basically, I'm combining a lot of questions: I'm scheduling it on a specific node, which can come as a separate problem in itself, like "schedule a pod on this particular node". In practice, this can be very helpful where you have specific workloads which are targeted for specific nodes and you want them to run on a particular node. It happens all the time; it can apply to different types of workloads and different node sizes. Basically, I can have different node pools in my Kubernetes cluster: let's say I have different sizes of virtual machines connected. Or, to take the example of Civo Kubernetes, which is k3s based, I create a cluster with different node pools; in any of the cloud providers, there will be different node pools. So I add a small node pool first, of three nodes, then I add another node pool of maybe larger sizes, also of three nodes. I can then target nodes based on two things. One is the node selector; let me show you that. I think I have to share my complete screen, else I have to switch back and forth. Anyway, what I'll do is open that node selector doc, but first I'll finish this and then move to the theory section again. Also, we were talking about a multi-container pod, so let's add that over here: image, maybe redis, something like that, anything. And it can have a name, redis, and that's it, actually. So let's apply this: kubectl apply, and, see here, typos can happen; that's why, if you want to use the shortcuts, use them. kubectl get pods. You can see the example pod is getting created, and there is a difference in the numbers over here. So now you get to know another concept: if there are multiple containers inside a pod, it will show that here.
So 1/1 means one out of one container from that pod is ready and running. 0/2 means zero out of the two containers defined in the pod specification are ready or running; that's why it is in the ContainerCreating state. There are obviously different pod states: ContainerCreating, then Running, CrashLoopBackOff, Error, Completed, and all those states. There is the concept of init containers as well, which we can look at. I hope you're getting the concept. Both of them are running now, so let's see: kubectl describe pod example-pod. You can see, first it successfully pulled the nginx image, created the container, and started it; next, it started pulling the redis image, successfully pulled it, created the redis container, and started that. Above, you can also see both of the containers: one container with this container ID, and the nginx one with that container ID. So you have a multi-container pod running on a specific node, which is demo3; we actually specified the node name, so it ran on that particular node itself. And the label for that particular pod is color=red, the specific label we assigned. So you can actually write down, for the recap section, what we did: we created a pod; we created a pod in the default namespace; we created a pod with two containers, so a multi-container pod; we assigned the pod to a specific node; we created a pod with a specific label. That's pretty much it that we did till now, but that covers a lot of questions, actually. You can create your own cheat sheet and practice it very fast: how to create a multi-container pod, do a dry run, get the YAML file, edit that, and apply it.
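Putting the recap together, the manifest built in this demo would look roughly like the following; the node name, label, and images are the ones from my demo cluster, so adjust them for yours:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  labels:
    color: red          # the specific label we assigned
spec:
  nodeName: demo3       # pin the pod to one specific node
  containers:           # two entries make this a multi-container pod
  - name: nginx
    image: nginx
  - name: redis
    image: redis
```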
Let me just check if any questions are there; I haven't looked yet. Okay, I'm just scrolling up. "We have containerd as a container runtime; can we use Docker as a container runtime?" Basically, there are questions on the CRI, and I can't remember whether that's in the CKA; it can be in the CKS, I believe. We'll have to check, but it's okay. In the end, the CRI's part is pulling the image, running the containers, and all that stuff. Even if you have containerd, the kubectl commands for creating deployments, pods, replica sets, and services remain the same, so I don't think there will be any change. What else is there? Yeah, there are resource limits, which we'll be talking about; you can set resource quotas, yes, on pods and on namespaces as well. And yes, there are different ways: you can set the aliases yourself and then do all sorts of stuff; that's completely up to you. "How important is it to set up a .vimrc?" I have never set that, so it's completely optional, up to you. "What is the difference between node name and node selector?" A node selector basically matches based on labels, while node name means exactly specifying which node the pod is going to. Like I told you, you can have a node pool of large nodes with a specific label; if your workload can go to any of the large nodes, you can use a node selector easily. What else is there? "Can you explain the pod lifecycle?" I think I explained part of that: the different states a pod goes through while it runs. "Can two containers in a pod expose the same port?" The port obviously cannot be the same for both containers, because they share localhost. Think of it like your local laptop running multiple services: with Docker, you won't be able to run two services on port 80.
There'll be a conflict, as simple as that. Now let me switch back to what I was telling you, the node selector one, which is a good point as well. Where are we? Stop the screen share; you have to see my happy face again. Share screen, window, "Assigning Pods to Nodes", share. As you can see, this is the "Assigning Pods to Nodes" page; I can share the link as well if you like. Assigning pods to nodes is very simple: you can use a node selector and attach the label to the node. The node selector looks like this: nodeSelector, disktype: ssd. And that same label would be on the node, so any node having the label disktype=ssd is one the scheduler can schedule this particular pod to. And then you have affinity, which, if I remember right, is not part of the certification criteria. But theory-wise, it can be useful in some ways. Affinity and anti-affinity are an expansion of the types of constraints you can express with a simple node selector: you can have logical operations on top of labels. If we go over here, you can see you can have affinity, then node affinity. There are different modes, like requiredDuringSchedulingIgnoredDuringExecution. All of this is explained very deeply in this particular doc, which you can go through; we'll not cover that. Then there is inter-pod affinity and anti-affinity, so that pods do or do not land near specific other pods; match expressions; all those things are there. These are called constraints, and the documentation gives the practical use cases and states it very, very well. That's what I was telling you: there are courses on Kubernetes, which I have already mentioned, paid or free, that you can study from.
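The nodeSelector example discussed here, as a sketch based on the "Assigning Pods to Nodes" page; the label key/value is the docs' example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  nodeSelector:
    disktype: ssd       # scheduler only considers nodes carrying this label
  containers:
  - name: nginx
    image: nginx
```

The matching label goes on the node itself, e.g. `kubectl label nodes node3 disktype=ssd`.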
But the Kubernetes documentation is the single source of truth for you, because the Kubernetes documentation is superb: it explains the concepts in depth and has a lot of great examples. Like, you have a complete deployment with pod affinity and node affinity. You can see this one is pod affinity; the one above, let me check, yeah, that one should be the node affinity. Then further down, you have another thing, which I just showed the demo for: nodeName. It's basically the simplest form of node selection, where you are not setting constraints, you are not setting labels; you know exactly, okay, there is a node XYZ which I want this particular workload scheduled to, so you schedule it to that particular node. And yes, obviously there are some limitations to that which we have to take care of. That's pretty much it with respect to the kinds of scheduling you can do; yes, this comes under the scheduling section. Then I wanted to tell you another thing: managing resources for containers. Someone asked about that in the questions as well, and we'll explain it by writing it out too. So, for managing resources for containers, we go over here; oops, I have to connect again, just give me a second. Let me reload that; I should be visible now. Yeah, the screen should be visible now. So basically, we have "managing resources for containers": when you specify the pod, in the pod spec you can optionally specify the resources that are required for each container, namely the CPU and the memory. These are the resources you can specify. Now comes the concept of requests plus limits: you can specify how much the container has to request.
So, how much does the container request? Let's say you specify some memory in the request that a container needs, say one gig of memory. It will request that. But now you have to understand the difference between the resource request and the resource limit. Say you have specified one gig for the memory request. First of all, the scheduler will schedule the pod on a node which has enough capacity; obviously, it will not schedule it on a node where the memory left is less than one gig. It will schedule it on a node which can cater to this particular request. Now, if you do not enforce the other part, which is called the limit, what can happen is this: the container needs a minimum of one gig, but if the node it gets scheduled to has, say, eight gigs, then it can consume more, because more memory is available. If the limit is not there, it can definitely consume more. So where do you define that? We went over here: you can see we have the pod, the pod spec section, and in the pod spec section we have containers, and I showed you there was a section inside the containers called resources, which was empty when we created the YAML file. Now it gets expanded: you specify the request for CPU and memory, and you specify the limit for CPU and memory. So we can enforce the limit as well, meaning it should not go beyond this, while the request is what it asks for up front. You specify both in the spec section of the container. Now let's go back to the actual resources page and see one example over here. Here you can see a pod, and it is also a multi-container pod.
And you can see that this particular container from the pod has, under resources, the requests and then the limits. You have to specify both the resource requests and the resource limits, and you can specify them for memory and CPU: a minimum of 64Mi memory and 250m CPU for the requests, and 128Mi memory and 500m CPU for the limits, meaning do not go beyond that. That's how it works. The page also explains how a pod with resource requests is scheduled: each node has a maximum capacity, and the scheduler ensures that, for each resource, the sum of the resource requests of the scheduled containers is less than the capacity of the node. Only then will the node be able to run that particular amount of requested CPU and memory for the containers in the pod. Very nicely written; that's why the documentation is your single source of truth. It explains very clearly, with the help of examples, how you can use all of this. So that was the part about the node selector and resources. The last thing we'll cover today is the deployments part, which is very important, and, just like we covered five or six scenarios in a single example for pods, we'll again cover five or six scenarios in a single example with respect to deployments. Like pods, deployments are again Kubernetes objects. You can visit the documentation for deployments and see the use cases and all those things. Creating a deployment is very simple. By the way, in your certification exam, you have the Kubernetes documentation allowed.
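The requests/limits block discussed above sits inside each container's spec; roughly like this, using the values from the docs example (the container name and image are placeholders):

```yaml
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        memory: "64Mi"   # scheduler only picks a node with this much spare
        cpu: "250m"      # 250 millicores, i.e. a quarter of a CPU
      limits:
        memory: "128Mi"  # container may not consume beyond this
        cpu: "500m"
```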
So make sure you use it very wisely, because sometimes, especially for cases like persistent volumes and persistent volume claims, which you cannot create directly via the imperative kubectl command line, you can directly search for them. For a lot of things, you will directly get a spec from the docs, and you can just edit a few fields and immediately apply it; sometimes this way is much faster than the imperative way. In this particular case: deployments are a declarative way of managing pods and replica sets. You describe the desired state of the deployment, and, as we discussed in the previous session, the deployment controller takes care of the deployment: it creates the replica sets and makes sure you have the specified number of replicas running at all times. That is the power of Kubernetes you are leveraging. You are telling the deployment controller: hey, Kubernetes, I want a minimum of three replicas for my application, so that if my traffic is higher, I can handle it. If I know traffic to my application will be high, I can have more replicas; I can have ten replicas. And there are autoscalers, the Horizontal Pod Autoscaler (HPA) and the Vertical Pod Autoscaler (VPA), that can automatically scale your deployments based on certain metrics. You can implement those at a later stage, but the first building block is to create a deployment and set a desired, minimum number of replicas. In this particular case, there will be a minimum of three replicas running at all times: the deployment will create a replica set, and you will have the minimum number of pods, as defined in the replicas field, running. If you define three, it will run three.
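A minimal deployment with three replicas, as a sketch; the name, labels, and image are placeholders in the style of the docs example:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3              # deployment controller keeps three pods running
  selector:
    matchLabels:
      app: nginx           # must match the pod template's labels
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
```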
So now, the questions. You can be asked to: create an XYZ deployment in the ABC namespace with a certain image; change that image; scale the deployment; check the status of the deployment; record the deployment's rollout so that you can do a rollback. So you can do a rollback as well: if the image that you have specified is not right, you can immediately roll back the image. That's also one of the scenarios that can be asked in the exam, and these are cases you can tie into your day-to-day life as well: obviously, if you update a particular image and it is wrong or not working, you can immediately roll the deployment back to the previous one. There are different deployment strategies; rolling update is there, and I think it's the default one. What else can there be for deployments? I think that's, all in all, a very summarized list of what deployment questions can be about; obviously there can be others, but for now we'll do this. So, we have a deployment and all those things. Okay, let's do that. All the commands are actually there under "Updating a Deployment": you can set the image and so on. We'll be using exactly the same scenarios for setting the image and doing all that stuff. So let's go back to the terminal. I think I'm not sharing the right screen. Okay, anyway, I didn't type much, so it was caught early on. Let me share the right one. Actually, I do not want to share the entire desktop; that's why you have to see my happy face again and again. Let me share the terminal window. I also want to share a very interesting thing with you: when I'm talking like this, I suddenly remember something which I feel I should be sharing with you.
And then it's like, okay, I'll share it after this particular demo part. So things are going back and forth, but it's okay. We already have kubectl get deploy; I'll just delete that: kubectl delete deploy nginx. Okay. What we'll do is, obviously, kubectl create: a deployment can be created using the kubectl create imperative command. If you just press enter, you will be given a lot of awesome options and demo commands as well. Like, okay, let me do kubectl create deploy, and you'll be shown something; maybe --help can help. You can see the usage: kubectl create deployment, give the deployment name, give the image name, and some of the options. These options can be the port, the replicas, and the output, if you want the output written to a file; and then you can have the dry run, --dry-run=client, and get a copy of the YAML like we did. So let's do that first: kubectl run nginx --image... sorry, not run: kubectl create deployment nginx --image=nginx --dry-run=client -o yaml. One hyphen was missing; as always, something has to be missing. Okay, it gives me a raw YAML. Again, if there are scenarios where you need to edit the YAML file as per the question instructions, as per the scenario instructions, you can quickly get this YAML file and store it: just redirect this into a dp.yaml, and then edit dp.yaml. If we open that in vi, we can maybe set the number of replicas to two, we can change the container name in the spec to nginx-2 or something like that; anything can be changed. We can apply a specific strategy, and, what else, the resources section is again empty, but we can specify the requests and the limits over there as well.
So let's create that, actually, and we'll remove all these edits: `kubectl create deployment nginx --image=nginx`. Let's press enter, yes. We have successfully created the deployment with the nginx image, and obviously we can change the image. So the next question is: I want to change the image. Let me show you the screen: `kubectl get deploy`, and we can see the nginx deployment is there. We can set the image for that, and there's a very easy command: `kubectl set image` for the nginx deployment, from nginx to nginx:1.18. You can see the image is updated. I should have actually shown you the previous one as well. We can describe it, and you can see the deployment controller scaled up and scaled down, because it is changing the image; the image is getting changed to 1.18. Now that command is done. Next was scaling: `kubectl scale deployment nginx --replicas=3`. Before that, `kubectl get pods`: you can see, from 55 seconds ago, just one nginx pod running for that deployment. Now we make it three, do `kubectl get pods` again, and you can see two more new pods are there. Actually, we can do `kubectl rollout status deployment nginx`; this checks whether the deployment has successfully rolled out, and it becomes very handy in automation: you keep checking the rollout and only then move on to the next step. So if I go back and do get pods, I'll see all the pods in the running state. So we created the deployment, we scaled it, we changed the image. We can record that too: let's change the image to 1.19 with `--record`, and if I now describe, we can see the image is changed to 1.19 and how it is being done. And now we can see `kubectl rollout history deployment nginx`.
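The record-and-history part of that walkthrough can be sketched like this. One caveat worth knowing: the `--record` flag shown in the demo is deprecated in newer kubectl versions, though it still works and still fills the CHANGE-CAUSE column.

```shell
# Update the image and record the command in the rollout history
kubectl set image deployment/nginx nginx=nginx:1.19 --record

# Wait until the new replica set is fully rolled out
kubectl rollout status deployment/nginx

# List revisions; CHANGE-CAUSE is populated because of --record
kubectl rollout history deployment/nginx

# Inspect one revision in detail (revision number is illustrative)
kubectl rollout history deployment/nginx --revision=2
```

Having the change cause recorded is exactly what makes the rollback in the next step easy to reason about: you can see which revision ran which image.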
So you can see it is this, and now let's do the rollback. What was it, `kubectl rollout undo deployment`? Yes, we can do the rollback by undoing the deployment. Let's do `kubectl get deploy` and `kubectl describe deploy`, just to show you that it has been changed back to 1.18. These are some of the scenarios that can be there in the exam, and some that can be helpful in your day-to-day activities as well. Scaling up and scaling down is very common, and when implemented with the Horizontal Pod Autoscaler, you can have it done automatically. And you can also do one more thing: `kubectl expose`. If I run `kubectl expose pod --help`, it gives you some very handy examples. You can see: create a service for a pod. So you run `kubectl expose pod` with a valid pod name, then you give the port number, and you can also give the service a name like frontend, or leave it without a name. You can also specify the type and the protocol. So let's do one quickly: `kubectl get pods`, there's one, let's take this one: `kubectl expose pod` with that pod name, `--type=ClusterIP --port=80`. There are different types of services, NodePort and others, which we'll cover some day, but for this one, ClusterIP. `kubectl get svc`, and you can see the service for my nginx pod with its cluster IP and port. Then `kubectl get nodes -o wide`; now I can take any of the IPs. Obviously, external IPs are not displayed right now because I have not done that particular portion. But these are some of the things we get from setting up the cluster, which I'll show you how to do very, very quickly.
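A minimal sketch of the expose step just shown; the pod name below is a made-up example of what a deployment-created pod looks like, so substitute whatever `kubectl get pods` actually prints.

```shell
# List pods and pick one created by the nginx deployment
kubectl get pods

# Put a ClusterIP service in front of that pod on port 80
kubectl expose pod nginx-7c5ddbdf54-abcde --type=ClusterIP --port=80

# Verify the service and note its cluster IP and port
kubectl get svc

# Node IPs, for when you later use NodePort or external access
kubectl get nodes -o wide
```

A ClusterIP service is only reachable from inside the cluster, which is why the demo stops at `kubectl get svc` rather than curling it from the laptop.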
So we have done pods today. Let me switch back once again, but let me see if my AirBeam is still alive. I think it's gone again. Yep, let me just stop sharing, happy face, and share back this particular screen. Yep. So we have done Kubernetes objects. We have done pods. We have done deployments. And we have seen creating them, editing them, multi-container pods, rolling them out and scaling them, status, record, rollback, changing the image, and editing the YAML file: the requests and limits you can specify in the resources section of the containers, node names, scheduling on a specific node based on node name, and node selectors on the theory side. All these things we have done. Very quickly, have a look at this particular gist. I created this gist to set up Kubernetes on three nodes: I created three compute instances on the Civo platform, and then I have done this setup, where I set up containerd, because I told you this cluster is containerd based, then the swapoff and the fstab entry, and then all the fancy stuff for setting up kubeadm, kubelet, and kubectl and holding them at a version. And then, this part is not required. After that we do `kubeadm init` and specify the CIDR range, and boom, after that we do all the steps defined in the output of the kubeadm command, and then we join the respective nodes. That's pretty much it. Let me go to the chat; I don't think it was echoing. Yeah, let me go back to this. "In the exam, for a question on a particular node, should we go for node name or node affinity?" It really depends what the question is, and obviously the weightage as well.
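Hedged sketch of the flow that gist follows; it assumes the Kubernetes apt repository is already configured on each node (as the gist does), and the pod CIDR shown is just a common example, not necessarily the one used on stream.

```shell
# Disable swap now and on every reboot (the kubelet requires swap off)
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab

# Install containerd as the container runtime
sudo apt-get update && sudo apt-get install -y containerd

# Install the Kubernetes components and hold them at their version
sudo apt-get install -y kubeadm kubelet kubectl
sudo apt-mark hold kubeadm kubelet kubectl

# On the control-plane node: initialize with a pod network CIDR
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# On each worker node: run the join command that kubeadm init prints, e.g.
# sudo kubeadm join <control-plane-ip>:6443 --token <token> \
#   --discovery-token-ca-cert-hash sha256:<hash>
```

The join command is deliberately left as a placeholder: kubeadm generates the token and CA hash at init time, so you always copy it from that node's output rather than writing it by hand.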
Personally, if the question is very straightforward, that this pod has to be scheduled on this node, why not go for node name? It's very simple. Labels and node affinity come in when you need more flexible scheduling rules over the pod lifecycle. Yes, that's a good hint, okay? There are two deployment strategies, recreate and rolling update. Okay, there's a good conversation going on between Usmavthab and Girish, which is pretty good. I'll cover maxSurge and maxUnavailable sometime for sure, but I think that's already been answered. I hope you're asking for the link to the gist; let me first see if it's public. I think it's public, I never made it the other way. Anyway, these are the commands, and I use this particular platform. Obviously, Civo is where I work, so you can sign up or log in and you'll get free credits as well. That's pretty much it. If you want more deep-dive stuff on some of the cloud native technologies, I have my own YouTube channel as well: you can go to samparton.com slash YouTube, where I stream about cloud native technologies, which obviously relate to the Kubernetes stuff. A lot of things are over there if you go to the deep-dive sessions. There was a question last time that I should cover more of the other cloud native things, but this particular stream is focused only on Certs Magic, only the certification stuff. I do cloud native on my own channel; you can subscribe to that and a lot of that stuff. But you should definitely follow CloudNative.tv: awesome shows out there, and awesome cloud native folks, the ambassadors and the cloud native community, have come together to put down a great schedule.
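To make the nodeName-versus-node-affinity trade-off concrete, here is a sketch of both approaches; the node name `worker-1` and the `disktype=ssd` label are invented for illustration.

```shell
# Simplest case: pin the pod to one exact node by name
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pinned-pod
spec:
  nodeName: worker-1          # bypasses the scheduler entirely
  containers:
  - name: nginx
    image: nginx
EOF

# More flexible: require any node carrying a matching label
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: affinity-pod
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values: ["ssd"]
  containers:
  - name: nginx
    image: nginx
EOF
```

If the exam question literally names the target node, `nodeName` is the fastest answer; affinity earns its keep when the question talks about node labels or classes of nodes rather than one specific machine.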
And a lot of great shows are coming up, and there are also sticker packs available on store.cncf.io, so check that out. So, a very good question: is there something that can be done on CKS? I am doing something on CKS very soon, a webinar on CKS. Yep, so this is the webinar, and it's on the 22nd of July, where I'll be talking all about CKS preparation and sharing a lot of things there as well. I can share the link for that, it's okay. Just keep an eye out; you can follow me on Twitter, because that's where I keep announcing all the certification things I do at different places. Obviously, at some point we'll cover CKS on Certs Magic as well, but not immediately, because we are trying to go step by step: I want people to get the knowledge for free and on the right track. So you can just watch episode one, episode two, and this continuation, episode three, and the next one will be based on other topics, maybe services, maybe networking, I haven't decided yet, but we will choose some topics and discuss them. That's how it goes. We'll try to keep the episodes independent: the previous topic was absolutely independent, so if you already have a Kubernetes setup, you can skip that, and if you know about pods and deployments, you can skip this one. I'll try to keep them independent like that. And for the winners, let's see who was most active in the chat. I think Girish has been active since the very beginning, so Girish is one of the winners of the swag. Girish, please reach out to me on Twitter so that I can give you the coupon. And for the second one, I think Usmavthab1995 answered the questions really well and kept things interactive.
So first of all, thank you for answering, Usmavthab; you are the second winner. Congratulations to Girish and Usmavthab. Please reach out to me on Twitter, my DMs are open, and I'll give you the 50% off coupon code for the certifications. I hope you get certified soon. And I hope you liked the session; please share it so that more people can join next time. This is bi-weekly on Thursdays, 8:30 PM IST or 8 AM PT, and it will be continuing. Thank you so much for tuning in, folks. I really liked interacting with you all and explaining some of the Kubernetes concepts I like in simple terms. Please do let me know how things are going and what else you would like to see; you can just tag me on Twitter and tell me. And yes, please let me know the good things as well, what you really liked about the session. Thank you all, please follow CloudNative.tv, and enjoy the other awesome shows. Bye.