Hello everyone, and welcome to CNCF Cloud Native TV. This is the Cert Magic Show, and today is a very exciting day. Before we start, let me read out the CNCF Code of Conduct: this is an official CNCF live stream and, as such, is subject to the CNCF Code of Conduct. Please do not add anything to the chat, or any questions, that would be in violation of the Code of Conduct; basically, please be respectful of all your fellow participants and presenters. We are streaming live on Twitch at Cloud Native TV, so make sure you hit that follow button, and make sure you keep the stream interactive. The Cert Magic Show is obviously all about certifications, and in the last stream I discussed the importance of certifications, what Kubernetes certifications exist and where we are with respect to them: how many there are, what the course curriculum is, and how things work during the exam. All of that was discussed in the previous episode; I have posted the YouTube link for it if anybody wants to check it out. In today's session I'm joined by Tim, who is an official instructor at the Linux Foundation. Welcome, Tim, to the show; please introduce yourself to the community.

Hi there, thanks, thanks for having me, I appreciate it. As mentioned, my name is Tim, Tim Serewicz. I'm actually a training program director for the Linux Foundation, and I'm also the author of the three instructor-led training courses we offer in Kubernetes, which line up with the CKA, the CKAD, and the CKS exams.

Awesome, so glad to have you, Tim. This will be a really exciting stream because we have tons of learning. Basically, if you have ever thought of starting your CKA journey, this episode would probably be the best one to start with, because we are actually starting from scratch. We'll be discussing what Kubernetes is, its architecture and components, what the YAML components are,
what a namespace is, and all that. We also have labs that we'll do live, so you can see how things are actually set up. Obviously this maps one-to-one with the certification exam, because you need an environment for practice, and there can be exam scenarios based on similar concepts. The curriculum we are targeting today is cluster architecture, installation, and configuration; we won't touch much on the configuration part, but from the curriculum, this is what we are targeting to achieve in this particular episode. So, without wasting any time, we'll get started. Just make sure to follow at Cloud Native TV. One last point: there are two giveaways, 50% discount coupons, that I'll be doing at the end. Whoever is most interactive in the chat, asking a lot of questions and keeping it interactive, will get those two coupons. So with that, Tim, maybe we can start with what Kubernetes is, a Kubernetes introduction.

Sure, sure, happily, thanks very much. Starting off, before I share my screen, one of the things I always like to cover is what it is that Kubernetes solves that previous ways of doing things didn't. This is probably the biggest takeaway if you don't get much else: Kubernetes is not just another VM management tool. I want to say that before I share anything, because I want to make sure everybody gets it. This is the biggest hurdle; people assume this is just another flavor, "I used OpenStack, I used VMware, this is another VM tool." The architecture itself is different. And because the architecture is different,
it also means that our applications need to be different. Part of being an admin for a cluster, of course, is the care and feeding, the installation, the various things that go into it. But it's also feeding back to the other people in your organization, the developers, the other folks you might be interacting with, so they also understand that this is not just another VM management tool. It's distinct, which is why talking about architecture, and perhaps it's just me, is probably the heaviest lift, the most difficult stuff to get, but also the most important. So with that, now that I've hopefully impressed that upon you, let me go ahead and share my screen... oh, it shows up on both screens. Okay, let me try that a different way. Let me share this PDF with you for now, and we'll share the other one later. Hopefully you are seeing it; am I good? Okay, good deal. So this is a page from our Kubernetes admin course, LFS258; it's basically the first chapter, and one of the things we get into is that Kubernetes is orchestration software. So when push comes to shove, why do we care about it? Well, it orchestrates: think of an orchestra, everybody playing the same music at the same time.
Well, Kubernetes is an orchestration tool for containers. What we're looking at here in this graphic is, on the left, our control plane, and on the right, a worker, and those are the terms we use. We're moving toward inclusive naming, so be aware that some of the commands inside Kubernetes still use the previous names. We're moving to "control plane," which I'll shorten to CP since it's easier to say, and then a series of workers, which you might also see called minions in some documentation you run across. The nice thing is that Kubernetes itself follows the same paradigm: it's a decoupled, transient, microservice-based tool, and that's what we want to deploy as well. We're moving from VMs to containers, and it's not just "I'll containerize my VM." We also want it to be decoupled, meaning it's not reliant on somebody else, and transient, in that the various components will be killed on a regular basis. This is usually the biggest stumbling point: yes, I'm going to kill this container three times today, and I'm going to move it to a different node. If you were to go to a legacy DBA and say "I'm going to terminate your database three times today," they would probably have an issue with that; you'd have some long conversations about it. But this is what I'm trying to get at: the whole nature of this setup is about moving away from VM management into a different architecture.
So traditionally, with legacy environments, we had legacy apps that were monolithic and finely tuned for the equipment we had. Say you had an eight-processor box and 32 gig of memory: chances are the load would eventually get to the point that you had to do some tuning and tweaking and optimization. That makes sense, but at some point you have to keep buying bigger and bigger boxes, or do more and more tuning, and that makes the app very specific to that equipment. Then if you want to grow to a larger system, you have to start the process all over again. So many years ago, when containers were first really becoming a thing, the people at Google started a project called Borg. Kubernetes has actually been around for almost 20 years now, but the first 15 of those were as this internal, somewhat secret project called Borg. Google used it to run their business around the world. Instead of going toward mainframes, which a lot of other big companies have done, they said, "we'd have to keep buying bigger and more expensive boxes," so they went the other direction: commodity systems, which I think is a polite way of saying modestly priced, or low-end. We're not investing in bigger and bigger servers with fancier backplanes, more need for high-speed buses, complexity, and ongoing cost; now we're talking customer-replaceable units, rack units that I can swap out. What we want is our application, whatever that application is, to run across lots and lots of systems, where no single one of them necessarily has to be important. And that's what Borg was really about doing.
So if you've ever used a Google product, Gmail, Google Maps, anywhere in the world, you were probably leveraging Borg at some point. When they gave it away, they gave away, of course, not everything, just the core of it, and that's what became Kubernetes, which is Greek for pilot. Technically it's the oarsman, the person holding the wooden oar in the water, but we call it pilot or helmsman, the person steering the boat. It's orchestration software, and the purpose behind this orchestration software is to have an application running across lots and lots of nodes. We don't want big nodes; we want lots and lots of commodity nodes. We aggregate all the processors and all the memory together to say, "my computing environment is capable of running this app with, say, 512 different processors working on it." How we get there is by running our containers, which are microservices. We're not looking for large monolithic apps; we want to divide them up into various tasks and run those in different places. So instead of a monolithic app that might do everything, you'd have a front end that accepts an API call, then a separate authentication microservice someplace else, then a database, then something else; you divide it up by task. There really isn't a definition of how small a microservice should be, but we want to make sure it's scalable and durable, and that's where the decoupling comes in. We want it to be transient, meaning "I'm willing to go away and be regenerated," or "whoever I was speaking to, I'll wait for them to come back." We want to write that into our code, and then the orchestration software we're running, which is Kubernetes, handles that. It says, "I will take care of it; if you go away, I'll give you a new one."
I'll give you a new one And so that's where we're going that's the high-level view of why we care about Kubernetes what it does for us and Why it's not just another VM management tool if we kind of get that understanding that we're going away from monolithic apps into decoupled Transient microservices and why then as we talk about the components that do it. Hopefully it will make more sense so on the left-hand side here we see our control plane and The most of the stuff that we see in this graphic are actually containers themselves There's one exception to that and that's this container called cubit, which we'll talk about in a sec But let's follow a call from the outside world through the process of Perhaps making a pot and we'll define and and talk about the components along the way So on the left-hand side, we see that there is a cube CTL command or cube cuddle There's a ongoing email about what's the proper way of calling that tool? So let's let's call it cube cuddle for now what cube cuddle actually does for you is Among you know many things not just one thing but the kind of the the main component is a curl It's a curl request with some sort of HTTP verb get post delete and so forth As a result, it's an API call So we're making a curl request which you can do either you through cube cuddle Or you could generate your own curl command if you know what the certs are and you send it to the cube API server As kind of a self-documenting name there the cube API server handles your API calls You'll notice when looking at the various arrows all of the API calls everybody talked to the API server The API server handles API's in keeping with a decoupled Transient microservice concept all it does is handle the API So it's not actually handling and managing what the API call wants to do It's really just arranging and does three things first are its authentication Are you really who you say you are that's done by default through a token of x509 token But you could also 
point yourself at a single sign-on through a webhook request. The second thing it does is authorization: whatever the curl request was, are you authorized to do that? We do that using RBAC, role-based access control. So if you really are who you say you are, and you're authorized to create or delete or look at whatever that component is, then the third phase of what the kube-apiserver does is admission control, or admission controllers, because there's more than one; this is where it actually handles the API call. In this case, let's say I asked for the creation of something called a Deployment, which is a default operator we would use. So I send a curl request to the API server saying, "please create a deployment for me." The API server then, assuming my token is proper and RBAC says I'm allowed to do that in this namespace, will communicate with these other pods, each of which has its own particular purpose, its own microservice. With etcd, you'll notice the only agent talking to etcd is the API server. etcd keeps track of the persistent state of your cluster. It's not a database for end-user usage; it's just for what's going on with your cluster, and that information can be divided into two parts: the spec, which is what should be, and the status, which is what is. I'm oversimplifying to some extent, but that's what etcd keeps track of, in a JSON format: what should be, and what the current situation is, and it persists there. It's also cached in memory by the kube-apiserver container, which checks the cache and, if it needs to, writes to etcd or references the information there. So only the API server talks to etcd, and that's what happens when I make a request like "I would like to create a new deployment."
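As a sketch, the kind of deployment being requested here could be written as a manifest like the one below; the name "test" follows the example used later, but the image and labels are illustrative, not from the stream:

```yaml
# Illustrative Deployment manifest; the nginx image and app=test label are example values.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  replicas: 1              # the default replica count discussed below
  selector:
    matchLabels:
      app: test            # the selector that ties the operator to its pods
  template:
    metadata:
      labels:
        app: test          # pods carry this label; the ReplicaSet finds them by it
    spec:
      containers:
      - name: web
        image: nginx:1.21
```

Applying it with `kubectl apply -f deployment.yaml` issues exactly the kind of authenticated API call being described.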
The API server will check, if it doesn't already have it in its own memory: does this token match, is the RBAC setting appropriate, and then the spec needs to change; there needs to be a deployment. And that's kind of where its job ends for now. The kube-controller-manager, at the top of the control plane, is our brain. That container has all of our operators in it, and "operator" is a key phrase for understanding Kubernetes. Sometimes you'll see them called controllers, sometimes people reference them as watch loops; the more modern term is operator, because it operates on something. The entire nature of this orchestration system is decoupled and transient; it's an understanding that whatever I was talking with is going to go away. How we handle that is through a series of operators that are constantly asking for the spec, what should be, and then the current status. If they match, it just asks again, over and over and over, all the time. What's the spec? What's the status? Over and over. If the spec and the status don't match, that's where the operation part comes in. So in this case, when I made a request for a deployment, a moment later the kube-controller-manager would ask, "has the spec changed?" Yes, it has: there is supposed to be a new deployment; let's call it test. Okay, so the spec has now changed; there should be a deployment called test. A moment later it asks, "what's the status? Is there a deployment called test?" No, there's not. "I have something to operate on." So the deployment operator running inside the kube-controller-manager sees there's a difference and operates upon it: create this deployment. Then it goes back and forth, a lot of back and forth, between the brain, which is your kube-controller-manager, and your kube-apiserver.
The deployment operator actually manages a different operator called a ReplicaSet. This is another part of understanding the architecture of Kubernetes: you might have watch loops watching other watch loops, which watch resources for you. Instead of having one operator, one watch loop, that does everything, we have decoupled operators: you do your task and I'll do mine, we stay focused, we can be updated independently, we can each be optimal at our job and be developed separately, which is the same concept as the entire cluster and what we want our applications to do as well. So the deployment operator asks, "do I have a ReplicaSet?" Same thing: it goes to the API server. "What's my spec? Do I have this ReplicaSet?" Well, you just created the deployment, and the status is no, it doesn't exist. So a different operator is formed, a ReplicaSet, whose job is keeping track of replicas. That operator makes a request: how many replica pods do I have that are using this pod spec? They're replicas, meaning they use the same specification; they're running the same image, using the same components. So how many of them do I have? If I haven't told it otherwise, the replica count would be one, so the ReplicaSet says, "my spec says I should have one pod; how many do I have?" It makes this request by label: the architecture is based on an operator using a selector that ties to labels. That's what ties everything together. Yes, of course there are names and other components, but when it really comes down to it, each of these operators doesn't really know which components it should be keeping track of from one call to the next; there isn't a session concept. It's "what's the spec, and what's the status," and how does it know which objects I'm talking about? It has a selector that matches a label. So in this case, the kube-controller-manager's ReplicaSet operator asks, how many pods match this particular label,
app=test? None; you have zero. "Okay, I will operate on that information." And so it's back and forth; all of this is happening between the kube-controller-manager and your API server. All of that logic, all of that comparison, is happening just there; we haven't even gone to our workers yet. "I need to create a pod." So a pod spec is sent to the kube-apiserver saying there should be a pod running this image, with whatever other default parameters the operator has sent. The pod spec goes to the API server, and then of course, what does it do? Authentication, authorization, and admission control. "Now I have a pod spec; I need to send that somewhere to run. Who do I ask?" The kube-scheduler. That's the next pod running on that CP node. Its job, its singular job, it's a microservice, is to schedule stuff. It gets information about the available nodes and their condition: what size are they, maybe. Scheduling is very flexible; you can have multiple schedulers, so there's a wide range of flexibility here. Is there a taint or a toleration
it should be aware of? It's looking at all of this information, but what really comes back from the kube-scheduler to the API server is just "use this node, use node two." It does all the logic about what's optimal according to the scheduler's algorithm: a predicate phase, where it takes nodes away from the possible list, and then a priority phase for the remaining nodes, deciding which of those still on the list is best. The scheduler returns to the API server and says, "I choose worker number two," whatever the case may be. At that point the kube-apiserver, which again doesn't really do anything but handle those API calls, will persist some of that information to etcd, saying it's supposed to be running on worker two, and then send it to the kubelet. So let's go with that middle worker, worker two: it sends the pod spec to the kubelet. The kubelet, which runs on every node, is what actually starts your containers, though it doesn't do it directly. The kubelet is a systemd service; it's the one thing here that's not a pod, it's what starts all the pods. So the kubelet gets the pod spec, and it's the kubelet's job to talk to your container engine, whatever that container engine may be. And that's just it: the cluster doesn't really care what the actual engine is. It could be Docker, it could be CRI-O, containerd, frakti; lots of options out there, and we don't insist on any one of them. As long as the kubelet on that particular node knows how to talk to the engine and tell it what to do, it's happy, we're all happy. So in this case the pod spec is sent to the kubelet, which, as a systemd service, accepts the pod spec and goes through a process of "do I have everything in this pod spec?" Now, at the same time that it's doing that, a
message is sent from the API server to every kube-proxy, not just one but every single one of them, and that's an important thing to understand. Yes, with a replica count of one, only one worker is getting the pod spec, but while the kubelet is handling the container, it's a microservice, and we have kube-proxy handling the network side of things. If there is anything having to do with the network being configured, that actually happens on all nodes, which is why you can talk to any worker, any node really, and still get to the pod, even if it's not where the pod lives. So we have that flexibility; everybody gets these rules, and we have a network plugin running that helps with that communication as well. So one kubelet gets the pod spec, and all of the proxies get any necessary information and arrange your iptables rules for that layer of communication. Going back to the kubelet: it accepts the pod spec and asks, "what do I need?" If there's a volume listed, the kubelet is who talks to the kernel to get that volume mounted, and this can be important when we start talking about access to our volumes. It's important to understand the container does not do the mounting; it's the kubelet that does it, and that happens before the container is even started. So it mounts by talking to the local kernel, and then makes a symbolic link available to wherever the container will end up being. If you have these things called Secrets or ConfigMaps, this is another part of the decoupling of our environment: we want the smallest amount of damage possible from any kind of parameter or value or file that might change, so we want that decoupled and separate. We can do that with a Secret, which is encoded or encrypted, or, neither encoded nor encrypted but more flexible, a ConfigMap. So it's the kubelet's job to request all of this information.
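A minimal sketch of that decoupling, with hypothetical names, might look like this: a ConfigMap holds a plain value, a Secret holds a base64-encoded one, and the pod spec asks the kubelet to make both available before the container starts:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config          # hypothetical name
data:
  LOG_LEVEL: debug          # plain text, neither encoded nor encrypted
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secret          # hypothetical name
data:
  password: cGFzc3dvcmQ=    # base64-encoded, not encrypted by default
---
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: main
    image: nginx
    envFrom:
    - configMapRef:
        name: app-config    # kubelet fetches this before starting the container
    volumeMounts:
    - name: creds
      mountPath: /etc/creds # kubelet mounts the secret, then links it into place
  volumes:
  - name: creds
    secret:
      secretName: app-secret
```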
So: mount the resources, download any secrets, work with whatever resources were in the pod spec, whatever they may or may not be. When it has everything, the pod leaves its Pending state and the kubelet tells Docker, "go ahead and start these containers." One of the things that happens is that a pause container is actually started first, and that's what holds the IP address. So your containers do not even know what their IP will be; it's an ephemeral IP, and they don't know it until they're started. We don't have networking inside of the pod, so some people who are used to Docker assume there must be another layer going on; whether you're using Docker or CRI-O, the IP is assigned, and you have one IP per pod. This is probably a good time to ask: what's this pod you keep talking about, Tim? Well, what we actually orchestrate in our environment are pods. A pod is one or more containers that have a single IP address, share a network namespace, and have equal potential access to storage. That's what we actually orchestrate, via the pod spec. The running of the container is not something Kubernetes actually pays attention to; it just talks to the engine, which should do that for you, and that could be Docker or CRI-O or containerd and so forth. Docker was the default; if you use kubeadm, it would still be the most typical and probably easiest way to do it. But be aware that now that Docker has kind of been pulled into Mirantis, it really isn't the Docker it was anymore, and the community is definitely moving toward other options, containerd or CRI-O. Red Hat uses CRI-O already, so there are a lot of people using it in that sense; containerd is pretty straightforward to use, and you can do other things with it. So the engine decision, from a cluster admin's perspective, might be something you want to sit down and have conversations about. When it comes down to it, as far as Kubernetes is concerned, a compliant engine runs a compliant image, and
"I don't really care." Nobody would know, and that's just it: nobody would know what the engine is if you're running a compliant engine. So on one hand, my life is much easier; on the other, I might want a feature that this or that engine gives me. For example, containerd allows me to run gVisor very easily; it's easy to get it up and running, and gVisor gives me some security, so that might be a reason to go with containerd. CRI-O is used by Red Hat, so there's a large install base; it's well known and well understood in that realm. So you have choices, but when it comes down to it, a compliant engine runs a compliant image and nobody knows the difference; it just runs. So the kubelet is responsible on whatever that worker is, and your worker, by the way, could even be a Windows server, because the overall cluster just says, "I talk to the kubelet, I send the pod spec to the kubelet"; it's the kubelet's job to talk to whomever or whatever that engine may be. So at this point the kubelet has all the resources it needs, and it communicates to Docker or CRI-O or containerd, whichever it is: "okay, here's your IP address, here are your other parameters, start that container for me." So that's it: we now have our running replica. How our system does orchestration, then, is back on the control plane: the kube-controller-manager has those operators, and they never stop asking. What's the spec? What's the status? What's the spec? What's the status? So if your container were to fail, if your node were to fail, just go away, blip, somebody pulls the power cord, those watch loops ask, "do I have something that matches these labels?" The deployment asks, "do I have a ReplicaSet?" Yes, still here. Okay, great. The ReplicaSet asks, "do I have a pod?" No, you do not have a pod with that label. Oh, well, the spec doesn't match the status;
I'd better start one, and the process continues. "Start a replica of this pod for me" goes to the API server, the API server asks the scheduler, and of course, if node number two is just gone now, the scheduler says, "well, that's not a good choice; you're going to worker number three," and this process continues, times as many replicas as you want, as many different options as you want. So we can orchestrate around anything: things can go away, we can add new nodes, we can grow our cluster from one node to five thousand nodes, we can scale our pods from one to ten, the deployment can deploy multiple replicas for you and change from one version to another; you can do rolling updates and rollbacks. This kind of decoupled, transient architecture, leveraging ongoing operators or watch loops, always asking, always checking, means that we're expecting something to change, and we operate around it. And that's one of the many reasons people like, or love, Kubernetes: we are expecting it to have issues, and that's built in. It's not the most efficient way; there are probably more efficient ways to do it, but only if you measure at the small end. I have hundreds of machines that are low cost, but my app is now running across them, and if anything happens, an operator will just start it again shortly, and it'll be made available to you. We talked about how kube-proxy and our network plugins run everywhere, so it doesn't matter who you talk to; we'll get your traffic to your pod wherever it may be, whether it's one replica or 500. And that's kind of a quick run-through of the major components of the architecture.

I mean, that was not a quick run-through, that was a very detailed run-through of the architecture and the components, and of what happens when a person types "kubectl run" with an image and a pod name: what steps it takes to actually run that small application or microservice, a simple nginx pod, on the Kubernetes system. So I think that
was a complete, end-to-end, detailed explanation of all the components on the control plane, the CP, and also on the worker nodes: how the kubelet works, how it interacts with the container runtime interface, and the CSI drivers if storage has to be there. I think that's a pretty neat introduction to the architecture, by far the best one I have ever heard, to be honest, and people in the chat agree with me, so I'm not lying. By now, those of you who are watching might get the idea of what Kubernetes is, because Tim has explained very clearly how the shift happened, how Kubernetes existed internally for a long time and then the core was opened up to open source. And this is the architecture you're seeing on the screen; it's pretty clear that all the components have their own respective meaning and purpose in the ecosystem: the controller manager, the brain; the API server, where all the communication happens; etcd, the cluster state; your kubelet, responsible for running the pods and interacting with the container runtime; kube-proxy for the networking and the iptables rules; and the scheduler for scheduling, finding the right-fit node for a particular workload. So I think that pretty much covers the introduction to Kubernetes and how a pod runs on Kubernetes, because these are the basic building blocks, a pod, a deployment, a ReplicaSet, and these are the components: the API server, controller manager, etcd, scheduler, kubelet, kube-proxy. With this, I think we are now in a good state to start exploring: if people want to set up a Kubernetes cluster, how do they do that?
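For reference, the "kubectl run" command mentioned a moment ago boils down to a pod spec roughly like this minimal sketch; the name and image are examples:

```yaml
# Roughly what `kubectl run webserver --image=nginx` asks the API server for.
apiVersion: v1
kind: Pod
metadata:
  name: webserver
  labels:
    run: webserver   # kubectl run adds a run=<name> label
spec:
  containers:
  - name: webserver
    image: nginx
```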
Like I said before, this is the Cert Magic Show; everything ties back to the certification. Obviously you have to have a cluster to practice on, that is very important, so this will not only help you stand up a Kubernetes cluster, it can also be helpful during the exam, because you might get a question where you're asked to create a Kubernetes cluster using kubeadm. So let's do the lab for creating the cluster, Tim.

Okay, sounds great. And in our courses, we don't write exam-specific courses, just to forewarn everybody; instead, we try to make you the best admin possible, which of course also means you'll be well prepared for the exam. So it's not that we ignore the exam, but a lot of times people expect a brain dump, "just tell me what's on the exam," and that's not what we do. We want to give you the skills to go into a production environment and get the job and do the job, which is what certification is also about. So I always like to preface with that when people ask, "is this exactly what I'll see on the exam?" It covers all the topics, working with all the tools you will need, but it's not an exam-specific thing. The way our labs are written, I write them to be as flexible as possible. We use a two-node cluster, and that's to expose you to networking issues and the evacuation of workloads from one node to another. You could run Kubernetes other ways; it's very flexible, there are 60 or 70 conformant software distributions out there, so you have options. But we try to expose you not just to what would work for the exam, but to what you're going to see when you get the job:
What is my cluster going to look like? So we use a two-node cluster, and we use kubeadm to build it. I've written the labs so that you could use VirtualBox, VMware, two spare laptops sitting around; you can use Google, Amazon, DigitalOcean — many options, because it's just two instances. The only provider that tends to have headaches, and we tend to just warn people, is Azure; they have some networking behavior of their own that's kind of interesting, and the labs tend not to run there, but they run everywhere else with just two instances. So in this case — this is Google Cloud — I'm not using their managed Kubernetes, I'm just using two instances. I am not able to see — I'm only able to see the Kubernetes architecture. Thank you, thank you for letting me know. I'm used to sharing my entire screen and not just the window, so let me share that window real quick. And a very, very good point made by Tim: the Cert Magic Show, or any training material that the CNCF has produced, is basically about enabling you to do actual tasks at your workplace, and in the last episode I discussed exactly the same thing — why certifications are important, because the learning journey will prepare you for your job at your work. Absolutely. Are you seeing the Google screen now? Okay, great. So I've just set up two nodes to be ready for the lab. One I called cp, the other one I call worker, just so you know exactly what I did. I went to Create Instance — and I know this is going to be really slow now that I'm trying to do it for everybody else — but the point is that you set up two instances. The big heavy lift that most people get stuck with is the networking side. So I'm just going to call this "test."
I'm going to choose a location to run it in. I want two processors and eight gigs of memory. It will run with less, but if you ever run out of resources, it kind of bogs down and gets confusing. So at least two processors and eight gig. We're going to change the image: at the moment we're still running Ubuntu 18.04, because that's what the exam uses. 20.04 is coming soon, and as soon as the exam team updates, then hopefully within a week I'll get my material up and running to match whatever the exam environment is. Then the hard part that most people get stuck at is down here, the networking. We don't want anything between our two nodes, and in most environments — whether it's VirtualBox, which is not really that open (you actually have to turn it to promiscuous mode), or VMware, KVM, QEMU, whatever it is — make sure that your two nodes have nothing blocking traffic between them. Later, once you have it working, that's when you go back in and start adding firewalls, but for now, let's make it completely open. So you go to Networking, and you can change it. In this case I have a network that's called "forclass," and if you dig into what it is, there's nothing blocked; everything is entirely open. So there's nothing between our nodes. That's usually the hard part, the setting up of the environment. With VirtualBox, people don't realize that it still doesn't allow all traffic; KVM, QEMU, and your OVS switch may not allow all traffic. So make sure there's nothing between your nodes. That's the hard part about this. Then you create it, and what you end up with is, in this case, a node that will be my control plane and another node that will be my worker. Amazon has the same sort of thing.
So here it's called a VPC on Amazon — I'm liking what they call it — but same concept: make sure you go into the network tab and allow all traffic. Not just this or that, like "oh, I'm sure this is all I need" — all traffic. Worry about tightening it and locking it down once you have it working. So then you end up with access to your nodes, and let me share that screen. So: stop share, share screen, application window. Okay, so I have an application window here. I'm just using a tool called Terminator. Hopefully you're seeing two different terminals. This allows me to go back and forth: on the top I've logged into my control plane, on the bottom I've logged into my worker, and so far I haven't really done anything at this point. So what I want to do now is get my system installed and up to date, so I'm going to go ahead and become root, and then I'm going to update and upgrade my environment just to make sure that it's current. I'm going to focus on the control plane for the moment, so I'm going to zoom into that so you can see it as it runs and scrolls by. Your Ubuntu 18.04 is getting a little old, so it might ask some questions during the update: do you want to allow a restart? Do you want to use the local version of a config file?
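The housekeeping steps just described can be sketched like this (a sketch assuming a fresh Ubuntu 18.04 instance; the editor choice is whatever you prefer):

```shell
# Become root and bring the OS current before installing anything
sudo -i
apt-get update && apt-get upgrade -y

# Optional: install an editor if the image doesn't ship one (vim, emacs, nano)
apt-get install -y vim
```

Run the same on both instances; the rest of the control-plane steps diverge from the worker only at `kubeadm init`.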
And you might be asked time and date questions if you install CRI-O. In this case, hopefully it asks me these questions shortly, but as it's installing, where we're going with this is: we get the OS up to date, we add a repository to get to the software, then we install the software and use the kubeadm init command. In this case it didn't ask me any questions, but it might, so if it does: allow the reboot, and keep the local version. Now, if you don't have an editor, you might want to install one — vim, emacs, nano, doesn't really matter — just make sure that you actually have one. Now I can install Docker, apt-get install docker.io, or if you want, you could install CRI-O instead. Since CRI-O is a little bit more complicated, why don't we try that here so you can see it? So either you do an apt-get install docker.io and go, or here are ten steps for getting CRI-O to work. These are some of the things needed to get CRI-O to work; containerd is a little easier. I actually chose the hard one, because if you can get the hard one to work, the others should be a little easier. So in this case, modprobe the overlay and br_netfilter modules, and then I want to make sure this is persistent, so I'm going to edit a sysctl file, /etc/sysctl.d/99-kubernetes-cri.conf, which runs last, and inside of it I'm going to make sure that bridged IP forwarding is turned on and that my bridge interfaces are allowing it and paying attention to it. So here we see the three different parameters saved to a file. Of course I want to make sure I didn't mess that up, so: sysctl --system, and you should see at the bottom that it's applying those changes among everything else you may have done. Now we use the openSUSE builds of the software, so just to make life a little easier —
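The module and sysctl steps above boil down to the following (run as root; the file name mirrors the lab's convention, but any name under /etc/sysctl.d/ works):

```shell
# Load the kernel modules CRI-O and bridged pod networking depend on
modprobe overlay
modprobe br_netfilter

# Persist the bridge/forwarding settings so they survive a reboot
cat <<EOF > /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

# Re-apply every sysctl file and confirm the new values are picked up
sysctl --system
```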
I'm going to set — so I did an export of the operating system, OS=xUbuntu_18.04, and of course that will change depending on what version you're using, and then an export for whatever version of Kubernetes and CRI-O you're planning on using. CRI-O gets updated in accordance with Kubernetes — a little bit behind each Kubernetes release — so you get a version for that. Then I'm going to use an echo command to create an apt sources list for the openSUSE repository. This is what goes into your file: deb, download.opensuse.org, repositories, devel kubic libcontainers stable cri-o, and then I passed in the VERSION and OS variables. You can see what actually got put in there was 1.20 and xUbuntu_18.04. For your versions you can always go to download.opensuse.org and explore, so if it changes and you can't find it, go there and you should be able to find those resources. Now, of course, we want to be able to actually use that software, so we load the keys for it — and this is also documented on the CRI-O page, cri-o.io. If I'm talking too fast and you can't quite see it: the CRI-O main page, the Ubuntu install section, has all of this information in it. So I've added a repository, and I added the key. Now my second repository, for libcontainers — and I had an issue with a backspace in my example, so I'm going to create it again. Same thing: the openSUSE repositories, devel kubic libcontainers stable, for whatever my OS is, which ends up being xUbuntu_18.04, and I have a key for that repository as well. So this shows you a history of what I've done so far, so you can see it all together.
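The two repositories and their keys can be sketched as follows (run as root; paths follow what cri-o.io documented for Ubuntu at the time — verify against the current page, since they do move):

```shell
# Variables matching the demo: Ubuntu 18.04 and the CRI-O 1.20 stream
export OS=xUbuntu_18.04
export VERSION=1.20

# Repository for CRI-O itself
echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /" \
  > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list

# Repository for the libcontainers tooling it depends on
echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /" \
  > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list

# Import the signing key for each repository
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/Release.key | apt-key add -
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | apt-key add -
```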
There's one typo in there, but otherwise I've just updated the system and made sure that I can get the CRI-O software as it becomes available. Now that I've done that, I need to let apt know that there's a new repository, so it should be pulling, and I should see that it's successfully pulling from the cri-o and libcontainers repositories. That's a way of double-checking you didn't typo, like I did in the previous example. Now that it appears to have worked, let's install the packages. We're going to install cri-o and cri-o-runc — there's a little bit of a disconnect between runc versions, so you could use the Ubuntu one, but it's not always perfect, so I want the CRI-O build of that software. It should be installed here pretty quick. And we want to make sure it's actually running, so I'm going to do a systemctl daemon-reload, make sure that crio is enabled, then start it and take a look at it, and hopefully, if my luck holds, when I look at its status it will say it is active and running. You can look through to see if there's anything odd; you might see some errors with validating such-and-such — at this point that's not a big problem, it's a warning rather than an error. So at this point things are looking good, and I can continue to the next step. Those steps I just did — if I look at my history again, from step 2 to step 18 — are to get CRI-O running. All those steps could be replaced with apt-get install docker.io. That's just to give you an understanding: if you chose the Docker route, you could replace all of that with Docker; in this case the harder — not harder, but more steps — route was getting CRI-O running. Now we're back to both: no matter what your engine is, this is the process.
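The install-and-verify sequence just narrated looks roughly like this (run as root, after the repositories above are in place):

```shell
# Pull the new repository metadata, then install CRI-O and its matched runc
apt-get update
apt-get install -y cri-o cri-o-runc

# Register, enable, and start the crio service, then confirm it's healthy
systemctl daemon-reload
systemctl enable crio
systemctl start crio
systemctl status crio --no-pager   # expect "active (running)"
```

As Tim notes, the whole CRI-O section could be replaced with a single `apt-get install docker.io` if you chose the Docker route instead.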
We need to add the repository to get access to the Kubernetes software now. So I'm going to add another sources list file: it's a Debian package line with apt.kubernetes.io, kubernetes-xenial — still, it's a little bit behind there — and then main. That's just the syntax for that repository, and we have another key that we want to make sure is in our environment. So we're going to curl that key from packages.cloud.google.com and pipe it to apt-key add, which says OK, that's good, and then we do another apt-get update. At this point we should be able to get access to our Kubernetes software, so let's go ahead and install it. So we use apt-get install, now that the repositories work, and we want to install three different packages: kubeadm, kubelet, and kubectl. The versioning depends on what you want to use; in this case, at the end of the package names, I've put a particular version. If you leave that off, you'll get the newest version — at the moment that's 1.21.2, unless .3 has dropped. Updates happen: major updates every three months, minor updates every seven to ten days, so just be aware that the one constant is change. In this case I'd like to know exactly what the version is, to match the exam at the moment. That's just something to be aware of: with so much change, if you're not paying attention and install a different version, there might be differences in the API, there might be subtle differences in commands, and then when you're in the exam environment you're like, whoa, what's this?
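A sketch of the repository and pinned install (run as root; the pinned version here is an illustrative example — match it to whatever the exam currently uses):

```shell
# Add the Kubernetes apt repository and its signing key
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" \
  > /etc/apt/sources.list.d/kubernetes.list
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

# Install the three tools at an explicit version; omitting the suffix
# would pull the newest release instead
apt-get update
apt-get install -y kubeadm=1.20.1-00 kubelet=1.20.1-00 kubectl=1.20.1-00
```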
This isn't working the way I expected. So you always want to check — just to call it out here — go to cncf.io, certification, CKA, scroll down, and use the curriculum overview and the handbook to verify the version. You'll also get that verification when you sign up for the exam. Make sure that whatever you're using matches it. Now, in this case I might use, let's say, 1.20, because I want to practice upgrading my cluster: I'll install one version previous and then upgrade, and that way I get to practice that as well and see what a full version upgrade looks like. So I'm going to go ahead and hit enter and install this software. Now, because I might be in an active environment where other people are installing software and doing stuff, I don't want to accidentally get into a mismatch where I've initialized a cluster with a particular version and then end up with something different a day later when somebody does an upgrade. So I'm going to go ahead and hold kubeadm, kubelet, and kubectl. That way I know where it is: somebody has to un-hold them before they're able to update them — otherwise, who knows what they're updating; it can happen that somebody just runs the command and you get an interesting end result. So this way we're locked at this version until we go out of our way to change it. Now I would choose which network plugin I want to use and start taking a look at it. I should know what my network plugin is before I initialize my cluster. You can change just about anything later; it just might be very difficult to do. So what you would do then — in this case I'm getting it from Project Calico. I chose Calico for its features; it's fairly straightforward to use, and I think it's a good choice. Also, the exam environment has some options that use Calico. And we have a YAML file.
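The hold step is a one-liner, with a deliberate un-hold required before any later upgrade:

```shell
# Freeze the installed versions so routine apt upgrades can't change them
apt-mark hold kubeadm kubelet kubectl

# Later, an admin must explicitly release the hold before upgrading:
# apt-mark unhold kubeadm kubelet kubectl
```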
Now if we take a quick look at that YAML file — we'll use it after our cluster is initialized — one of the big problems people have when they set up their own lab environments is the IP ranges. So if we go through here, we see there are lots and lots of settings, but let's search for the IPv4 pool. Okay, so we see that the default pool is 192.168.0.0/16. So if your VMs, like in VirtualBox, are also on 192.168, you're going to have lots of problems, because routing won't work. So the easiest thing to do, I would suggest, is to change your VM network to something that's not 192.168. This is probably the most common issue: people don't understand what's happening, they chose an easy range, 192.168, and then there's contention and weird stuff happens. So the easiest fix, I would say, is to choose a different network range for your VMs. Okay, so networking is one of the big issues. At this point we would find out our primary name. So: hostname -i, for example — the IP address of my VM is a 10.128 address, so there's no contention there between them. And now I can use it — but if I eventually want to do high availability, I don't want my initialization to be tied to the IP. I'd like to use a name; it gives me a little more flexibility instead. So I'm going to take that IP and add an alias for it: I'm just going to copy it, edit my /etc/hosts file, and insert it, and we'll give it the very, very original name of k8scp — Kubernetes control plane — but call it whatever you want. The advantage to this is: if I initialize my cluster tied to k8scp, then if I later put an exterior load balancer in front of it to go multi-master, the certificates will still line up. That's just one of those forward-looking things.
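The alias step can be sketched as follows (run as root on the control plane; "k8scp" matches the demo, and the 10.128.0.2 address is only an example — substitute whatever `hostname -i` reports):

```shell
# Find this node's primary IP (GCE gives a 10.x address here)
hostname -i

# Pin a stable name to that IP so kubeadm init can reference the alias
# instead of a raw address (eases a later move to multi-master)
echo "10.128.0.2 k8scp" >> /etc/hosts   # example IP; use your own
```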
Now, there is a config file that you can use. If you choose Docker it looks like one thing; if you're choosing CRI-O you might need something else. So I have one: find $HOME -name kubeadm-config.yaml. Oh, it's not in my home directory — I'm root; I want the student's, the non-root home. Okay, so let's cat that file real quick so you can see what it looks like. Inside of this kubeadm ClusterConfiguration there's a particular version; k8scp, the alias I used, on port 6443; and then this podSubnet matches Calico — they match each other, so they'll be set up the same way. If you're using CRI-O, which is what we are, then there is a much bigger configuration file, so let's do a find for that instead; it's called kubeadm-crio.yaml. And let's cat that file. You can search for a CRI-O config file yourself, but what you end up with is all the settings CRI-O needs to know: what's my node registration, what's the name of it (k8scp), any other parameters, certificate directories, IP ranges, the version of Kubernetes, DNS information — again 192.168 — and other parameters having to do with connectivity and TLS, cluster DNS, and settings.
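The smaller kubeadm-config.yaml he cats first might look roughly like this (a sketch; the version, alias, and subnet mirror the demo, and the podSubnet must agree with Calico's IPv4 pool):

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: 1.20.1            # match the packages you held
controlPlaneEndpoint: "k8scp:6443"   # the /etc/hosts alias, not a raw IP
networking:
  podSubnet: 192.168.0.0/16          # must match Calico's default pool
```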
I use this just so you can see all of the things that you could set when you stand up your particular cluster. So I'm going to go ahead and copy that file to the current directory. And now the kubeadm init: I tell it that the config to use is the kubeadm-crio.yaml with all those parameters — and again, you don't necessarily have to pass all of them, but I wanted you to see that any one of these could change and might be necessary in your environment. Upload-certs is there to provide for later use by other control planes, and I want to keep track of this output, so this is going to be tee'd to a cp.out file, for example, because there's a join statement in there that might scroll by and I might not see it. So, if I didn't typo along the way, let's see if this works. Okay, so it says it's using the version I expected, it's doing some checks, and it's actually pulling down the images for those containers I mentioned during the architecture review — the API server, scheduler, etcd, controller manager are all being pulled down — and it's going to start them. Hopefully I didn't typo something in my example to you guys; when you're doing stuff live, that always happens, something goes sideways. But so far, so good — this is usually where, if I've typoed that file, it has a problem, because now it's trying to use those various containers. The kubelet — we talked about the kubelet as a systemctl service — the kubelet is actually starting those containers for you. In this case I got lucky this time, and lo and behold it says: hey, it worked; it initialized successfully, and here's what to do to start using your cluster.
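The init invocation just run can be sketched as (root on the control plane; the file and output names follow the demo):

```shell
# Initialize the control plane from the CRI-O-aware config file.
# --upload-certs stashes certificate material so additional control
# planes can join later; tee keeps the join command from scrolling away.
kubeadm init --config=kubeadm-crio.yaml --upload-certs | tee cp.out
```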
You need to run this bit of setup, because your kubectl command doesn't know where to go; you have to tell it. So you could give yourself full admin capability — I'm going to do an exit statement back to the non-root user, and I'm going to copy and paste what it tells me to do: copy this config over to your local directory so that you can actually use it. You can also do the export it mentions, which works, but this way it's persistent. And when I run a command — kubectl get node — it says NotReady, control-plane. Remember I said we used some other names before; we're shifting away from those to "control-plane." It's 42 seconds old, so the containers are still starting, stuff is still happening, and we see a join command. But it also tells us right here: hey, don't forget to start your network plugin. So if you have Weave, kube-router, Romana, flannel — there are options — we chose Calico. kubectl, create or apply — let's go with what it tells me to do: -f calico.yaml. Oh — whoops, I did a sudo cp into root's directory; I downloaded it over there. Okay, now I can do it; I forgot to move the file over, that's all. Created, and it's now working. So the network plugin that allows me to talk to any worker and get where I'm going is now installed. kubectl get pod --all-namespaces, and some of them are Pending, some of them are Running. And now that I'm this far, let me go ahead and join. So I'm on my worker here. I'd want to run through all of those same steps that I did on the control plane. Now, we're getting close to time, so I don't actually want to walk through each one, but you would go through the very same steps to get your system updated. Let me show you where to stop.
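The control-plane client setup and CNI install just shown boil down to the following (as the regular non-root user; the copy commands are the ones kubeadm itself prints):

```shell
# Point kubectl at the new cluster, persistently
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# The node reports NotReady until a network plugin is installed
kubectl get node
kubectl apply -f calico.yaml          # the manifest downloaded earlier
kubectl get pod --all-namespaces      # watch calico and coredns come up
```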
So what you would do on your worker is all of the same steps: you get CRI-O installed, you make sure it's running, you get the Kubernetes software, and you edit the /etc/hosts information. So up to about step 25 would be done on the worker — but you're not going to initialize it; you just get it ready. And when you're ready — the worker is at that point, CRI-O's running, the software is there, you've installed your kubeadm, kubelet, and kubectl tools — then you use this join statement, kubeadm join, and it has a generated token and a hash, and the worker will then join the control plane and you'll have two nodes. You could keep joining workers. Be aware there's a time limit on that: if you go tomorrow and try to add a worker, you have to regenerate the token — the hash actually stays the same, but the token changes; I believe 24 hours is the default. But now that it's been a bit, let's see what happens: kubectl get node now, and it says Ready. kubectl get pod --all-namespaces, and everybody's running. So this is a good place to be to join the worker. I think we're basically out of time, but — how are we doing? What's the plan at this point? Yeah, we can join the worker. Okay, okay, so we're going to join the worker. So, just to make my life a little easier, let's do it this way: I'll type history. So again, let's join the worker.
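The worker-side join, plus the token refresh Tim mentions, might look like this (root on the worker; the token and hash are placeholders — use the real values captured in cp.out):

```shell
# Join this node to the control plane via the alias on port 6443
kubeadm join k8scp:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>

# Tokens expire (24h by default). To mint a fresh, complete join
# command later, run this on the control plane:
kubeadm token create --print-join-command
```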
So, sudo -i again, and root's history. And since I have it here, what I can do is actually — sort of like this — I can do the overlay. Okay, so now this is the cri.conf information, so you guys can see some of these parameters a second time. Let me go back and just make sure I'm using the right information. Okay, there it is, so that's my cri.conf sysctl information, and then the version: VERSION is 1.20. I don't see OS listed there — that's interesting; that didn't get saved in my history for some reason — so let's go ahead and make sure we have that. So OS is set, VERSION is set, and then let's start adding the stuff. So we added that repository, and we added the key for it — there's my typo — then we add the other, libcontainers, repository. It's interesting that the history isn't necessarily showing everything; this is always fun. So let's add the key for libcontainers. Okay, that's there, and so we have the various bits of information: I've done two keys, and let's test that I didn't typo something. apt-get update, and I see that cri-o is listed and libcontainers is listed, and it appears to actually be working. Then install cri-o and cri-o-runc — that looks to have worked. Maybe — maybe I spoke too soon. I have no idea why it's going slow. It's just my luck,
I guess. So basically we're just catching up here. It knew that I was in a hurry, is what's going on. Yeah. No, but that was really interesting, because CRI-O is by far the runtime that takes the most steps compared with the other ones out there, whether it's containerd or Docker. So I think that was a very neat and very good demo based on CRI-O. So basically, whoever is watching, you now have the complete installation steps. You just need two compute nodes — get them from anywhere — and you can then install all the stuff required to set up Kubernetes, starting from your Ubuntu image, because that again resonates with the exam: go with Ubuntu 18.04 for now. Then you install all the components: kubeadm, then CRI-O, then apply the Calico YAML file, and also have your kubeadm config YAML file, and hold your kubeadm, kubectl, and kubelet so that it prevents automatic upgrades. So anybody who has to update should un-hold first and then do the upgrade.
That is very important. And yeah, then you just do a kubeadm init, which is the magic command, and that will set up the cluster and give you some commands to run on the control plane. Then you have to set up the networking, because the networking components, again, are separate from kubeadm, so you can choose as per your preference. And then you will have your join token — actually a full join command, not only the token — that you can directly run on the worker nodes. And on the worker nodes, everything up through installing kubeadm is also required, because you have to run the kubeadm join command, so you need all those things already set up on the worker nodes as well. And that's what Tim has been doing: setting that up and putting the packages on hold, and now he will just be running the kubeadm join command. Absolutely, absolutely. So you can see what I've done on the worker node. Again, the same thing: I made sure the CRI networking settings were set up, I set a VERSION and an OS — I'm not sure why the second export never shows up in my history — made sure I got to the openSUSE repositories, one for cri-o, one for libcontainers, made sure I installed cri-o and cri-o-runc from them, enabled and started it, added access to the Kubernetes software itself along with the key, updated /etc/hosts, installed the software — I forgot to apt-get update before, then I installed kubeadm, kubelet, and kubectl of the same version, and I made sure to hold them as well. So now my worker is ready for that join command. Now, if I didn't see it go by, let's go find it again: if I grep -A4 for the word "join" out of my cp.out file, I'll find it, with what I need right there. So that's the join command. Let's see if I did my worker correctly. So I ran the join, and it's trying to connect to the control plane, and it says: hey, it worked.
Let's just see: kubectl get nodes — and I'm not the student user, so sudo kubectl get nodes — and there it is, it's worker. It's NotReady yet, but it's on the way. So I'm going to zoom in for this: kubectl get pod --all-namespaces, and we see that Calico — that's the network plugin — is just getting loaded onto the worker node so that it can start communicating and handling the network. The other various things, like CoreDNS, and kube-proxy — remember, we talked about kube-proxy — and so forth, are running, and I'm guessing that just after a second here, I'll try again: everybody's Running. I have a cluster, and kubectl get node — they both show Ready. Great. Questions? Anything coming my way from that? Awesome — yeah, there have been a couple of questions, but we were going with the full flow, so I didn't want to break that. And amazing — like I said before, the best architecture explanation, and now this was the best demo of a setup as well. And I think people are agreeing; people have loved the demo, and we have a comment like "this is how a demo should be," so that's really good. So a couple of questions are there, and I think we can take this one: do we need knowledge of flannel and all those tools? I can read a couple more. Yeah — if you go into, which you may have already done — so let me share the screen. For the exam topics, if you look, it says you have to have some basic knowledge of the networking. Of course, what does "basic" mean? The idea is that you should be able to point at what you need and say, do I want flannel? Can I identify the differences between them? So, are you seeing my browser now? Yep. Okay, awesome. So if I go into the curriculum overview and find the CKA curriculum PDF — it gets generated here — and I scroll down, these are all the bullet points.
You probably went through this before, but let me just make some points again. Take every single bullet point here, put it into a word processor, and underneath it write the commands to do it. Anywhere you see the word "understand," that's not what you think: "understand" means create, integrate, troubleshoot, delete, repair. So anywhere you see the word "understand," it's much bigger than being able to recognize it on a multiple-choice test. They don't actually articulate the cluster details here, but if you go into the candidate handbook — which I suggest you read; that's one of the two documents I suggest you read — it does mention, if you look at the clusters, that you have several clusters available to you: one or two of them running Calico, the other ones running flannel. Flannel runs everywhere but doesn't have a lot of features. So I can't answer specific questions about what you need to know, but you'll notice in this list that Calico isn't called out particularly and flannel isn't called out particularly — but connectivity, host name, and network configuration are.

Send me your name and I can send you the 50-percent-off certification vouchers. So please, if you are watching the stream any time later as well, make sure you send me your name, and that would be awesome. And yeah, thank you, Tim, for your time. It was a super awesome stream, to be honest, and people who watch it later will love this. This will be up on Twitch and will later be uploaded to the CNCF Cloud Native TV playlist as well. I'm really excited and very happy that we did this particular show. And follow the Cloud Native TV channel, because there are awesome shows lined up for the complete week, day after day, so make sure you do that. This is a bi-weekly show, on Thursdays at 8:30 p.m.
And we'll try to get Tim again some time — let's see about his schedule — or some other folks, or it'll be myself alone, and we'll be continuing some of the learnings and some of the cool demos like we did today. So, thank you, Tim — anything you would like to say? No, thanks very much. I encourage everybody to practice this stuff, and again: get your YAML, keep practicing. Remember, you only have two hours, so practice with a two-hour mindset. Awesome. Thank you so much, everyone. Take care. Bye-bye.