Hi guys, my name is Trevor Sullivan, and welcome to another CBT Nuggets skill where we continue learning about Kubernetes. In this particular skill, we're going to dive into networking so that you can understand how networking works on the nodes that run your cluster, the master and worker nodes, as well as how pods get IP addresses and how network traffic is distributed across the pods deployed into your Kubernetes cluster. I've put together a little diagram here that will help us understand how these pieces fit together. I'm going to be using the Amazon Web Services cloud platform, in particular the Amazon Elastic Kubernetes Service (EKS), to demonstrate these concepts. However, these really are general networking concepts, and they should apply to pretty much any network environment you're working in. So for starters, we've got a cluster, and the cluster, as we just mentioned, is made up of master nodes as well as worker nodes. The master nodes run the controller manager, the scheduler component that's responsible for placing pods onto worker nodes, and so on. In most cloud environments like EKS, Google Kubernetes Engine (GKE), or the Microsoft Azure Kubernetes Service (AKS), the vendor abstracts those master nodes away from you. You don't have direct access to them, you don't need to SSH into them, install software on them, patch them, or monitor them. They just get spun up by the managed service, and then you join your own worker nodes to the master nodes that are running your Kubernetes cluster. In order for all of these worker nodes to communicate with each other, and with the master nodes so they can receive instructions from them, the worker nodes also need IP addresses on the network they belong to. In Amazon Web Services, that software-defined network in the cloud is known as a VPC, or virtual private cloud. The VPC has a CIDR block, which is basically the IP allocation for that particular virtual network, and the worker nodes get IP addresses from that CIDR block. So in order to manage the worker nodes themselves, if you want to do any troubleshooting or monitoring or anything like that, each node needs a network interface with an IP address allocated to it, and that's how the node communicates with other devices on the network. However, your pods are also going to need IP addresses. When you as an administrator deploy a pod out to your cluster here, that pod needs to be able to communicate with other resources on the network, and the way that happens is through what's known as a container network interface. So when you create a pod, that pod actually gets an IP address on the virtual network. If we take this little network interface right here and allocate it to the pod, that pod would then have the IP address 10.5.5.187, and it would be able to communicate with other devices on that same network.
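If you want to see this on your own cluster, here's a minimal sketch, assuming kubectl is already pointed at a working cluster that has some pods scheduled; the wide output columns show the VPC addresses we're talking about.

kubectl get nodes -o wide   # INTERNAL-IP column shows each worker node's VPC address
kubectl get pods -o wide    # IP column shows the address the CNI allocated to each pod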
And that pod is also able to get out to the internet through something known as an internet gateway. If that pod needs to communicate with any external services out on the internet, then any of these devices on the network can go through this internet gateway and reach resources that are on the internet. That's how things work from a pod networking perspective. The Kubernetes cluster has a container network interface known as the Amazon VPC CNI, and that's what allows your pods to get IP addresses on the network. Now, the Kubernetes cluster also has something known as a cluster network, and the cluster network is essentially a CIDR block of IP addresses that pods can obtain addresses from, instead of getting them directly on the VPC or software-defined network through the AWS VPC CNI. In a standard Kubernetes cluster, if you were to just spin up your own cluster using kubeadm, for example, your pods would actually get IP addresses on this cluster network instead. That's basically a network internal to the Kubernetes cluster that doles out IP addresses to the containers being spun up by these pods. Now, if you have a service made up of many different pods, so maybe your service has more than one pod, then what you're going to do is create what's known as a service controller. The service controller itself gets an IP address off of this cluster network here. So if we were to create a service controller, it would get this internal IP address on the cluster network, and then the service controller would point to the pods that have their own network interfaces over here on the Amazon VPC. What's nice about the service controller is that it acts as a sort of load balancer: it lets you load balance traffic across multiple pods. If I take this little box here and put it behind these pods, the service controller allows me to point to more than one pod. That's a really useful concept, because if something were to happen to one of these pods, say the application running inside a container crashes, then the service controller can route traffic to the remaining healthy pods that are members of that service controller. So the service controller exposes itself on the cluster network, but it points to pods that are getting IP addresses on your software-defined network in the Amazon VPC. That presents us with a new challenge, which is how we get the cluster services exposed to the outside. We have a couple of options for that, and one of them is to use something known as a cloud load balancer. The service controller itself load balances across multiple pods, but that's only internal traffic to the cluster. If we actually want to load balance traffic and expose it externally, then we create a cloud load balancer and put it in front of the service controller. I'm going to bring that load balancer right down over here, and it's going to point to our service controller. Then, if any of the pods behind the service controller were to die, the load balancer can still route traffic to the healthy pods that are up and running inside that service controller.
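To make the service controller idea concrete, here's a minimal sketch of what that object looks like if you define it as a manifest instead of with individual kubectl commands. The service name and the name=my-app label are placeholders I've picked to match the labels used later in the walkthrough, not values from the diagram.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: my-app-service     # placeholder name
spec:
  type: ClusterIP          # gets an internal IP on the cluster network only
  selector:
    name: my-app           # traffic is spread across all pods carrying this label
  ports:
  - port: 80               # port the service listens on
    targetPort: 80         # port the containers inside the pods listen on
EOF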
So the service controller can either expose itself on the cluster network, or it can expose itself through a load balancer that is provisioned into your AWS account, or whatever cloud platform you're using. It's basically a separate cloud resource that gets provisioned and then configured to forward traffic from the internet over to your service controller. That's how things work from the service controller perspective. Now, in addition to exposing a service controller directly on the cluster network internally, and exposing it through an external cloud load balancer, there's one other technique you can use to expose a service. So let's get rid of our load balancer here for now. The other option you have for exposing this service into your environment is something called a node port exposure. What that does is take a port on all of your worker nodes here and forward traffic to the pods that are backing that service controller. Say, for example, that we have an application like nginx, basically just a web server, running inside these pods on port 80. Let me just get a copy of this text box here, and we'll say port 80. So on both of these pods we have nginx listening on port 80, and if we wanted to expose that to our network environment, what we can do on the service controller is create something called a node port configuration. That will take traffic from a particular port on all of the worker nodes in our cluster and forward it to port 80 running inside these pods. So for example, on our worker nodes over here, maybe we want to map port, I don't know, 34500 to port 80. We'll take that port and expose it through the service controller on all of the network interfaces that our nodes have, and then we could hit any one of the nodes in our cluster on port 34500, and that would forward the traffic over to port 80 running in these pods. So those are the three options you really have. You can expose on the cluster IP internally, which is good for any services that don't need to be exposed outside of the cluster. That could hold true for something like a database service: if you're running a database server inside a pod, and only a limited number of services need access to that database, you could run that database as a service and expose it on the cluster network, but not expose it through a load balancer or through a node port. Whereas if you're running an application that does need to be accessed by external users, then you'll probably want to use either the node port or an external load balancer to expose that service to your users, depending on where they're coming in from. If they're coming in directly across the internet, you'll almost certainly want to use the load balancer. But if you have other internal services that are running side by side with your Kubernetes cluster on the same software-defined network in the cloud, then you might be able to get away with using a node port exposure. So that's how things work at a conceptual level, anyways. We've got our pods getting network interfaces right here on our VPC, and we've got the options to expose our service through node ports, load balancers, and on the cluster network up here.
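As a rough sketch, a NodePort version of that same service looks like the manifest below. One caveat worth hedging on: the 34500 from the diagram sits outside the default NodePort range of 30000-32767, so unless your cluster's service node port range has been extended, you'd pick a value inside that range (30080 here is just an example).

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport     # placeholder name
spec:
  type: NodePort
  selector:
    name: my-app           # same label selector as before
  ports:
  - port: 80               # service port on the cluster network
    targetPort: 80         # nginx inside the pods
    nodePort: 30080        # port opened on every worker node's network interface
EOF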
And then basically, we can explore how these actually work in practice by creating some pods, creating a service, and then exposing that service using some of these different techniques. I hope this has been informative for you, and I'd like to thank you for viewing. Hey guys, and welcome back. Let's jump into some practical examples of how these different service exposures work on a Kubernetes cluster. The first thing we're going to want to do is actually create a cluster. I'm going to be using Amazon Web Services, but if you want to use another cloud vendor, feel free to follow along. So for starters, I'm just going to fire up a browser here. There is a great utility you can use if you're on Amazon Web Services called eksctl. It's a really nice command line utility that makes it easy to spin up an EKS cluster with a simple command. So go ahead and get this utility installed on your system, and make sure you've got your AWS credentials configured so that you have access to deploy the EKS cluster into your environment. I've got eksctl already installed on my system here, so I'm just going to run eksctl really quick, and we'll run a create cluster command. So we'll do create cluster, and then I would generally recommend using the spot parameter if you want to save money on the hourly cost for the nodes that are joined to your cluster; it's just a less expensive way to obtain compute capacity. Then I need to specify my credentials profile name. I have a credentials profile called CBT in my AWS credentials file, so I'll go ahead and use that. If you were to hit enter, that would go ahead and deploy the cluster. It does take maybe 30 to 45 minutes to deploy, so I've actually already provisioned one. If I do eksctl list cluster, or I think it's get cluster, actually, we should be able to see my list of clusters. Of course, I need to plug in my credentials profile here, and you can see I've got a cluster here called floral painting. That's the cluster I'm going to be using in the remainder of this skill. So let's run kubectl version just to make sure that we're able to connect to our environment here, and it looks like that's up and running just fine, because I've got my server version right here. Now what we want to do is spin up a couple of pods. So I'll run kubectl and create a namespace to work inside of for now; I'll call it networking. Then I'm going to use kubens, sorry, to change into the networking namespace, and that way any pods or service controllers that I create will go into that namespace. You can always run kubectl get ns as well to list all of the namespaces on your cluster here. Of course, the only one that I've personally defined is this networking namespace that I just created. So let's spin up a pod running just a simple nginx web server for starters. We'll do kubectl, and then run, and we'll say image equals nginx. Of course we want to make sure it's exposed on port 80, so we'll do port 80, and we need to give it a name, so let's do nginx1. Let's see if that works. Alright, so we've spun up a new pod here with nginx, and we'll just monitor it to make sure it comes up and runs successfully.
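Pulled out of the narration above, the commands look roughly like this. The --spot and --profile flags and the cbt profile name reflect what I'm doing in this demo; adjust them for your own account, and note that kubens is a separate helper utility you'd install alongside kubectl.

# Provision the cluster (takes roughly 30-45 minutes) and confirm connectivity
eksctl create cluster --spot --profile cbt
eksctl get cluster --profile cbt        # list existing clusters for this profile
kubectl version                         # confirms we can reach the API server

# Create a working namespace and a first nginx pod
kubectl create namespace networking
kubens networking                       # switch the current context into that namespace
kubectl run nginx1 --image=nginx --port=80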
So we'll do kubectl describe pod nginx1, and in a moment here you should see in the events that it starts up after it downloads the container image from Docker Hub. I think my cluster already had that cached. So now we've got our nginx pod exposed on port 80 here, and if we take a look at the output from the describe command, you can see the pod does actually have an IP address on my Amazon VPC software-defined network in the cloud. The reason the pod was able to get an IP address on the VPC directly, rather than getting a cluster IP, is that the cluster uses the Amazon VPC container network interface, or CNI, and that is what allows the Kubernetes cluster to allocate an IP address from the VPC directly to the pod. So now what we can do is spin up a service controller that points to this particular pod. At the moment, I don't really have any custom labels assigned to that pod that I could use as selectors; it automatically got this run label here. So I'm just going to set a label on that pod so that when we create a service, we have a label we can point to. Let me run kubectl set, and then we're going to set a label. Actually, I want the label command, kubectl label, and I'm going to label the pod named nginx1. We need to give it a label that we want to apply. Let's take a look at the help for this just to better understand the syntax. So what we can do is just provide a label name and value. We'll give it a name like name equals my app, and so any pods that have the label name set to my app are going to be targeted by our service controller. Let's spin up a second pod as well, just so that we have two different pods to load balance our traffic across. I'm going to hit up arrow a few times, and then we'll create another pod called nginx2 using the same container image, and we'll also expose it on port 80. So let's do kubectl describe on nginx2. And so now we should have two different pods running. Whoops, let me do pod slash nginx2 there. That pod should have its own separate IP address, and sure enough it does; it's listening on port 80 and it's up and running. So let's actually create the service now, and I'm going to use the kubectl command line utility to create that. Let's search for the kubectl docs. There is a really good reference document out here that talks about all of the different kubectl commands, and in this case what we're going to want to do is create a service using a node port. If we take a look at the create subcommand here and then go down to service node port, this shows you the syntax to create a node port based service. According to this page, you don't actually do kubectl create service, you just do kubectl create nodeport, and that will create a service resource of the type NodePort. The options we have here are to create a port mapping that specifies the port we want to expose, and then the target port is the port of the application that's actually running inside our pods, which in this case is going to be TCP port 80, because that's what nginx is listening on. So let's try kubectl create nodeport, and of course we need to give the service itself a name as well. I'm going to head back over to my terminal here. We'll do kubectl create nodeport, and then we'll give it a name; let's call it my-nginx-app.
And then I need to give it a TCP port mapping. So I'll do --tcp equals, and then we'll pick a random port on the outside, let's say 35321, and then we'll do a colon and plug in port 80, because that's where nginx is listening for both of our pods. We also need to make sure that we're able to select those pods using a selector, and we'll do that in a second step. So let's go ahead and hit enter to run this command. And it looks like it doesn't like it for some reason, so let's take a look at the options here and see what's going on. I'm going to switch over to the terminal here and just tack on --help. So let me do create nodeport --help, and let's take a look at the options we have here. And I don't see TCP, so that might be an old version of the documentation. It looks like the Kubernetes documentation for kubectl is actually a bit outdated, because when I run kubectl create in my current version of the kubectl command line utility, you can see that we don't have nodeport here. So they must have changed the utility to use create service instead, and create nodeport must be for an older version of the kubectl command line utility. So let's do kubectl create service, and it looks like we have nodeport as a child context of service, which actually makes a lot of sense, because when you create a service, you have these different options like ClusterIP, LoadBalancer, and NodePort. So let's do create service nodeport. It's asking for a name, so let's call it my-nginx-app, and then we'll do --help. And there, sure enough, now we have access to that TCP option. So let's do --tcp equals 35321 and map that to port 80. Alright, so now we've got the service itself created. Let's do a kubectl describe service my-nginx-app and see what that resource looks like on our cluster. Sure enough, you can see that it is up and running here, and it did get an IP address on the cluster network. If you remember from the diagram that we were looking at previously, that is going to be on this cluster network right up here, and that's only internal to the cluster. So the service controller itself does not get an IP address on the VPC directly. What we're doing in this case is a node port exposure, so we're going to take a port on each of the worker nodes, and the service controller will route that traffic over to the pods. It's a little bit roundabout: if you have a client coming into one of these nodes, the node is going to forward the traffic over to the service controller on that internal cluster network, and then the service controller is going to forward the traffic over to one of the pods. And the pods, remember, do have network interfaces on the VPC directly. So it seems a little roundabout, but that's how you can handle load balancing your pods in this manner with the service controller. Let's come back over to our terminal here, and as you can see, port 35321 is being forwarded to port 80. So now what we need to do is update the selector on this service. We've created the service, but the default selector is just using the same name as the service controller itself. What we need to do is change this selector to point to the label that we applied to our pods, and if you recall, we assigned a label called name equals my app to pod number one.
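For reference, here's roughly what that sequence looks like once you land on the right subcommand. The service name and port are the ones from this demo, and the selector fix is covered in the next step.

kubectl create service nodeport my-nginx-app --tcp=35321:80   # service port 35321, container port 80
kubectl describe service my-nginx-app                         # shows the cluster IP, the port mapping, and the auto-assigned NodePort
# The default selector points at app=my-nginx-app, so it still needs to be changed to name=my-app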
And I think we actually forgot to do that on pod number two, so I'm just going to run that again to make sure that both of our pods have the correct label here. So what we need to do is edit our service controller to have name equals my app as the selector, rather than having an app key equal to my-nginx-app. What we can do is kubectl set, and then we can do set selector. Let's do set selector and take a look at the help for that. What we're going to want to do is set the selector on the service, so we have to plug in the service name, and then we plug in our key value pairs afterward. That's how we assign them to the service: we'll do set selector on service my-nginx-app. If we run that, it's going to ask us for the resource. Actually, we were already providing the resource here, so what we need to do is just go ahead and plug in the name equals... what was it? Let's go find that again. Oh, name equals my app. Let's plug that in, and now you can see we've added that selector. So let's do a describe on our service controller again, and now you can see that the selector has been updated to say name equals my app, which is the correct selector for our pods. At this point, we should be able to hit the service controller and validate that it is load balancing our traffic to our pods. You might think, okay, I want to hit this cluster IP of the service controller. However, there's a problem with that: we can't actually hit this internal cluster IP address, because we're not running inside a pod on that cluster right now. So in order to actually test this out, what we need to do is spin up a host inside the VPC that sits side by side with the worker nodes. And remember, the worker nodes themselves are going to be listening on that outside port, and they're responsible for forwarding the traffic over to the service controller's internal cluster IP address. We have no way to route to this CIDR block from outside. So what I'm going to do is head over to my browser and deploy a virtual machine running Linux side by side with the devices in my cluster. We'll do a launch new instance here and find Ubuntu Linux, and we'll choose a relatively small machine type, and we'll choose spot. We want to make sure that we're deploying into the correct VPC so it sits side by side with the worker nodes; I'm going to choose my floral painting VPC here. I also need to make sure that I choose a public subnet for it, so that we can SSH into that virtual machine remotely and test out our connectivity to our internal service. Let's choose a security group that has inbound access on port 22, so we can SSH into this VM, and then we'll go ahead and launch it. Then we'll SSH into it so we can test out that connectivity. Let's find its IP address over here in the EC2 console, and it looks like it's this one right here. We'll grab this IP address, fire up a new tab in our terminal, do SSH ubuntu at that IP, and hop onto it. At this point, what we're going to do from this machine is install a utility called HTTPie. I'm going to do sudo apt update and sudo apt install httpie --yes. This is just a handy utility for testing out HTTP connectivity to different services.
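Roughly, the test from that VM looks like the sketch below. The node IP is a placeholder for whichever worker node's private IPv4 address you grab from the console, and the port has to be the NodePort value that kubectl describe service reports, which is not necessarily the --tcp port you asked for, as we'll see in a moment.

sudo apt update && sudo apt install httpie --yes   # small CLI for making test HTTP requests
http 10.5.5.42:31605        # placeholder node IP and NodePort; expect the nginx welcome page back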
So what we're going to do is use this utility to hit the service on one of the worker nodes in our cluster. The first thing we need to do is find out which nodes are worker nodes, find the IP address of those worker nodes, pick one, and then hit our service on port 35321, which is what we exposed the service on. I'm going to head over to the management console here again, and as you can see, I've got two worker nodes that are joined to this floral painting cluster. I'm going to find the private IP address for one of those nodes. It looks like they actually have two different IPs assigned to each one here, and I'm just going to grab one of those private IPv4 addresses. Then we'll run http and hit that IP address, and we want to hit it on port 35321, which I believe is the port we exposed it on. Sure enough, 35321. So let's run that command. It should connect, although it looks like we're having some failures here, and that's most likely due to the security groups associated with my cluster nodes. What I need to do is find the cluster nodes, go to the security group for the cluster nodes, and make sure that I'm allowing inbound access from this particular node that I just spun up to test from. I've actually got a rule here that's allowing inbound access from anywhere, so I'm not quite sure why that's not working at the moment. Let's see what's going on here. And I can actually see that even though we chose port 35321, the actual node port that was assigned down here is 31605. So let's try that port instead. We'll do 31605, and as you can see, sure enough, that is giving us our traffic here. If we just hit up arrow and continue to run this command, we're basically going to be hitting both of the pods inside of our environment. So I could actually come over here and destroy one of those pods. Let's do kubectl delete pod nginx1, and that'll just tear down one of the pods that we created. Now if I come back over to this test virtual machine, you can see that the service controller is still successfully routing traffic to the second pod that's running the web application. So that's how we can route traffic using a node port to multiple pods with a service controller. I hope this has been informative for you, and I'd like to thank you for viewing. Hi guys, and welcome back. In the last video, we took a look at how to create a service controller that routes traffic to pods. We exposed that service controller on a node port, and then we were able to hit our nodes on that external port and have the service controller forward traffic to the pods. Now, another option that you have for the service controller is to simply create a service on an internal cluster IP address. If we refer back to our diagram right over here, you can see we've got this cluster internal network right up here, and instead of exposing the service through a node port, we can take this service controller and just give it an IP address on the cluster network. The only way that we'll then be able to access that service is from another application that is running internally to the Kubernetes cluster. So let's take a look at how that could work. For starters, I'm going to spin up a service, and I'm going to have that service forward over to our nginx pods here.
So we'll be using the same selector. However, the difference in this case is that rather than using the NodePort service type, we're just going to give it the ClusterIP type instead, and that will not expose it using the node ports that we had previously used. So instead, what we're going to have to do in order to access this service and test out connectivity to it is spin up another pod running some kind of interactive application. We'll bring up maybe a terminal pod here, maybe something like a PowerShell shell that we can use to hit the service internally. That pod will be able to communicate with the cluster internal network, and that's the only way this service will be exposed in this particular case. All right, so let's head back over to our terminal here, and I'm going to close down my little test VM over here. What we're going to do is spin up a pod, so we'll do kubectl create pod, or actually we'll do kubectl run, sorry, and we'll do image equals nginx, and we'll expose it on port 80. Of course, we need to give it a name as well; let's call this cluster-ip-pod. Then we need to create a service controller that's going to forward traffic to this pod, but we also need to give the pod a label. So let's run back to our label command here, and we'll fix our pod name to be cluster-ip-pod, and then let's call this label maybe just type equals cluster IP. When we create a service controller, we'll configure it to use this label of type equals cluster IP as its selector, so that it can forward traffic to the correct pods. So we'll do a kubectl get service, and I'm just going to kill off the service that we had before, just to keep things clean. I'm going to delete the my-nginx-app service; that's the one that was using the node port configuration over here. Of course, I need to prefix that with the resource type. Now we'll just verify that it's gone. All right, now we can create a new service. We'll do a kubectl create service, and in this case, instead of using the nodeport context, we're going to be using the clusterip context. Let's take a look at the help for that. Of course, we're going to need to give the service a name. As far as the input parameters go down here, the only one that we would really be interested in potentially would be the cluster IP here, because this allows you to statically specify an IP address, if you want to set it to one that you already know. Otherwise, Kubernetes is just going to assign a random IP address from the cluster CIDR block to that particular service. And then, of course, we'll be using the TCP port down here to forward traffic to our pod on the correct port. So I'm going to go ahead and just specify a name for it; let's call it cluster-ip-service. Then I'm going to specify the TCP parameter, and we'll say forward traffic from port 80 to port 80. So our service controller will basically be listening on port 80 on its internal cluster IP, and it will forward traffic to our pod's containers on port 80 as well. All right, so we'll create that, and then we also need to edit the selector for it. So we'll do kubectl describe service cluster-ip-service, because the default selector is probably going to be based on the name. Sure enough, it's just set to app equals cluster-ip-service, so we need to change that selector label to use the one that we assigned to our pod.
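Consolidated, the ClusterIP setup from this part of the walkthrough looks something like this. The type=cluster-ip label value is my hyphenated stand-in for the spoken "type equals cluster IP", since label values can't contain spaces, and the selector still gets corrected in the next step.

kubectl run cluster-ip-pod --image=nginx --port=80
kubectl label pod cluster-ip-pod type=cluster-ip                   # label the pod so a service can select it
kubectl delete service my-nginx-app                                # remove the earlier NodePort service
kubectl create service clusterip cluster-ip-service --tcp=80:80   # internal-only service, port 80 to port 80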
And so that's just type equals cluster IP. All right, so let's do kubectl, and we'll do set selector. Then we need to specify the name of the service, so we'll do service slash cluster-ip-service, and then we'll specify our type equals cluster IP label. All right, so now it's been updated, and we'll just do a describe on it to make sure that it has been updated correctly. Sure enough, you can see the selector is type equals cluster IP, so that matches our pod label. If we were to do kubectl describe pod cluster IP... what did we call it? What was the name of that? Let's just hit up arrow here a few times. Oh, it's cluster-ip-pod. We should see that label applied on it, and sure enough, there it is. All right, so that service controller should be able to route traffic at this point. What we want to do is hit this cluster IP address for the service controller on port 80, and then that service controller is going to forward to the configured endpoints, which are of course dynamically selected based on the selector. One of the things we'll have to do is spin up a pod that is on the cluster network, so that it has access to hit this service. So let's run a kubectl run, and then we'll do standard in and TTY so that we get an interactive shell, and I'm going to use the image mcr.microsoft.com/powershell so I have a PowerShell environment to work inside of. Then we'll give this pod a name, maybe something like pwsh. This should allow us to attach to that container, because we did standard in and TTY; when you use those options together, it gives you an interactive shell on whatever pod you're spinning up. All right, so for some reason kubectl always does this weird thing where it feeds a bunch of random text into your shell, but you can just ignore that. Now what I need to do is install a utility... actually, I don't really need to do that. I think I can just use the Invoke-WebRequest command here. Let's do PowerShell's Invoke-WebRequest, and then we'll do the URI parameter here. Our URI is just going to be http:// followed by the IP address of the service controller, and of course it'll be on port 80. So let's try that out, and sure enough, you can see that we are able to hit this service. Now, if I was to try to hit this cluster IP from outside the cluster, you'll see that we're not able to reach it. I'm going to go back to the other virtual machine that I spun up earlier; let me do an SSH session here. Remember that this virtual machine is sitting side by side with our worker nodes: it's on the same VPC as the worker nodes, but it's not a member of the cluster like the worker nodes are. So if I was to try to use that same URL and hit it from this virtual machine running Ubuntu Linux, you'll see that we're not able to route traffic, because that cluster IP address that the service controller has is sitting on that internal cluster network rather than being exposed on the Amazon VPC. And because we don't have a node port configured, we can't even hit the worker nodes on the outside and have them forward the traffic to the cluster network, like what we saw in the previous video, because we're not using the NodePort type. We're only exposing the service on the cluster internal network. So that's one other option that you have for exposing Kubernetes services that don't need external access.
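Here's a rough sketch of that test, with 10.100.23.14 standing in for whatever cluster IP your service actually received:

kubectl set selector service cluster-ip-service 'type=cluster-ip'      # point the service at our labeled pod
kubectl run pwsh --stdin --tty --image=mcr.microsoft.com/powershell    # interactive PowerShell pod inside the cluster
# From the PowerShell prompt inside that pod:
Invoke-WebRequest -Uri http://10.100.23.14:80    # succeeds from inside the cluster, fails from the outside VM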
I hope this has been informative for you, and I'd like to thank you for viewing. Hey guys, and welcome back. In the last video, we took our service controller with one or two pods and configured it to be exposed exclusively on the internal cluster network that gets configured when you deploy a Kubernetes cluster. Now what we want to do is take a look at the last option that we have for exposing services, and of course that's exposing services through what's known as a load balancer. The load balancer itself is actually going to belong to your VPC. Even though I don't really have enough space to show that in this particular diagram, the load balancer does belong to this VPC, and it'll basically sit side by side with the pod network interfaces. The service controller that we create of type LoadBalancer is going to take these network interfaces that the pods are connected to, and it's going to expose those as targets for the load balancer over here. Let me move the load balancer over to the right hand side here. So these network interfaces would get registered with this load balancer, and then anybody who wants to access the service that's running on these pods, thanks to this service controller, will have to route their traffic through the load balancer, and the load balancer will pick one of those targets behind it to route the traffic to. Now, for starters, before we get too far ahead, if we come over to the EC2 console here and drill down to load balancers under the load balancing section, you should see that we do not have any load balancers configured for this particular VPC. We do have a couple of load balancers here, but these belong to VPCs 0A0 and 096. If I take a look at my VPC console, the VPC ID of my floral painting cluster is actually 0D5. So the other load balancers belong to these other kind of orphaned clusters that I've used previously. However, my current VPC, 0D5, does not have any load balancers allocated to it. Basically, what's going to happen is that when we create the service controller, it's going to dynamically provision a load balancer on our VPC 0D5, and that will allow us to forward traffic through that load balancer in order to access the service running in our pods. So let's go ahead and do that now. I'm going to come back to my terminal here, close my little test VM over here, and exit out of this PowerShell test pod that we were using previously. Let's just do a quick kubectl get all to see what we currently have in our networking namespace. I think we should have an orphaned service here from the last video, and we do have a few pods up here. So I'm just going to do kubectl delete on the cluster-ip-pod, the nginx2 pod, and the PowerShell pod that we don't need any longer, as well as the cluster-ip-service service. That should bring us back to more or less a clean slate. I think I could do kubectl delete all as well; I think there's a shortcut to just clean out all of your resources. But now what we need to do is spin up a new pod for this use case. Let me just do a quick kubectl get all to make sure everything is cleared out here. All right, there's nothing there. So we'll do a kubectl run, and we'll use the nginx image again, we'll expose it on port 80, and we'll give it a name like pod nginx1.
And let's also give it a label. We'll say label equals... let's do type equals load balancer. And of course, the correct parameter name is actually labels, so let's fix that and put an s on there. And it looks like it doesn't like the name here; that's because I prefixed it with the resource type. So let's go ahead and spin that up, and then I'll spin up a second one, and maybe just for fun, we'll do a third one as well. So now what we're going to do is create another service, and we're of course going to use the selector of type equals load balancer. However, this time what we're going to do differently is create a service using the type of LoadBalancer. So let's do loadbalancer, and of course we need to give that service a name as well; let's call it my-lb-service. And we need to plug in our port forwarding rule as well: we'll do TCP equals, let's say port 36010, and we'll forward to port 80 on our pods. If we do a kubectl describe service, or svc for short, which you can use as an abbreviation, we can see that we do need to update our selector to use type equals load balancer. So let's do that really quick. We'll do kubectl set selector service slash my-lb-service, and we'll do type equals load balancer. Then we'll verify that really quickly and make sure that it's been updated, and sure enough, it has. At this point, you can see that within a matter of seconds, the service controller has detected that there are three matching endpoints based on the selector that we chose here. So now what we can do is hit that load balancer from the outside. But before we do that, let's come back to the EC2 console here and do a quick refresh, and you'll see that we have a load balancer that was created today, Valentine's Day, February 14. If we take a look at the details for that load balancer, you'll see that it has an external DNS name. Now, if you'd like to access your application on a different DNS name, you can go to your DNS provider and create a CNAME record, which is short for canonical name, and you can configure that canonical name record to point to this DNS name as the target. That's how you can rename your service rather than having to publish this bizarre auto-generated DNS name. So once that load balancer becomes active, which can take a little bit of time, we should be able to hit this service by plugging in this DNS address. However, keep in mind that we chose a somewhat random port for the outside of our load balancer, and it looks like it actually allocated port 31865 as the node port. However, the actual port that we'll hit the load balancer on is going to be 36010. So let's open up our browser, and we'll plug in the DNS name followed by port 36010. And sure enough, you can see that after just a couple of minutes here, once that load balancer is active, the load balancer is forwarding traffic over to our pods successfully. At this point, because the service controller is rotating across these different endpoints right here, if we were to kill off one of those endpoints, our service would continue to run. So let me do a kubectl delete pod, and we'll just delete nginx2, kind of randomly. Then, if we just hit up arrow a couple of times and do another describe on it, you'll see that the service controller immediately became aware that one of those pods has now disappeared.
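Condensed, the LoadBalancer flow from this demo looks roughly like this. The type=load-balancer label value is my hyphenated stand-in for the spoken label, and the port is the one we chose above.

kubectl run nginx1 --image=nginx --port=80 --labels=type=load-balancer   # repeat for nginx2 and nginx3
kubectl create service loadbalancer my-lb-service --tcp=36010:80
kubectl set selector service my-lb-service 'type=load-balancer'
kubectl describe service my-lb-service    # the endpoints list shrinks and grows as matching pods come and go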
And so the endpoints have now been updated to exclude the pod that has been deleted. And if we come back to our browser and just continually hit Ctrl+Shift+R to force a refresh, you can see that our service is still up and running. Now, the only way that we could really break things is to kill all of the pods that are backing that service. So if I kill off pods nginx1 and nginx3 and then come back to the load balancer, you'll see that it's going to fail, because even though the service controller is still technically up and running, and the load balancer itself is up and running as well, there are no actual pods running behind that load balancer to serve up any traffic. And you can see, if we describe the service, sure enough, under endpoints here, there are none. There are no endpoints that the service controller could send traffic to. We could very easily remedy that situation by simply creating a pod with the appropriate label that matches the selector on the service. And if we come back over to our browser, now you can see that we're immediately able to make requests to that service again. So that's how the LoadBalancer service type works in Kubernetes. I hope this has been informative for you, and I'd like to thank you for viewing. Hi guys, my name is Trevor Sullivan, and welcome to another CBT Nuggets skill on Kubernetes. In order to understand how Kubernetes works from a DNS perspective, we're going to be exploring the CoreDNS project. To give you a little backstory, in earlier versions of Kubernetes, the default DNS server that was deployed into the cluster was known as kube-dns, and kube-dns has since been replaced by another default DNS server component, which is the CoreDNS open source project. CoreDNS is an open source project of its own that is supported by some companies out there in the networking space, such as Infoblox; I know that they have employees contributing to this particular project. CoreDNS basically runs as a couple of pods on your Kubernetes cluster, and it allows your application pods to resolve DNS from the Kubernetes cluster network. Now, your pods are oftentimes going to need to resolve external names outside of the cluster. Think about hitting Google.com or GitHub.com or anything like that. Those external DNS names need to be resolvable by your application pods in most cases, especially if you're dependent on external SaaS APIs, for example. However, your pods are also going to need to be able to resolve DNS names for internal services as well. So let's take a look at a diagram here that'll help us understand what our needs are. In your typical Kubernetes cluster, you're going to have your master nodes and your worker nodes, but then within the cluster itself you're going to have a large CIDR block that defines your cluster network, and this is the CIDR block where your application pods, the pods you deploy out to your Kubernetes cluster, are going to obtain IP addresses from. So when the Kubernetes scheduler schedules a pod to run on one of your worker nodes, that pod gets an IP address from the CIDR block, and each pod within the application that you're deploying gets its own separate IP address here.
And in order to actually resolve DNS names, these pods that are running your application code are going to need some kind of DNS server to communicate with. Now, theoretically, you could configure them to communicate with a DNS server that's outside of the cluster. You could use a public resolver out on the internet, like Google's 8.8.8.8 or 8.8.4.4, or even Cloudflare's quad ones, the 1.1.1.1 DNS server. However, if your pods need to resolve internal services inside the cluster as well, then you're going to need a DNS server that's internal to the cluster. And then, if the pods need to resolve anything that's external to the cluster, you can configure your CoreDNS pods to forward those requests for external DNS names out to some other DNS server. So imagine that you've got a few application pods here, and let's say that these application pods are just running some typical application, maybe a web server or something like that, or maybe it's an API tier. Those web server or API application pods are going to need to communicate with some kind of data store. That could be object storage, a relational database, a NoSQL database, a graph database, all sorts of different types of backing stores that you could potentially use with a web application. But for the sake of example, let's say that we've just got something simple like a MySQL database. So you're going to have this MySQL pod that's running out on your cluster here, and then typically, even if you just have a singular pod, you're also going to want to put that behind what's known as a service controller. You could create a service and give it a name like mysql-db, for example; we'll just use all lowercase for that. So you've got this pod called MySQL, and you're exposing it through a service here. In order for these application pods that are running your web application to discover this service called mysql-db, they're going to need to be able to make an internal DNS request to the cluster, and that's where CoreDNS comes into play. When you create this service controller that forwards traffic over to your standalone MySQL pod, the service controller actually gets an IP address of its own, and it registers itself with CoreDNS. So CoreDNS is going to allow the application pods that are connected to your Kubernetes cluster network to resolve this name mysql-db. Rather than having to figure out what the IP address of this MySQL pod is, and then maybe configure that as an environment variable on all of these pods here, you can simply create the pod, create the service, and then configure all these application pods to point to this more user friendly name, mysql-db, instead of having to statically configure an IP address on all of these application pods. What's really nice about that is that you get some consistency here. Whenever these application pods request mysql-db, they're just going to get the IP address that the service has, rather than the individual pod. And if for some reason some problem occurred with this MySQL pod over here, maybe that pod dies, then you replace it with an entirely new MySQL pod that's configured with the same labels, so that the network traffic is forwarded from the service to that new pod.
Well, rather than having to reconfigure all of these application pods with the new IP address of the replacement MySQL pod, your application pods can just continue to run with no configuration changes, because they're going to be looking for this service DNS name instead of that static IP address. So this is a really good way to set up service to service communications inside your Kubernetes cluster. Maybe you've got these application pods that are exposed behind a service of their own, right? So let's say that these three pods all belong to the same service, and it's called, you know, web-api, for example. This service controller here will now be responsible for forwarding traffic to all three of these application pods. Now let's say that you have some other application, some kind of app2 pod, because I'm not feeling terribly creative right now. Let's say you've got a pod named app2, and this is some other client application; maybe it's a CLI tool that you're using interactively with kubectl run or kubectl exec, perhaps. Let's say this pod needs to communicate with these application pods over here. Well, you don't want to have to take all three of these pods, discover what their IP addresses are, and then plug those IP addresses into this app2 pod. It's a lot easier if we could just tell this app2 pod that it wants to communicate with the service named web-api. So any time the application running inside of the app2 pod requests communication with web-api, the service load balancer here, internal to the cluster, is going to figure out which of these three pods within the application it wants to forward traffic to. What's really nice about this is that if one of these pods were to die off, there are still two healthy pods running here, and so the app2 pod will be able to continue to operate without any kind of service interruption, because rather than communicating directly with a specific pod that could potentially die, we're pointing it at the service with a DNS name, and that DNS name will forward traffic to the different pods behind that service. So having a functional DNS mechanism internal to your cluster is incredibly important, so that you can ensure continuity of network communications between different services in your cluster, whether it's a client application communicating with an API, or an application or an API that needs to communicate with some kind of database service over here that's also running on the cluster. And of course, I kind of forgot to connect this MySQL pod here to the network, but every pod gets an IP address on the network, so that's generally commonly understood. In any case, this is why CoreDNS is important. CoreDNS is deployed automatically on most clusters: if you're provisioning using kubeadm, or if you're provisioning using a cloud service like Amazon Elastic Kubernetes Service, they will automatically spin up CoreDNS for you. And there are lots of other managed Kubernetes services out there from cloud providers like Azure and Google Cloud with Google Kubernetes Engine; there's DigitalOcean, there's Linode, there's Vultr, and a whole bunch of other vendors out there that are starting to offer managed Kubernetes services. As part of the cluster initialization or setup process, they'll automatically create these CoreDNS pods for you.
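A quick way to see that resolution in action, sketched under the assumption that a service named mysql-db exists in the current namespace; busybox here is just a throwaway image that happens to include nslookup:

kubectl run dns-test --rm -it --image=busybox --restart=Never -- nslookup mysql-db
# Within the same namespace the short name resolves; the fully qualified form is
# mysql-db.<namespace>.svc.cluster.local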
So typically, what you're going to see in a fresh Kubernetes cluster configuration is two different CoreDNS pods connected to your cluster network. These pods in particular are typically deployed into a namespace, so let's put namespace, and then kube-system. These CoreDNS pods, along with the replica set and the deployment controller, are all going to be deployed into this namespace called kube-system, and basically that's going to run DNS for the internal network on your Kubernetes cluster here, so that all of your pods are able to make these DNS resolutions between different services, as well as making external DNS requests out to other DNS servers on the internet. So if your application pod needs to make a request out to the GitHub REST APIs, or maybe out to the GitLab, Azure, or AWS REST APIs, then the CoreDNS pods are going to let the application pods make those external DNS requests and get the IP address back, and then the application running in that pod will make the outbound TCP or UDP connection, whatever is appropriate, in order to connect to that remote service. Now, one other thing that you're going to find in this kube-system namespace, let me just move this text down a little bit to make it more readable here, is something known as a ConfigMap. The ConfigMap is basically responsible for specifying the configuration for your CoreDNS pods. So the CoreDNS deployment here, this is a deployment controller in Kubernetes, which is a primitive object in the Kubernetes API, and the deployment controller is responsible for establishing a certain number of replicas for whatever pod spec you give it. The deployment itself doesn't actually create the pods directly; what it does is dynamically create a replica set with kind of a randomized name that's based on the deployment name, and then the replica set is responsible for provisioning the CoreDNS pods. However, the configuration for these CoreDNS pods actually comes from a resource known as a ConfigMap in Kubernetes, and that's basically just a set of key value pairs. It's very similar to a Secret resource, where you can have a secret key and a secret value, and then you can simply refer to those values inside of your Kubernetes pods. So this ConfigMap that gets created for CoreDNS by default when you provision your Kubernetes cluster is actually going to get attached to each of your CoreDNS pods, forgive the kind of confusing lines here, but this ConfigMap gets mounted into these pods. Then, when CoreDNS itself, the actual binary, gets started inside of a container as part of that pod scheduling process, it's going to read its configuration out of this ConfigMap. So if you wanted to make any changes to your CoreDNS configuration, you would do that right here on this ConfigMap object. You don't have to worry about creating a file on a file system and then mounting that persistent volume into a pod. It's actually a lot more convenient, because you can just edit this ConfigMap resource on your Kubernetes cluster and then make any updates to the CoreDNS configuration that you want to.
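As a sketch, viewing and changing that configuration, and then rolling it out, looks like this on clusters where the ConfigMap and deployment are both named coredns, which is the usual default:

kubectl --namespace kube-system get configmap coredns -o yaml        # view the Corefile that configures CoreDNS
kubectl --namespace kube-system edit configmap coredns               # make changes in place
kubectl --namespace kube-system rollout restart deployment coredns   # new replica set, pods pick up the change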
And then from there, you can redeploy these CoreDNS pods by using a rollout command on the deployment object, the deployment resource. Once you do a rollout on the CoreDNS deployment, it'll go ahead and create a new replica set, and it will redeploy the CoreDNS pods so that they pick up any changes you made to this ConfigMap resource right here. So in the rest of the skill, what we're going to do is explore this architecture a little bit and understand what the cluster looks like when we provision it. We'll take a look at the resources that get created in the kube-system namespace here, like the deployment, the replica set, the CoreDNS pods, and the ConfigMap, and we'll also take a look at some of the different CoreDNS plugins at a high level, to help you understand exactly how CoreDNS works. I hope this has been informative for you, and I'd like to thank you for viewing. Hi guys, and welcome back. As we discussed in the previous video, the CoreDNS component is deployed onto your cluster as a ConfigMap, a deployment, a replica set, and a couple of pods. Those pods get connected to your cluster network so that your different application pods and services can communicate with the CoreDNS pods. The configuration of the CoreDNS component itself is, of course, done with the ConfigMap resource here. So what we're going to do now is explore an existing cluster that I've already created using eksctl, which is an open source utility that allows you to provision a cluster on Amazon Elastic Kubernetes Service. However, you can use other Kubernetes distributions or other cloud vendors as well; there are a lot of different cloud vendors out there that support Kubernetes as a managed service. If you go out and search for the cluster autoscaler component, and I've got a separate training on this, the cluster autoscaler is basically a mechanism that allows you to horizontally scale your cluster by adding more nodes to it. What's nice about this project is that they actually have documentation describing some of the different cloud vendors out there that provide managed Kubernetes services. If you go under the autoscaler and then take a look at the deployment section down here, they have a list of supported cloud providers. These are a bunch of different cloud providers that expose a managed Kubernetes service. A lot of the popular ones, of course, include Google Kubernetes Engine and Amazon Web Services with EKS, and we've also got Microsoft Azure with Azure Kubernetes Service, or AKS for short. Then there are a lot of smaller cloud vendors out there that are more specialized but provide a better user experience. Some of those are DigitalOcean, which has a really nice user interface, pricing that's much more competitive than some of the major cloud vendors, and APIs that are just a lot easier to work with. The same goes for Linode as well as Vultr; those are more niche focused cloud vendors, and they provide an overall better user experience. But I'm just going to be using Amazon EKS here, and the utility that I'm using to provision my cluster is, of course, eksctl. This is actually developed by a company called Weaveworks, which has some plugins for Kubernetes, like Weave Net, for example, or Weave Scope.
They provide this open source utility, eksctl, specifically for the purpose of simplifying the deployment of Kubernetes clusters on Amazon EKS. So you would not want to use this utility if you're planning to use Google Kubernetes Engine or Azure Kubernetes Service or Linode or DigitalOcean and so on, but it makes it really easy to create an EKS cluster with a couple of simple commands. I've already got a cluster up and running here, and I've got the kubectl command line utility attached to it. So if I do kubectl version, you can see that I am running Amazon Elastic Kubernetes Service version 1.21.5. EKS is a couple of minor versions behind on Kubernetes; the current version of Kubernetes is actually 1.23, and that's the client version that I am using right here, but I'm talking to a slightly older cluster version, which is the latest version that's available for EKS specifically. All right, so once you've got your cluster up and running, and you've got your kubectl command line utility attached to that cluster, let's start to explore what CoreDNS looks like. As we discussed before, the CoreDNS components are going to be provisioned into the kube-system namespace here. So when we are looking for these different objects, like the config map for CoreDNS, the deployment controller for CoreDNS, the replica set, and the actual pods that are running the CoreDNS binary, we want to be exploring the kube-system namespace, and we can specify the --namespace parameter on kubectl to say that we want to explore resources in that namespace. So let's go ahead and run kubectl --namespace kube-system. Actually, let's do this first: let's do kubectl get namespace. You can also use ns for short if you don't want to type out namespace, but I like to type out the full resource name just as a mental exercise to make sure I know what I'm doing. Now what we can see is that we've got kube-node-lease, kube-public, and kube-system, and of course the default namespace, where you can provision whatever applications you'd like. kube-system is, I don't want to say a special namespace necessarily, but it's a namespace that typically contains the system components for Kubernetes. So if we say kubectl get all, and add on the --namespace kube-system parameter, this is going to show us all the different resources, or at least most of the core types of resources, in that namespace. As you can see right here, we've got the deployment controller called coredns, and the deployment controller has been configured to request two replicas of CoreDNS. Right here you can see that two are requested and two are currently available. Also remember that the deployment controller itself does not actually create the pods directly; the coredns deployment creates a replica set, and the replica set is what actually creates the pods. So you also want to be looking at this replica set resource right down here. The name of the replica set should match up with the deployment name, and after the deployment name it'll have a kind of randomized string, just a partial hash value, which serves as a unique identifier for that particular replica set.
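For reference, the two commands we just ran to get to this point were:

# list the namespaces on the cluster
kubectl get namespace

# list the core resource types (deployment, replica set, pods, services) in kube-system
kubectl get all --namespace kube-system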
If we were to make any changes to the coredns deployment and then roll those changes out, the deployment controller would create an entirely new replica set resource, spin down the two pods that were requested by the first replica set, create a new replica set that requests two pods, and slowly switch from the old replica set over to the new one. Then, if we take a look right up here, these are the actual pods that belong to CoreDNS, and you can see, sure enough, there are two pods, because that is what the replica set requested based on the deployment controller configuration. We have two in the ready state, which indicates that CoreDNS is up and running and everything looks good here. We can also see that these pods are in the running state, and one out of one containers is running. Keep in mind that Kubernetes pods support more than one container; containers and pods do not have a one-to-one relationship, and a pod can have more than one container. You'll often see a pod with just a single container in certain use cases, but when you're building out enterprise applications with complex application architectures, you'll often see situations where an application designer defines a pod as having more than one container inside of that pod definition. As far as CoreDNS is concerned, though, each CoreDNS pod has only one container, and that container runs just the CoreDNS binary. So if we wanted to dive a bit deeper into this, we could grab the name of one of these pods and then run the describe command. When you run kubectl describe, you're basically telling the Kubernetes API server that you want detailed information about a particular resource. And you'll see I get this message saying that this CoreDNS pod is not found, and that is because I am not currently targeting the kube-system namespace, so I need to specify that parameter again. This is going to show us all the details for CoreDNS, or at least for one of the two pods that we are running. If we scroll to the top here, you can see we've got the pod name and the namespace the pod belongs to, and there are a lot of other properties here that tell us things like which node in our cluster this particular pod has been scheduled onto; the node just has a kind of randomized name based on the private IP address that it has. You can also see the labels that the pod has, when the pod was started, the annotations for the pod, the overall status of the pod, which we could also see when we ran the get command, and the IP address on the cluster network that the pod was assigned. Then, right down here, you can see that this pod is actually being controlled by a replica set, and it tells me the name of the replica set that this particular pod belongs to. The reason that this is important is that pods can be spun up on a Kubernetes cluster as standalone resources; you don't have to have a replica set in order to create a pod.
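A quick sketch of that describe step: the k8s-app=kube-dns label is the selector we'll see later on the kube-dns service, so it's a convenient way to find the CoreDNS pods; substitute the real pod name from the first command's output.

# find the CoreDNS pods by label
kubectl --namespace kube-system get pods -l k8s-app=kube-dns

# describe one of them in detail
kubectl --namespace kube-system describe pod <coredns-pod-name>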
When you're creating resilient applications that automatically restart in case of failures, though, you are going to have at least a replica set, and possibly a deployment controller on top of that replica set, controlling it. So when you see this controlled-by property on a pod, it's telling you whether, and specifically by which resource, the pod is being managed. Now, right down here under the containers section, this is what shows us the actual configuration of the containers in the CoreDNS pod, and again, we're just looking at one of the two pods that were spun up. This pod has a single container, and that container has an internal name of coredns. We can also see the ID of that container on the Docker daemon running on the node itself. Then, right down here, we can see the image that's being used to deploy CoreDNS, and what you're going to notice, at least on Amazon Elastic Kubernetes Service, is that the image being used for CoreDNS is actually a special EKS-managed build of CoreDNS. This is not just the generic CoreDNS image that you might find on Docker Hub; AWS has implemented its own custom build of CoreDNS for its managed EKS service. That's something you'll want to pay attention to if you're using other cloud vendors as well. It's entirely possible that other cloud vendors use the generic CoreDNS image, but AWS likes to control things itself and inject customizations into a lot of different open source applications, so it is interesting to note that they're using their own custom image here. You also see that we have the ports here; these are the ports exposed from this CoreDNS container in the CoreDNS pod that we're currently looking at. CoreDNS exposes port 53 on UDP, the User Datagram Protocol, and port 53 on TCP, the Transmission Control Protocol, so we have a connectionless, stateless port as well as a stateful TCP port exposed. And then we also have port 9153 here. I'd have to look at the actual project docs to confirm, but I'm pretty sure port 9153 is used to expose a Prometheus metrics endpoint for CoreDNS. Now, the other thing you're going to see here are the arguments that are passed into CoreDNS when it starts. So when this container named coredns spins up, it's going to run CoreDNS, and any arguments that CoreDNS is launched with are specified right here. You can see that it's passing a -conf argument into the CoreDNS binary, along with a file system path, which is /etc/coredns/Corefile, and that is the path on the pod's file system where the configuration file for CoreDNS lives. You'll also see that this file is being mounted from a config map, as we talked about in this diagram right over here: there's a CoreDNS config map, that gets mounted into the pod, and then the coredns container inside of the pod mounts the config map as a file on its local file system. That is how CoreDNS, the binary running inside the container, is able to access the Corefile from the config map itself. So if we take a look at some of the other properties here, you can see that on EKS, at least, the default memory limit is 170 megabytes.
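If you just want to pull those specific fields out of the deployment rather than scanning the full describe output, jsonpath queries like these work; this is just a sketch against the same coredns deployment we've been looking at on EKS.

# which CoreDNS image is the deployment using?
kubectl --namespace kube-system get deployment coredns \
  -o jsonpath='{.spec.template.spec.containers[0].image}'

# which arguments and ports does the coredns container get?
kubectl --namespace kube-system get deployment coredns \
  -o jsonpath='{.spec.template.spec.containers[0].args}'
kubectl --namespace kube-system get deployment coredns \
  -o jsonpath='{.spec.template.spec.containers[0].ports}'

# resource requests/limits and the liveness probe settings
kubectl --namespace kube-system get deployment coredns \
  -o jsonpath='{.spec.template.spec.containers[0].resources}'
kubectl --namespace kube-system get deployment coredns \
  -o jsonpath='{.spec.template.spec.containers[0].livenessProbe}'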
CoreDNS is a very lightweight binary, so 170 megabytes of memory is enough to satisfy a very large number of requests. If you wanted to, you could set up some benchmarking utilities to test the performance of CoreDNS on your Kubernetes cluster, but we'll leave that exercise for another time. What you will see under the requests section, though, is that it has a minimum request of 100 millicores of CPU, so one tenth of a CPU, and under memory it's requesting at least 70 megabytes. If a node does not have that many millicores of CPU or that amount of memory available, then this pod would not even get scheduled onto that node in the first place. You can also see that CoreDNS has a liveness probe here, and this is just hitting an internal endpoint using an HTTP GET based probe. What it's doing is hitting a web server on port 8080 internally, at the /health endpoint, and that runs every 10 seconds. If that liveness probe fails five times, so roughly 50 seconds of consecutive failures, and the health endpoint indicates that CoreDNS is unhealthy, then the container would be killed and restarted within the pod to attempt to bring the application back online. Right down here under the mounts, this is where you're going to see that the /etc/coredns folder is being mounted in from what's known as config-volume, and that right there is how the config map gets mounted into the pod and ultimately into the container's file system. Then, down here under volumes, this is the pod-level configuration for volumes: the mounts we just looked at are the container-level mounts from the pod, but these volumes are the actual mounts into the pod. You can see that there is a volume called config-volume, and that is the volume referenced right here in the container mount. So the container mounts config-volume from the pod, but the pod mounts config-volume from a config map, and the config map's name is coredns. So if we use kubectl to find config map resources in the kube-system namespace, and if we specifically look for one with the name coredns, that should show us the Corefile, which is the configuration file for CoreDNS on our cluster. All right, down here you can see the events, and you can see that CoreDNS started up just fine, so it looks to be healthy. Let's go ahead and take a look at the config map, and we'll do that in the next video. I hope this has been informative for you, and I'd like to thank you for viewing.

Hi guys, and welcome back. Let's go ahead and explore the CoreDNS configuration by taking a look at the CoreDNS config map in our Kubernetes cluster and seeing how it gets mounted into these CoreDNS pods. We'll head over to our terminal here, and rather than looking for pods or deployments or replica sets, what we're going to look for is the config map resource. Now, you probably remember this command from the last video where we did a kubectl get all, and when we run this command, if you take a look at the output, you can see that we get back a bunch of the core types of resources in Kubernetes: common things like pods, service controllers, daemon sets, deployment controllers, and replica sets.
However, what you can see here is that we don't have any config maps, and that's because the kubectl get all command, even though the name says get all, doesn't actually get all types of resources. If we run kubectl api-resources, this will show us all of the supported resource types on the server, albeit not sorted alphabetically by name here. If you scroll up a little bit and look toward the top of the result, you can see that we do have a resource in the Kubernetes cluster called a config map, and this is core to pretty much any Kubernetes distribution. It's a core resource type that allows you to define key-value pairs in a config map, mount that into a pod, and then ultimately mount it into the containers within that pod as well. Now, when you're dealing with config map resources, you can also use cm as an abbreviation for the resource type, so you don't have to type out configmaps; you can just specify cm. You also see that it belongs to the core API group here, which is just v1. It doesn't actually say core/v1, but it is implied that this is the core v1 group. And over here under the namespaced column, you can see it says true for config maps, and what that means is that when you create a config map on your Kubernetes cluster, it must belong to a particular namespace. There are certain types of resources within Kubernetes that do not belong to a namespace; most of the core constructs you're familiar with, like pods, deployments, replica sets, and services, are tied to a specific namespace, and config maps are too. But some of the things that are not tied to a specific namespace are things like namespaces themselves, right? It really wouldn't make sense to create a namespace within a namespace, unless at some point in the future Kubernetes supports nested namespaces; I'm not sure that's even on their radar at the moment, but in theory you could imagine something like that. For the time being, though, namespaces are not a namespaced resource; they are root-level resources that apply to the entire cluster. The same is true for nodes: any node that has been joined as a worker into the cluster is a root-level resource that does not belong to any specific namespace within Kubernetes. The same is true for things like persistent volumes, and a variety of other resource types. But let's skip over those and just focus on config maps for now. So what we're going to do, instead of saying get all, is run kubectl --namespace kube-system get cm, or you can say configmap, and that's going to retrieve the config maps only from the namespace that we specified. As you can see, there are a whole bunch of them right here: we've got ones related to kube-proxy, the Kubernetes root CA public certificate, and the VPC resource controller, which is something specific to Amazon Web Services. You can also see the number of data items that exist within each of these config maps. The config map named extension-apiserver-authentication actually has six different key-value pairs in it, whereas some of the other ones, like coredns, only have one key-value pair.
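The commands from this part of the walkthrough, for reference; the --namespaced flag is just a convenient filter for seeing the cluster-scoped types we discussed.

# list resource types; the NAMESPACED column shows true/false for each
kubectl api-resources

# only the cluster-scoped (non-namespaced) types, like namespaces, nodes, and persistent volumes
kubectl api-resources --namespaced=false

# the config maps in the kube-system namespace
kubectl --namespace kube-system get configmaps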
So we could do a kubectl --namespace kube-system describe. When we call get, that just returns the high-level resources without any details about them, but if we do describe configmap, we can either run that by itself, which will describe all of the config maps in the kube-system namespace, or we can specify the name of a specific config map that we want. So let's do a describe on coredns. This is going to show us the entire configuration of the coredns config map, and as you can see right down here, it's got a single key called Corefile, and the Corefile key contains, as its value, the actual contents of our Corefile. This is the default configuration, at least for Amazon Elastic Kubernetes Service; it's entirely possible that other cloud vendors ship a slightly different default, depending on what customizations they apply. What you're going to see here is that there is a listener on port 53, which is your standard DNS port, and this is what defines the root-level server block here. Now, CoreDNS can listen on other ports as well; you could certainly change from the default and listen on an entirely different port, but in this case the default configuration just follows DNS standards and listens on UDP, and I believe TCP as well, on port 53. Now, the dot that you see right here might be a little confusing, because it's not obvious what it stands for, but you can think of it as a DNS zone. If you have a website like mycoolwebsite.dev or github.com or something like that, you can specify the name of the DNS zone that you want CoreDNS to respond to, and then anything inside of these curly braces is configuration for that specific zone on the listener port specified right here. So everything inside the curly braces is part of the configuration for this zone, and the period is indicative of any zone, not a specific domain; it will respond to any incoming request to the CoreDNS server. If I were to make a request for trevorsullivan.net or cbtnuggets.com or github.com, or pretty much any website out there, then the configuration that belongs to this catch-all zone is going to apply to that request. Now, there are a bunch of different options that you're going to see down here, and one of them is actually a plugin. CoreDNS is a plugin-based, or as they call it, plugin-chaining DNS server, and that's because CoreDNS is extensible: you can use lots of different plugins to manipulate the responses that are sent from CoreDNS back to the clients, the pods that are actually making queries against the CoreDNS service. One of those plugins, provided by default to let you resolve internal cluster service names, is called the kubernetes plugin. If we head over to the website for CoreDNS, you'll see that there are plugins that are in-tree, so they're part of the CoreDNS project itself, but you've also got these external plugins as well. If you would like to install or enable an external plugin, you're certainly more than welcome to do that; however, you'll notice that the external plugins are not actually supported by the CoreDNS team.
However, the plugins that are included in-tree are part of the actual CoreDNS project and are, in theory, supported by the CoreDNS team, at least to the best of their ability as far as open source projects go. Now, what you're going to see under this list of in-tree plugins is that they're alphabetically sorted, so you can look through the list and search for certain key terms. If you're using Microsoft Azure DNS, you could look for the azure plugin, for example. If you want to configure which local addresses CoreDNS binds to, you could use the bind plugin. If you want to enable DNS caching, there's a cache plugin right here. There's also support for Google Cloud DNS, and there's support for things like DNSSEC: if you want to do zone signing, you can enable the dnssec plugin in CoreDNS to support that capability and improve the security of your DNS server. Something else that I find really interesting here is this erratic plugin. What's cool about the erratic plugin is that it's intended for testing how clients handle unexpected network behavior when they're making queries. So if you want to simulate dropped packets, or delays, basically latency between the request and the response, you can introduce arbitrary failures and determine how your client application responds when packets are dropped or when responses from the DNS server are delayed; you can configure that using the erratic plugin. Now, something else that's interesting about the configuration of CoreDNS is that not only do you have this mechanism of plugin chaining, where a request is fed through these different plugins, but if you want a single request to be handled by multiple plugins, you can use a feature known as fallthrough, as shown in the sketch below. That allows you to say, well, maybe I want the azure plugin to handle a request, but I don't want the result returned directly to the client; I want the request to fall through to another plugin. So you could have another plugin in your chain that, say, forwards the DNS request to a different DNS server, and you can chain these plugins together and use the fallthrough feature so that requests are processed by multiple plugins rather than being answered by the first plugin that returns a valid response. So CoreDNS is extremely flexible; these plugins provide a lot of different functionality in terms of how you forward requests to other DNS servers and how you respond to them. And of course, because it's plugin based, you can write your own plugins as well, and they've got a nice blog post that describes how you can create your own plugin and chain your DNS requests through it. So what we're seeing in the default configuration is that we're using this kubernetes plugin; let's find that over here in the list of plugins. So here we've got k8s_external, which would allow you to resolve load balancers and external IP addresses from outside of a cluster, but then there's the kubernetes plugin here, and this is the one that allows you to read zone data directly from a Kubernetes cluster.
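Here is a minimal, purely illustrative Corefile stanza showing that kind of chaining, written to a local file so you can experiment with it outside the cluster. This is not the exact EKS default; it just demonstrates the kubernetes plugin falling through so that queries it doesn't answer reach the forward plugin, which is essentially the pattern the default configuration uses.

# write an example Corefile to a local file for experimentation
cat <<'EOF' > Corefile.example
.:53 {
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    forward . /etc/resolv.conf
    cache 30
}
EOF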
Essentially, what's happening is that when we make a request from a pod over to the CoreDNS server running as part of our cluster, Kubernetes itself is going to answer certain types of queries, the ones going to cluster.local. So if the request goes to something ending in cluster.local, it gets processed by the kubernetes plugin here. However, if it's not handled by the kubernetes plugin, you're going to see that we have a default forwarder: by default, any other requests are forwarded to a DNS server found in the pod's local /etc/resolv.conf file. That helps ensure that requests for internal services go to Kubernetes directly, while anything that does not match this plugin gets forwarded out of the cluster to an upstream DNS server, so that requests for external resources like github.com, for example, are processed just as you would normally expect, as if you were hitting them from a local web browser. There are a bunch of other options in here as well: caching is enabled, configuration reloading is enabled, and load balancing is enabled, so there's a whole bunch of features. You're also going to see this Prometheus endpoint here, so if you want to gather metrics about your CoreDNS deployment on Kubernetes, this Prometheus endpoint will help you gather that information so you can monitor the health and performance of your CoreDNS deployment. So this is the config map for CoreDNS; it ultimately gets mounted into the CoreDNS pods, and then mounted into the container's file system from the pod volume mount. That's how you can configure CoreDNS, and in the next video, we'll actually test out connectivity to CoreDNS and make sure that we are able to resolve against it. I hope this has been informative for you, and I'd like to thank you for viewing.

Hi guys, and welcome back. Now that we've explored how the CoreDNS component is set up on our Kubernetes cluster up in Amazon Elastic Kubernetes Service, and now that we understand how the configuration for CoreDNS, also known as the Corefile, gets mounted into the CoreDNS pods, let's actually test out DNS resolution against CoreDNS and play around with it. I'm going to go ahead and create an interactive container on my Kubernetes cluster, so we'll run kubectl with the run command. If you take a look at the help for the run command, it allows you to spin up a pod on your Kubernetes cluster, and you can make it interactive, so you can see the output from the container on your local system even though the pod is running up in the cluster, and you can also type commands and send them into the remote pod. So what I'm going to do is use the standard-in and TTY parameters; that's what enables it to be interactive. Then we need to specify a container image that we want to use to spin up the interactive pod, as well as a name for the pod, and once we do that, we should be able to interact with it. Of course, we also need to specify a command that we want to run in the pod; putting those pieces together looks something like the sketch below.
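A minimal sketch of the run command we're about to build up; the pod name interactive-pod and the ubuntu:20.04 image are just the choices made in this demo.

# start an interactive Ubuntu pod (default namespace), running a bash shell
kubectl run interactive-pod --image=ubuntu:20.04 --stdin --tty -- bash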
And we could just run a bash shell, for example, in the pod, and that should allow us to type interactive commands and basically run whatever we need to inside of there. I'm going to be using an Ubuntu base image for this purpose. If you head out to hub.docker.com and take a look at the Ubuntu image, this is just a really nice general purpose image that's good for troubleshooting, because it has access to the apt package repositories for Ubuntu; you can install all sorts of utilities on it, and you've got a lot of choices about which version of Ubuntu you want to run. I'm just going to be using the latest LTS version of Ubuntu. I'm not sure if 22.04 is released yet, I don't think it is, so I'm going to use 20.04, and that'll be the base image that we use to launch an interactive container. Let's run kubectl, and we'll say run, and then we'll specify our image as ubuntu:20.04, and then we'll add the standard-in and TTY flags, and a name for the pod; I'm going to call this interactive-pod, but you can name it whatever you want to. Then I'm going to put a double dash, and after the double dash, we'll specify the command that we want to run; I'll just say bash. That should create a pod on the remote system. It can take a second, because it has to download the Ubuntu container image and spin up a new pod from that image, but now you can see that we've got this interactive pod here. You can also fire up another terminal, so while that's running, I could SSH into the same server and run a kubectl get all against the default namespace, and you should see that we have a pod right here named interactive-pod. It's currently in the running state, so everything looks pretty healthy there. You could do a kubectl describe on that, and it'll show you the configuration of the pod that we just spun up interactively over in this window right here, so we can verify that it is indeed running on our remote cluster. All right, so inside of this pod that we just spun up from the Ubuntu base image, I'm going to run an apt update, and then I'm going to do an apt install of the dnsutils package: apt install dnsutils --yes, and that should install the dig utility. We can use dig to issue DNS queries against our server. The install also prompts me to pick a geographic area and time zone for the tzdata package, so I'll just plug in 12 for US and then 1 for Alaska, because the time zone doesn't really matter here. Now that I've got dnsutils installed, I should be able to run the dig command and use it to query DNS servers. Let's also see if we have nslookup, and sure enough, we have nslookup as well. If I run nslookup against github.com, you can see that the server responding here is 10.100.0.10. So if I run a kubectl get all on the kube-system namespace and take a look at the results, we've got our CoreDNS pods, of course, which we've already looked at in that namespace, but check it out: we also have this service controller here called kube-dns. And this kube-dns service, which is essentially just an internal load balancer resource, has an IP address on the cluster's internal network, and that IP address is 10.100.0.10, and it is forwarding traffic on TCP and UDP port 53 right here.
And so this IP address right here is exactly the same IP address that you see right over here when we ran nslookup. The server that responded, which is really just a load balancer, a service resource in Kubernetes, has the same IP address as this service right over here called kube-dns. This is something interesting to note: the service controller, the internal load balancer for DNS, still has the legacy name kube-dns. Even though CoreDNS has replaced the now-deprecated kube-dns, and it's the CoreDNS pods that respond to the actual requests, the service itself still carries that old kube-dns name. I'm not sure if they're ever going to update that to match the CoreDNS name, but if you are looking for the DNS load balancer, it is going to be named kube-dns internally to your Kubernetes cluster, so that's just something you'll want to be aware of as you're administering a cluster. So we can see that these DNS requests are being correctly forwarded from CoreDNS out to the external DNS servers that are configured on our cluster nodes. Something else that we can do is attempt to resolve an internal service as well. If I take a look at my default namespace, so let's do a get all on the default namespace, you can see that I've got a couple of service controllers, or load balancers, here as well. One of them just has an internal cluster IP address; the other one is being exposed on a node port, but it also obviously has an internal cluster IP for internal use, if there are pods that want to hit it. Let's say that we wanted to discover the service called my-cbt-nuggets-service. What we can do is run nslookup against that name, and we'll be able to resolve it using just that name. So let's do an nslookup, and I'm just going to paste in my-cbt-nuggets-service. I just realized that I was actually typing this inside the interactive nslookup prompt, so the reason I was seeing a failure is that it was trying to look up the literal name "nslookup", which obviously is not going to resolve on my cluster. So all I have to do here is type in my-cbt-nuggets-service, and as you can see, it automatically adds in the cluster suffix. It plugs in the namespace that the service belongs to: this service controller belongs to the default namespace, as we can see in the output of kubectl get all on the default namespace right over here, and after that it adds a suffix of .svc.cluster.local. So any time you are resolving a service or a pod inside of your cluster, the suffix is going to be the namespace name after the service name, then .svc, which literally refers to the service resource type, and then the cluster's DNS name, which in this case is just cluster.local. And so now you can see that we get back the address 10.100.191.3, and if you take a look at the IP address of this service controller right over here, sure enough, that does match up.
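Inside the client pod, resolving the internal service looks like this; my-cbt-nuggets-service is just the demo service, so substitute a service from your own default namespace.

# short name: the pod's DNS search suffixes fill in the rest
nslookup my-cbt-nuggets-service

# fully qualified form: <service>.<namespace>.svc.cluster.local
nslookup my-cbt-nuggets-service.default.svc.cluster.local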
Now, at this point, to prove that we truly are resolving against CoreDNS here, we could actually tear down CoreDNS entirely on our cluster, effectively breaking it, and then see what happens from a client container that's attempting to resolve these different DNS names. One of the things that we can do to intentionally break CoreDNS is to go back into the kube-system namespace, do a get all, and then look for this deployment controller called coredns. What we can do is scale this deployment controller from two replicas all the way down to zero replicas. That will terminate the pods that belong to the replica set; it'll tear down these two pods right here, and then we won't have CoreDNS running internally at all. Then, when a query is made against the kube-dns service here at 10.100.0.10, there's nowhere for that service controller to forward traffic to, because those pods will have been torn down. So if we do kubectl scale, specify deploy/coredns, which is our deployment controller right here, set the replicas parameter to zero, and run that in the namespace kube-system, now we've scaled it. We'll do a get all again, and now you can see that the deployment has been scaled to zero, our replica set in turn has also been scaled to zero, and both of the pods that were previously running CoreDNS are now in the terminating state. If I run this command again, they should have disappeared by now, and now you can see that the pods in the kube-system namespace that were running CoreDNS are both gone. The service still remains, but the service does not have any endpoints to send traffic to, and we can confirm that by doing a describe on the service: kubectl --namespace kube-system describe on this service. If we take a look at the endpoints property right here, the endpoints are now set to none. I don't think we ran this command earlier, but if we had, you would have seen both pods configured as endpoints, based off of the target labels under the service's selector right here. The service is basically looking for any pods that have a label of k8s-app=kube-dns, and it will send traffic to those as endpoints down here. However, we've destroyed the CoreDNS pods, so now there are no pods that match this selector. So if I come back to my client pod here, running the Ubuntu image, and I just hit up arrow and try to run this DNS query again, well, guess what's going to happen: it's going to try to send the query over to the service controller's IP address, but the service controller has nowhere to send that traffic, and the query will simply time out because no servers could be reached. What we can do to resolve that is simply scale our deployment controller back up to two or three or four replicas; let's just go back to two. That reconfigures the replica set and spins the pods back up, and you can see that in a matter of seconds these CoreDNS pods get back into the running state, so now our service controller should have somewhere to send traffic to.
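For reference, the break-and-restore sequence from this demo in command form:

# scale CoreDNS down to zero to intentionally break cluster DNS
kubectl --namespace kube-system scale deploy/coredns --replicas=0

# confirm the kube-dns service now has no endpoints to send traffic to
kubectl --namespace kube-system describe service kube-dns

# bring CoreDNS back and let the endpoints repopulate
kubectl --namespace kube-system scale deploy/coredns --replicas=2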
If I do a kubectl describe on the service controller again, you can see that now the endpoints property here for both the UDP as well as the TCP target ports have now been configured to forward traffic to the pods that match this selector right up here. So now if I come back to this pod on the client side and try to run another query internally against the cluster, you can see that that now resolves again. I hope this has been informative for you and I'd like to thank you for viewing.