Part of AK's learning series. Do I have it on? Okay, that is not a good start, the clicker is not working. Let us go. So that is about me: my name is Nilesh, and that is my website, GitHub account, Twitter, and LinkedIn, in case you want to connect on any of these platforms.

What did we do so far? We had three parts of this series: part one was getting started with Docker, part two was using Docker Compose to stitch together multiple containers, and the last part was doing container orchestration using Kubernetes on a single-node minikube cluster. In this session, we are going to deploy the same application into a multi-node managed Kubernetes cluster on Azure, the Azure Kubernetes Service, AKS. The next two parts would be how we debug and monitor the applications that we deploy in the cluster, and the last one would be doing continuous integration and continuous deployment, which is more like a bonus.

If you were not here for the previous parts, this is the application that we have built from scratch. We have a simple tech talks kind of application which allows you to create talks. The front end is ASP.NET Core MVC, there is a backend API with .NET Core, and there is SQL Server 2017 on Linux, all of these running inside containers. That is what we did in part one using Docker. I have put links here to the previous recordings in case you want to have a look. Part two, as I said, was Docker Compose, where we composed all these containers into a single Docker Compose file, and in part three, we covered the basics of the different Kubernetes objects like namespaces, services, deployments, and stateful sets.

So let us get started with today's session, which is about the Azure Kubernetes Service. What is Kubernetes? Kubernetes is a cluster manager; it is for deploying your applications into a cluster. It is not a hardware-managing application; it manages your containerized workloads on a multi-node cluster. This is a very high-level architecture of Kubernetes: it has a master and a set of worker nodes. What we get with AKS is access only to the worker nodes; we do not get access to the master. The master is managed purely by Microsoft, and we work only with the worker nodes.

When you start working with Kubernetes, or when you want to deploy your application to an AKS cluster, this is the usual development workflow. We use Docker Desktop, either Docker for Mac or Docker for Windows. We build the images for the application, we package the application inside containers, and then we test locally using a local dev Kubernetes cluster. Last time I showed you minikube, which was the old way of testing containers on a single-node cluster. With the recent versions of Docker Desktop, you have support for Kubernetes built in, so with just a change in settings, you can enable a single-node Kubernetes cluster. Let me show that to you; while Docker starts, let us continue. Once you are done with testing locally, you push the images to a container registry. It could be Docker Hub, a public container registry, or you can have your own private container registry like Azure Container Registry; if you are using any other cloud, they have their own private container registries which you can make use of. And then you deploy to the cluster, which is a Kubernetes cluster.
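That inner loop, build locally, test against the built-in single-node cluster, then push to a private registry, looks roughly like this. This is a minimal sketch: the image name techtalksweb and the registry name techtalksacr are hypothetical, not the actual names from the demo.

```powershell
# Build the image and test it against the single-node Kubernetes
# cluster that Docker Desktop provides (the context name may vary
# by Docker Desktop version).
docker build -t techtalksweb:v1 .
kubectl config use-context docker-desktop

# Once local testing passes, tag and push the image to a private
# Azure Container Registry (hypothetical registry name).
az acr login --name techtalksacr
docker tag techtalksweb:v1 techtalksacr.azurecr.io/techtalksweb:v1
docker push techtalksacr.azurecr.io/techtalksweb:v1
```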
We will see the second part of that workflow today; we have already done the earlier steps in the last three sessions. So let us have a look at how we provision the Kubernetes cluster. There are multiple ways. One, obviously, is the portal: log into portal.azure.com, go to create a resource, search for AKS, and you will find Kubernetes Service. Say create and provide all the required options, like which subscription you want to use for creating this cluster. In my case, I have got multiple subscriptions, Visual Studio Enterprise and an Azure sponsorship. Then you have the usual stuff like the resource group, your credentials, what the name should be, in which region you want to provision this cluster, how many nodes you need, and which version of Kubernetes you want to install. There are about 10 different versions supported here, maybe more. Based on your needs, you select the version, and you also select the number of worker nodes you want to create. You can also enable authentication and then go into the networking. That is doing it through the graphical user interface, but my way of provisioning is a PowerShell script. I use this script to provision the cluster; it is basically the same information, put inside PowerShell, and then I use the Azure CLI. I set the subscription using az account, then I create the resource group, and this is all parameterized, so although I am passing some default values here, we can override them when we invoke the script.

I already ran this before the session, and this is the output of running that PowerShell script. When I run it, you see output like this, which starts with creating the resource group, then the AKS cluster, and you can see all the details: I created a three-node cluster, what the DNS prefix is, what the subscription is, and all the other details of that particular cluster. This takes about 10 to 15 minutes to provision. So with this one command, az aks create, I can create the whole cluster. This is the final command which is executed, with all the parameters. And what you get at the end of this is these 18 different resources created for you.

Has anyone worked on creating Linux machines? How many of you work with Linux? Not the Windows version, the one with the GUI; I mean logging in over SSH with password-based login. So you know how it is: it is a complete black screen, you do not have any support, you are just on the command line, just commands. I have been doing that for the past two weeks trying to set up some other cluster, so you can imagine my state. Here, what you get with one command is all these resources provisioned for you, with all the communication established between those resources. If you go back, you see that we have this whole setup: you have the master, you have the agent nodes, the communication between them is already established, and all the resources that are required for them to communicate, like the necessary Kubernetes services, everything is provisioned and ready for us to use. One master and three worker nodes; what I provisioned is the three worker nodes, and the master in this case is managed completely by Azure. So once we have all these resources, how do we deploy our application?
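For reference, the core of such a provisioning script looks roughly like this. This is a minimal sketch, not the actual script from the demo; the parameter defaults (resource group, cluster name, region) are hypothetical.

```powershell
# Hypothetical defaults; overridable when invoking the script.
param(
    [string]$Subscription  = "Visual Studio Enterprise",
    [string]$ResourceGroup = "techtalks-rg",
    [string]$ClusterName   = "techtalks-aks",
    [string]$Location      = "southeastasia",
    [int]$NodeCount        = 3
)

# Select the subscription and create the resource group.
az account set --subscription $Subscription
az group create --name $ResourceGroup --location $Location

# One command provisions the whole cluster: the master (managed by
# Azure) plus the requested number of worker nodes.
az aks create `
    --resource-group $ResourceGroup `
    --name $ClusterName `
    --node-count $NodeCount `
    --generate-ssh-keys
```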
We saw last time that I was using a script, or the kubectl command-line interface. I made a couple of changes here, minor ones, because in the minikube version I was using a single-node cluster, whereas in AKS we are using a full-fledged multi-node cluster. In the single-node version, I was using the service type NodePort, and if any one of you was there in that session, I did tell you that we would be using LoadBalancer and we would expose these services publicly, so that we would be able to access these resources from outside the cluster. So I changed a couple of places where the service type goes from NodePort to LoadBalancer. One is obviously the web service that we are exposing for the user interface.

Under AKS, I have created a similar structure. If you look at the service here, this is the API. I do not want to expose the API publicly, so I am keeping it as NodePort. That means I will not be able to access it from outside the cluster; it is internally accessible to the services that are deployed within the Kubernetes cluster itself, and nobody can access it from outside. Then for the database, I changed from NodePort to LoadBalancer, so I can use any of the database connectivity tools to connect to this particular database, which we will see later. And for the web front end, I did the same thing: in the specification of the web service, I changed NodePort to LoadBalancer (there is a sketch of this change at the end of this section).

Once I do this, when the application is deployed, I use the same deploy tech talks script which I used last time, and this is what happens: it creates the namespace, it creates the Azure disk (I will talk about that a little later), it deploys the database, it deploys the API, and it deploys the web front end. Once the deployment is complete, we can go and browse the Kubernetes dashboard to see the status of the cluster. Again, I need to select the right namespace here, the one I am creating as part of this deployment. I created a namespace called AKS Part 4, and it contains all of these: two deployments, three pods running, two replica sets, one stateful set, and three services, and you can see the public endpoint here. This is the public IP which is provisioned when I say the service type is LoadBalancer, and then I can go and browse this.

Let us create a new talk; say it is a free conference session with the level set to advanced. Create. Now, if I want to see this data in the database, I can connect to the endpoint which is exposed for the database. I can use SQL Operations Studio here to connect to it. Let us run a select query, and you can see the four records already stored in the database.

So that was about the load balancer. The last change I made was about data persistence. In the minikube version we did not handle this part, but we could have externalized the data. If you remember, in the second part, when we were doing Docker Compose, every time I restarted the application the data was lost, and I had to rerun the database creation script. If we do not put the data outside the container, then when the container is lost and recreated, the data is lost. To address that issue on a single-node cluster, we can map a local folder into the container, and the data will be stored locally.
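To make the service-type change concrete, here is roughly what the web service manifest looks like after the switch. A minimal sketch; the service name, namespace, and ports are assumptions, not the actual manifest from the demo.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: techtalksweb        # hypothetical service name
  namespace: aks-part4
spec:
  type: LoadBalancer        # was NodePort in the minikube version;
                            # AKS provisions a public IP for this
  selector:
    app: techtalksweb
  ports:
  - port: 80
    targetPort: 80
```

The API service keeps `type: NodePort` and so stays unreachable from outside the cluster, as described above.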
But in a multi-node cluster, if we store the data locally on a single machine, then the next time the container starts, let us say you upgrade your application and your container gets scheduled on a different node altogether, it will not find that data and you will not be able to reuse it. In that kind of scenario, we use a concept called persistent volumes in Kubernetes. This allows you to externalize your data and really store it outside of the Kubernetes cluster.

Let us first talk about what happens without a persistent volume. Say I have got a three-node cluster, I have deployed the application, and each of my containers gets scheduled on one node. Something goes wrong and your third node is lost. Kubernetes will automatically reschedule that SQL Server container on some other node, but your data will be lost. Now, if you have a persistent volume, your volume sits outside of the Kubernetes cluster, and if your node goes down, you still have the volume. You can reattach that volume, and your data is attached back to your container. Yes, when the container starts, you need to put that logic in the container configuration; I will show you how we do that in the configuration of the container.

First, see how we provision this external volume. We use something called a storage class, and a storage class is like the type of external storage we want to provision. There are more than 15 or 20 storage classes supported: we have AWS support, you have Azure, some external ones like Flocker, and you have the NFS ones. All these external storage types are supported by Kubernetes by default, so you can choose any one of these and say, I want my external storage to be one of these providers. Once you have the storage class defined, you need to create something called a persistent volume claim.

Let us just look at the storage class. We are providing some metadata, like the name as azure-disk and the provisioner as kubernetes.io/azure-disk. Then, when I create a persistent volume claim, we link it here: we say we are going to use the azure-disk storage class, and we give the persistent volume claim a name. And in the database section, I then need to say, I want to mount this as a volume. Sorry, it should be this; it is commented out here, but it should not be. What I am saying is, I want to mount a volume at this location inside the container, but the source is the Tech Talks data, which I provision using this persistent volume claim. This X should not be there. That way, even if something goes wrong in the cluster, we still have our data outside of the cluster; the sketch below shows these pieces put together.

So, we talked about storage classes, the different types of external storage that we can attach to our cluster, and the persistent volume claim, which is a way for the container to say: I know there is a particular type of storage available within the cluster, and I want a part of it. Let us say there is one gigabyte provisioned externally; I want to stake a claim for that one gigabyte. That is the persistent volume claim. I think when I deployed the application, the persistent volume claim did not get created; let us check. Oh, it did; somehow, luckily, it got created. So this is the persistent volume I requested, 1 GB, and we can see the storage class as azure-disk here. And if I go back to my resources, I can see the same thing created as an external disk, 1 GB.
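Putting those pieces together, the manifests look roughly like this. A minimal sketch under stated assumptions: the claim name techtalks-data is hypothetical, chosen to match the description above.

```yaml
# Storage class: Azure Disk as the external storage provider.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azure-disk
provisioner: kubernetes.io/azure-disk
---
# The claim: stake a claim for 1 GB of that storage type.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: techtalks-data     # hypothetical claim name
spec:
  storageClassName: azure-disk
  accessModes:
  - ReadWriteOnce          # attach to a single node at a time
  resources:
    requests:
      storage: 1Gi
```

And the part of the database stateful set that mounts the claimed volume; the mount path shown is SQL Server on Linux's default data directory, an assumption rather than the demo's actual value.

```yaml
spec:
  template:
    spec:
      containers:
      - name: sqlserver
        image: microsoft/mssql-server-linux:2017-latest
        volumeMounts:
        - name: techtalks-data
          mountPath: /var/opt/mssql   # SQL Server's data directory
      volumes:
      - name: techtalks-data
        persistentVolumeClaim:
          claimName: techtalks-data
```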
So what we did in this case: via the Kubernetes cluster, we provisioned an Azure disk, and that disk was available to my database container to store its data. While we get all these details from the UI, we can also get them from the command line. Here are some commands we can run to get the details of the Kubernetes cluster. Let us find out how many nodes we have. We have got three here, sorry, pods. I can also get the nodes; based on my context, it fetches all the available nodes. Then I can also go and find out about a specific service. This is what we would also get from the portal, but if you are someone who likes to work on the command line, you can get the same details there, and sometimes this information is more detailed and in one single place, compared to looking at multiple places within the Kubernetes dashboard. If you look here, you might not find all the information in the same place; you might have to move around a bit, whereas you can find the same thing in one shot using the command line. A few of those commands are sketched below.
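A minimal sketch of those queries; the namespace, service, cluster, and resource group names are hypothetical. The last two commands are, presumably, what the browse script that comes up in the questions afterwards wraps.

```powershell
# Query the cluster state from the command line (hypothetical
# namespace and service names).
kubectl get pods --namespace aks-part4
kubectl get nodes
kubectl describe service techtalksweb --namespace aks-part4

# Merge the cluster credentials into ~/.kube/config, then open
# the Kubernetes dashboard for the cluster.
az aks get-credentials --resource-group techtalks-rg --name techtalks-aks
az aks browse --resource-group techtalks-rg --name techtalks-aks
```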
So, we looked at provisioning the cluster, we looked at different ways in which we can query the state of the cluster, we looked at the Azure disk provisioning, and we also looked at how to query the data using SQL Operations Studio. That is all I had for the demo. These are some of the links; these are the slide decks from the previous sessions, and I will post today's slide deck in the same location as well. And here are the videos of our previous three parts. So thank you. Any questions?

[Audience] For the containers, you are mapping the persistent volumes, yes? But can we put some quota concept on them, like per node? Say for this node, I want to allocate 10 GB of that volume; for another one, it should be this much, one GB or two GB, like that.

The lifecycle of your containers and the disks, or the volumes, are slightly different; they follow their own lifecycle. You might be able to do the quota part using the persistent volume claim. Your disk might be 100 GB, but you can put a restriction saying this particular claim can only claim up to 10 GB, that claim for node one, another for node two. The way you connect the node and the disk is using the access mode. In this case, I was using ReadWriteOnce or something similar, which attaches that particular disk to only one node at a time, whereas there are other modes where you can share that particular disk with multiple nodes. So I am not exactly sure if you can put a quota, but you can share the disk with multiple containers and multiple nodes. Any other question?

[Audience] Yeah, in your console you are still accessing the Kubernetes dashboard with a loopback address, and then in that you added the AKS cluster IP or something?

That is the browse part. So if you go back, sorry, this is how I connect: I ran a PowerShell script again, which says browse. When you use this browse command, it takes the context of your cluster name and the resource group name, and that is how it connects to the Kubernetes master in Azure.

[Audience] How are you getting that in your management console?

How do I get that? When the cluster provisioning finishes, if you look here at the end, it would have added it to my kube config. This is done as part of provisioning the cluster: the last step automatically adds the credentials for accessing this particular cluster to your .kube/config file. Any other question? Thank you.