As you know, this is in Azure, and they keep changing their names. So this was the old name, but they renamed it recently while it was in preview. We'll also look at an open source alternative, in case you don't want to pay for several different Azure services and would rather use a fully open source solution; but do have a look at the Azure option as the other choice. So this is what the application has been so far. It's a simple to-do-list kind of application with a front end, a back-end API, and SQL Server 2017, and all of these run as containers.

In the first part, we mainly focused on how to containerize the application, how to ship the code as container images. The second part was about sticking all this together using Docker Compose. That's a somewhat outdated approach nowadays; there are much better tools, but it's a good way to start. Recently I came across a tool called Skaffold; I also wrote about it on my blog. It gives you almost continuous deployment while you're still coding: you don't have to leave your IDE, and while you're making code changes it can pick them up, build the images in the background, and deploy them if you have, say, a local Kubernetes cluster running. All this is near-instantaneous; within seconds it can make all this work. But that was almost four months back, when I didn't know about Skaffold and was using Docker Compose. In the third part, we looked at orchestration using Minikube, which is a single-node Kubernetes cluster. Before going full-fledged into a multi-node cluster in a cloud or on-premise environment with Kubernetes, this was a way for us to get everything tested in an almost cluster-like scenario.
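The Skaffold workflow mentioned above is driven by a small config file. This is only a sketch under assumptions: the image name, manifest path, and the exact apiVersion string are placeholders and may differ from the setup used in the talk.

```yaml
# skaffold.yaml (hypothetical sketch; names and version string are assumptions)
apiVersion: skaffold/v1beta2
kind: Config
build:
  artifacts:
    - image: myrepo/todo-api      # hypothetical image name
deploy:
  kubectl:
    manifests:
      - k8s/*.yaml                # hypothetical path to Kubernetes manifests
```

With a file like this in place, `skaffold dev` watches the source tree, rebuilds the image on every change, and redeploys it to the currently configured cluster.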
And the last part was about doing the actual Kubernetes provisioning on Azure: getting the Kubernetes cluster provisioned right from the beginning, then deploying the complete application with stateful storage using the concepts of persistent volumes and persistent volume claims. We also connected to the data source using SQL Operations Studio.

So let's start with today's session. We'll be talking about how we debug multi-container apps when you have multiple containers running, and the different approaches we can take. I'll start with how we debug local containers themselves: you're starting out, you have some application to containerize, you build the image, and you're about to package and deploy that application. How do you test it locally? Then we'll move on to the cluster solution, which is Kubernetes, and we'll also look at how we can monitor these in the cluster.

So let's move on to the demo part. Let's see if we have something running with Docker. About an hour ago there was one image which was started. Let's deploy something which we built as part of Docker Compose initially. So I can run docker-compose, give the file name, and say up. The whole stack, all those three containers, will be started, and we'll have the running application. Okay, so let's go into a new window and try the same commands; look at docker ps to see what's running. I've got four different containers running here. I can check one of them, so let's pick the latest one, which is SQL Server, and look at its logs with the help of docker logs, using the name of the container, which is sql1 in this case. This gives me the complete log of what happened inside that particular container when the SQL Server container started. The same way, I can find the logs of the other containers as well.
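The steps above can be sketched as the following commands; the compose file name and the container name `sql1` are assumptions based on the demo.

```shell
# Bring up the whole stack defined in the compose file (file name assumed)
docker-compose -f docker-compose.yml up -d

# List the running containers
docker ps

# Dump the logs of the SQL Server container (container name "sql1" assumed)
docker logs sql1

# Or follow the logs live as the container runs
docker logs -f sql1
```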
Let's say you don't find enough information in the logs and you want to dig into the container for more details. The other helpful command is docker inspect, which gives you a lot more information about what the container is doing. Here, if you look at this output — is it visible at the back? — you have the complete information about the running instance of this particular container: you can see the status, the image it was created from, the environment variables it's using. All this kind of information is quite helpful when something goes wrong and you want to find out what the container is doing at the moment. Some of this I already showed in the first part, when we did a deep dive into Docker, so I won't spend too much time on standalone Docker; if you want, you can go back to the video of the first session and find more details there.

Let's move on to the second part. Now let's say you've done this individual testing and the application is deployed in a cluster. How do you go about identifying what is happening in the cluster? I've already deployed this application on AKS, and one way I can find out about the state of the cluster is looking at the Kubernetes dashboard. So now I'm in full-fledged cluster mode, where the application is running in the cloud. This gives me the complete health of the cluster, so I can get the overview: what is the workload status, how many DaemonSets are running, how many deployments are successful, how many pods and replica sets are running, all at a very high level. I can drill down for a specific namespace, so let's say aks-part4, and refresh. Okay, there's some problem here. This is the usual demo effect; I've never seen this work reliably during demos. I always have to go and recreate the tunnel connection while the demo is happening.
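The docker inspect output shown here can also be narrowed down with Go templates, which is handy when you only care about one field. The container name `sql1` is an assumption.

```shell
# Full JSON description of the running container (name assumed)
docker inspect sql1

# Pull out just the running state
docker inspect --format '{{.State.Status}}' sql1

# Pull out just the environment variables, as JSON
docker inspect --format '{{json .Config.Env}}' sql1
```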
Okay, so we can filter the namespace for a particular deployment and then see what happened during that deployment. Here, if you look at the namespace aks-part4, I've got three pods running: two deployments, one for the web and one for the API, and for the database we had the StatefulSet. So you get the big picture looking at this pane, and you can go into the details of each one of these. The logs that we saw for the container can be accessed the same way from the logs option here, so you'll see similar logs shown in the web UI. That's one way. The other is to go into the details of individual pods; again, the logs option will take you to the same logs, but you can also do an exec, which is like logging inside the container. You are doing something like an SSH into that particular container, and then you can run commands like listing the directories. Let's say you created a file, or you are expecting a file to be present, and for some reason that file doesn't exist; this is one way you can go into the container and start investigating.

Question from the audience: can you see the logs over time, right from the beginning, or for a week or so? Say you're monitoring it and you see a spike, watching out for load, across the containers — can you see that? Not here, because this is just the default UI provided by Kubernetes. That's where the next monitoring solution comes in; that's the next part. So this is using the UI, and another part I find useful here is that you can have a look at events. In this case there are no events, but for some containers you will find that a set of events took place. Maybe we go to the StatefulSet. Again, in this case no events have taken place, but if there were some, you would find them here.
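The exec and events workflows described above map to kubectl commands like these; the pod, StatefulSet, and namespace names are assumptions.

```shell
# Open an interactive shell inside a pod, like an SSH session (names assumed)
kubectl exec -it todo-api-pod -n aks-part4 -- /bin/sh

# Or run a single command non-interactively, e.g. check a file exists
kubectl exec todo-api-pod -n aks-part4 -- ls /app

# Events for a resource also show up at the bottom of describe output
kubectl describe statefulset todo-db -n aks-part4
```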
Usually the API should have events, because I'm using a concept called init containers: it creates an init container which populates the initial data, and those kinds of events should be shown here, but they're not visible right now. If they had occurred, you would see them in the events section. So the same information that I have on the UI can be queried from the command line. I can go back to the command line and query the status of this cluster using kubectl commands. Here I'm getting all the services, and although there are multiple services running, I don't get them in the response, because this works on the basis of namespaces. If you don't provide a namespace or an additional filter parameter, you will get the default namespace, which has just the Kubernetes service running. If you provide the namespace as an additional parameter, then you can query the services running there. And like any other CLI, you have multiple options. In this case I'm saying: give me all the pods which are running, but also include them if they are uninitialized. You might have a state where a pod is being created and is still not fully ready, but you still want to see it. In this case I don't have anything uninitialized; everything is ready and running, but if you had any uninitialized pods, you would see them here.

So we looked at this. Next is monitoring the cluster using OMS. Looking at individual containers, you only get to know what is happening at one particular container or one service level. When you deploy this into a bigger cluster, when you have, let's say, multiple replicas of the same service running, you might want a complete picture of the cluster as well as how they are performing. You might want to look at the CPU usage, the memory, how the workload is running on each and every node.
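The namespace behaviour described above looks like this on the command line; the namespace name is an assumption, and note that the `--include-uninitialized` flag existed in kubectl versions of that era but has since been removed.

```shell
# Without -n you only query the default namespace
kubectl get services

# Query a specific namespace (name assumed)
kubectl get services -n aks-part4

# Include pods that are not yet initialized (older kubectl versions)
kubectl get pods -n aks-part4 --include-uninitialized

# Or list pods across every namespace at once
kubectl get pods --all-namespaces
```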
This kind of information you can get through a monitoring solution; by default it is not readily available, and you might have to dig here and there in the Kubernetes dashboard. But Azure provides a solution for us, which is called Azure Monitor, and this you need to enable when the cluster is provisioned. If you go to the Azure portal and open my particular AKS cluster, I have the option of enabling this monitoring solution. This is under preview: if you look at these two, metrics and insights, both monitoring solutions are currently in preview. When you click monitor containers, if it is not already enabled, you will have the option to enable it. I've already enabled it for this particular cluster. And this gives you an overview of the cluster: you get a dashboard, and you can drill down by time range, like the last 30 minutes or hours. You have different dimensions, different metrics, based on which you can get the state. Apart from the cluster level, you can also visualize the state of individual nodes. I particularly like the feature which allows you to drill down at the node level and then look at the containers; it goes to a very, very detailed level of information. You can do the same thing for controllers and containers as well. And if you are good at queries, if there is somebody who likes to do SQL kind of work, there is also the option of log monitoring. When you choose logs, you have the option to use the log query language provided by Microsoft to query the state. You can build your own queries here, and they can get as complex as you want.

Question from the audience: if there are multiple instances of SQL Server, will you be able to access all of them and actually get the logs? So, multiple instances of SQL in the sense of one SQL Server across multiple containers, or multiple SQL Servers in different containers? Yes, you'll be able to access them.
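Besides enabling it in the portal as shown, the monitoring add-on can be switched on from the Azure CLI; the cluster and resource group names here are assumptions.

```shell
# Enable the container monitoring add-on on an existing AKS cluster
# (cluster and resource group names assumed)
az aks enable-addons --addons monitoring \
  --name myAKSCluster \
  --resource-group myResourceGroup
```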
You will have to deploy them as multiple containers and expose them as multiple services. Nice. Okay, so this is quite handy. If you want to stick with Azure and get started with very minimal effort, this is one of the easiest solutions I've found so far. But you will incur charges: it depends on your retention period, how long you want to retain this log information, and how much data is getting stored. Let's say you have 100 containers; every container will be producing logs, every container will be producing telemetry data, and that needs to be stored somewhere, so you will have to pay for those charges. If you don't want that, you could use an open source solution, which is Prometheus. Has anybody heard about Prometheus?

Question from the audience: could we take these logs into Power BI? It should be possible, because this is not something specific to AKS; the solution is at the Azure level, so if you can query other Azure services this way, it should also be possible. I haven't tried it, though.

Okay, so has anybody worked with Prometheus or Grafana before? Have you heard about them? Okay. Prometheus is an open source solution which is part of the Cloud Native Computing Foundation, CNCF. If a project is adopted and supported by the CNCF, it means it works with these cloud-native technologies and has better support compared to other open source projects; such projects are bound to work very well with other cloud-native technologies, and that's one of the reasons it was adopted by the CNCF. As I said, it's open source, and it's very good at handling time series data: it has optimized storage and retrieval patterns for that kind of data. And one good feature Prometheus has is support for alerting. So it's not just monitoring, not just your usual view of what happened before.
It can also watch some of the metrics and give you near real-time alerts, which you can configure. Then there is Grafana. Let's have a look at Prometheus first. I've deployed Prometheus and Grafana, which are again available as Docker container images, and this is deployed into a monitoring namespace. It gives me two endpoints, one for Prometheus and one for Grafana. So this is the UI provided by Prometheus, if you want to look at some of the metrics Prometheus captures. Let's say we want to see CPU, and we want a graphical view. This is similar to using the query syntax of Azure Monitor: having a look at what's happening and then building a small dashboard or visualization around the metrics.

The better part is using Grafana on top of Prometheus. Instead of working directly with raw Prometheus queries and metrics, we can use Grafana. Grafana, as of yesterday, has support for around 47 data sources; they say wherever your data is, we can query it. It has 39 different types of panels, so you have, as you can see here, maps and these beautiful visualizations which come built in. And on the dashboarding side, there are community-contributed dashboards, more than 1200 as of now; some of the most common data sources are Elastic and InfluxDB. So let's look at the Grafana dashboard. Here are some metrics it has collected from the deployments that have happened. I can use the quick ranges, so I can say: give me all the deployments that happened in the last seven days for a particular namespace, and I can choose AKS. This is just the deployments, but there are other dashboards, like capacity planning. And you can see how rich the visualization is here; this is all built using the built-in visualization tools provided by Grafana, and that's one of the reasons it's quite popular. So how did I build this?
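The kind of CPU graph shown in the Prometheus UI is driven by PromQL queries over the standard cAdvisor metrics that Kubernetes exposes; these are illustrative examples, not the exact queries used in the demo.

```promql
# Per-container CPU usage rate over the last 5 minutes
rate(container_cpu_usage_seconds_total[5m])

# The same, summed per namespace for a cluster-wide view
sum(rate(container_cpu_usage_seconds_total[5m])) by (namespace)
```

Pasting either expression into the Prometheus expression browser and switching to the Graph tab gives the graphical view described above.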
I'll show you the code quickly. As I said, they provide Docker images, and the part I like is that it gives us infrastructure-as-code capabilities. Whatever we create is all documented in a very descriptive manner. I can define what my Grafana dashboard should look like and what visualizations I want to have; I have full control. So it's not like somebody built this, and when he's not available you don't know how he built that particular dashboard and have to wait for him to come back. You can come back here, see what components he's using, and customize it further if you want. This is what I like about Grafana as well as Prometheus. For Prometheus, I have all the YAML files here. It's just like what we did earlier for normal application development and deployment using YAML; the same concept applies here as well.

So we looked at the demo. Now, what are some of the mistakes or gotchas that I found while developing this application, or that I see in general? When it comes to Docker, one of the most common mistakes I personally make is going to the terminal and running a docker command, only to realize the Docker daemon itself is not running. That's one. The other is typos. If you go back and look at all this code, it is mainly key-value pairs, so you will usually find people making small typos here and there, and you might spend a lot of time investigating why, say, the SQL Server container is running but your API is not able to connect to it. Maybe the service has a selector which is not matching the label. That's one of the common mistakes I see. When it comes to the Kubernetes command line: the incorrect context. Usually you will have different contexts when you have Docker for Mac or Docker for Windows running on your laptop and you also connect to, let's say, a cloud or on-premise Kubernetes cluster such as AKS.
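The selector/label typo mentioned above looks like this in a manifest; all names here are hypothetical, but the matching rule is exact.

```yaml
# Sketch of the typo gotcha: the Service selector must match the pod labels
# character for character, or the Service silently routes to nothing.
apiVersion: v1
kind: Service
metadata:
  name: sqlserver
spec:
  selector:
    app: sqlserver          # must equal the pod label below exactly
  ports:
    - port: 1433
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sqlserver
spec:
  selector:
    matchLabels:
      app: sqlserver
  template:
    metadata:
      labels:
        app: sqlserver      # a typo like "sqlsever" here breaks connectivity
    spec:
      containers:
        - name: sqlserver
          image: mcr.microsoft.com/mssql/server:2017-latest
          ports:
            - containerPort: 1433
```

Both containers would show as running in the dashboard, which is exactly why this one takes so long to track down.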
The contexts are different, and you might deploy an application into one context while trying to access it from the other; it will not work, because they're not in sync. The other one is a missing or wrong namespace. If you're using namespaces to logically segregate your clusters, you might miss a namespace in one of your application deployment files, and the same thing happens: the service is deployed and running, the dashboard says it's running, but when the application tries accessing it, because the namespace doesn't match, they will not work together. On the AKS-specific side, API versions are sometimes difficult to manage or remember, because nowadays most software gets released quite often and the documentation sometimes lags behind, so you might find an API which is documented but doesn't work with the current version. The last one I found was role-based access control being enabled by default for recent versions of AKS. For almost six to eight months I used a script; if you look at my code base, I have a deployment PowerShell script which deploys everything for me. It takes all the parameters, like which subscription I want to use, my resource group name, the cluster name and so on, and in one shot I can provision all these resources. If you look at line 35 of this script: if I don't have this line, with the newer versions you will find that the cluster is created, but when you deploy the application it doesn't work, it doesn't get deployed, because role-based access control is enabled by default on the most recent AKS clusters.
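The context gotcha above can be checked and fixed with kubectl's config subcommands, and the RBAC gotcha has a common workaround; the context name is an assumption, and the cluster-admin binding shown is a quick fix that grants broad rights, so treat it as illustrative rather than a recommendation.

```shell
# See which cluster kubectl is currently pointed at
kubectl config get-contexts
kubectl config current-context

# Switch explicitly before deploying (context name assumed)
kubectl config use-context myAKSCluster

# On an RBAC-enabled AKS cluster, the dashboard needs a role binding
# before it can read cluster state (broad rights; illustrative only)
kubectl create clusterrolebinding kubernetes-dashboard \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:kubernetes-dashboard
```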
So that brings me to the end of this talk. Here are some references: the demo code is available on GitHub, and I've provided links for the Kubernetes Playground, the Azure monitoring solution I used, and container monitoring with Log Analytics. That last one is another option for when you're not working with AKS but have a different container solution, like Docker Swarm, and still want to enable monitoring. Then there are Prometheus and Grafana, and I've provided a couple of cheat sheets, so whatever commands I showed you, you can find them readily available there. All the slides for the previous four sessions as well as this one will be available on these two links; I usually post them to Speaker Deck and SlideShare. And here are the reviews from the previous sessions; I'll be sharing the link anyway, so you can find them there. The earlier videos, thanks to Alan and Engineers SG, are available on the Engineers SG website, and I've also provided the links on my blog. So thank you, thanks a lot for taking the time for this session. That's the GitHub URL if you want to look at the complete source code, and if you have any feedback, that would be helpful; there are only like four questions, so while I take questions, if you could fill it in, that would be great.

Question from the audience: how about managed Kubernetes services? The cloud providers all offer Kubernetes as a service now, right? So which one is the better option? Yes, nowadays everybody is providing Kubernetes; it's the most popular container orchestration platform at the moment, so you'll find it on-premise as well as in the cloud: AWS, Azure, HP's private cloud, IBM, wherever you go you will have support for Kubernetes. AKS is based on the open source Kubernetes version, so if you don't want any of this, you can as well go directly to the open
source project and start from there. But the advantage of using a managed service is that you get all the support around it: you get the baseline security, and you get this monitoring solution where with one click I can enable monitoring for the Kubernetes cluster. If I don't use these managed services, then I have to spend that much time and effort to build those additional capabilities myself, and that's additional work for me. So it really depends on where you want to go. I know some people, like one of my ex-colleagues, who would say: I want to start right from scratch, I don't believe in Google, I don't believe in AWS, I want to know what happens right at the core level; and he built a Kubernetes cluster right from the open source components.

Question from the audience: is it available on AWS? Yes, AWS has managed Kubernetes too, and locally you can run it using Docker; you need to enable the hypervisor and things like that, though with Windows Server 2016 it comes by default, so you have better integration with Windows Server 2016.

Comment from the audience: I have a quick one on your monitoring. Given the affinity to Power BI, I think it would look like a great solution in Power BI. I'm part of the Microsoft community as well, on the SQL and Power BI side, and one thing we want to do for walkthrough sessions like this is capture them in Microsoft Teams. Have you heard of Microsoft Teams? You can have a guest account where we are all in one group, and there can be a tab for, say, containers, so we can keep the conversations contained, and any uploads can be done right in Teams.