Yeah, it's there in these slides. A bit about myself for those who don't know me or were not here last time: my name is Nilesh Kule, and these are my social media coordinates. Again, no need to take pictures; I'll be sharing the slides after the session.

What are we going to do as part of this AKS learning series? We already covered part one, which was getting started with Docker, where we began with a very simple three-tier application. This is the second part, where we use Docker Compose to stitch together the multiple containers we built last time. Part three will be container orchestration using Kubernetes with Minikube on a local single-node cluster. Part four is AKS, spinning up a Kubernetes cluster in Azure. Then we will do debugging and monitoring in part five, and CI/CD with Docker and Kubernetes as a bonus in part six.

This is the application overview, what we started building last time. The application is a very simple CRUD (list, create, update, delete) application built with ASP.NET Core: ASP.NET Core MVC as the front end, an ASP.NET Core Web API as the back end, and SQL Server 2017 on Linux, with all three running inside Docker containers.

What we did last time was containerize the application parts. We wrote a Dockerfile for the web front end and for the API, built the Docker images with the docker build command, ran those images, and pushed them to Docker Hub. Today we will continue with Docker Compose. We'll see how to bring all of this together using Compose and what Compose offers us. That's most of what I had on the slides, so let's go back to the coding bits. This will be purely hands-on; I won't show many more slides. If anybody wants to follow along and has a problem, just raise your hand. Since the last session, I've made some changes to the code that was already published to GitHub.
If you remember, last time when we built those images I told you what we would be building, the interface and the API, but we didn't really build the user interface. We just created the project and put it inside the container, and we didn't have any API communicating with the back-end database. Since that is not the core part of this series, I've taken code from my earlier projects and integrated it into the code base. I'll highlight what changes have been made; you can look at the code in detail if you want, and if you have any doubts, shout out. But it is very simple, basic code, so I don't think we need to go into much detail about which classes or .NET Core features I'm using. Let's not focus on that; let's focus on Docker Compose and the Docker containers as such.

The first change was connecting the API part to SQL Server. For this, obviously, I need a connection string. If you remember, last time I was spinning up the SQL Server container with a command that created a named container, sql1. In the connection string I'm using that same sql1 as the data source, the TechTalksDB database we created last time, and a username and password. Putting the password in clear text is not a best practice, so don't do this in production; it's just for the demo. There are better ways of handling this, and in future parts of the series we will see how. This is your typical .NET connection string, which would look similar if you were doing Java or any other language.

The next part was to add the package reference to the Entity Framework Core SQL Server provider and hook it up to the connection options. Let's look at this in Visual Studio Code itself. This bit has been done inside the API: I added a default connection string under the connection strings section of the application settings.
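The application settings change described here might look like the following sketch. The database name (TechTalksDB), the sa user, and the key names are assumptions reconstructed from the session; the password is a placeholder, not the real value:

```json
{
  "ConnectionStrings": {
    "DefaultConnection": "Data Source=sql1;Initial Catalog=TechTalksDB;User Id=sa;Password=<your-sa-password>"
  }
}
```

Note that sql1 here is the container name given to the SQL Server container in the last session, which is why the API can reach it as a host name.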
And this is the connection string I was just showing you. In the startup, in ConfigureServices, I register the DB context with UseSqlServer, using the default connection I added in the settings. I'm creating a context, and this is how I'm going to communicate with my database.

Then I have the models, which are the representation of what I have inside my database, the entities. This is exactly what is stored in the database at the persistence layer. And this is the DB context, which I'm initializing in my startup. Then there is the model which will be used by the UI. This is a completely denormalized form that has all the fields used on the UI side.

Then you have the controller, which does all the heavy lifting. It's again a very simple, basic ASP.NET Core Web API. You have a get-all method, which talks to the context and gets the full list of tech talks, including the category and the level. Then you have a single fetch; so one is like a select-all and the other a select by ID. You also have a create method for new tech talks, and there are the update and delete parts.

Once you have the API, the next thing is to connect it to the front end, which is our web project. Here I have the TechTalks controller, which talks to the back-end API. I'm storing the URL where it's going to connect to the API, and then it has the list method, which is the index, the default method, plus details, create, and delete. And then there is the representation on the UI side for the detail view. All these changes are checked in to the GitHub repository I shared last time, so if you want, you can pull them and merge them into your own repository if you are following along.
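The startup wiring described above can be sketched roughly as follows. The class and property names (TechTalkContext, TechTalk) are assumptions based on what was said in the session, not the exact repository contents:

```csharp
// Sketch of the DbContext registration described in the talk (names assumed).
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

// Minimal entity and context standing in for the real model classes.
public class TechTalk
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class TechTalkContext : DbContext
{
    public TechTalkContext(DbContextOptions<TechTalkContext> options) : base(options) { }
    public DbSet<TechTalk> TechTalks { get; set; }
}

public class Startup
{
    public Startup(IConfiguration configuration) => Configuration = configuration;
    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        // Register the EF Core context against SQL Server, reading the
        // "DefaultConnection" entry added to the application settings.
        services.AddDbContext<TechTalkContext>(options =>
            options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));

        services.AddMvc();
    }
}
```

The controller then takes TechTalkContext as a constructor dependency and queries it, which is the select-all / select-by-ID pattern mentioned above.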
I also have a view model, which is the representation used for binding; on the UI side you are building a strongly typed model rather than just loosely coupled objects. This is what will be shown on the default screen. And in the views, I've created a tech talk view, which binds to that DTO and shows all the details. So these are the changes that have been made to integrate the UI elements and bind them to the API.

To start, we can check which version of Docker Compose I have; I have 1.21, and yours might be slightly different based on when you last updated. So why do I need Docker Compose? If you remember, last time someone asked me whether, instead of typing all these commands, they could create a single shell file or batch file, and I said I'd show you next time how to do it. That's one reason. But the real reason is that we can combine multiple services.

Let's look at a sample Docker Compose file: what it looks like, and what benefits we get by having Compose. We start with a version; there are multiple versions of the Compose file format, and in this case I'm using 3. Then we have the services tag, where we list the services we want to compose together, the services my application depends on. For the time being, set the SQL client aside; we will use it when we get to Kubernetes in the future sections. Let's concentrate on sql.data. This is one service I need. I'm saying the image for it is the one provided by Microsoft, which we have been using since last time: what is the name of the container, what is the port, what is the environment. If you look at all of these, they are exactly the things I was typing by hand with docker run. What are the environment variables?
What is the port, what is the name, and what is the image? That information is now captured in the different attributes of the service. TTY, I'll come back to later.

Then we define Tech Talks web as another service, for the front end: what is the image for it, and what is the build? You remember last time I was going into the application folder and doing a docker build against the Dockerfile. This is where the build context comes into the picture: where the Dockerfile for Tech Talks web is located, relative to where my Docker Compose file is. So I'm giving a relative path here. I can also specify the dependency: my Tech Talks web front end depends on the API, so I say it depends on the service named Tech Talks API, and I specify which port I want to expose. Here I'm exposing the default port, 80, and just mapping it. It's almost the same for the Tech Talks API, except for the ports, where I'm using 8080. And what is the connection string? I wanted to cover this later, but I can also put the connection string in the environment. Whatever connection string I put in my app settings, I can override here. This is your runtime configuration for the connection string. By defining all of this, I have one single place that describes the different services my application has.

Once I have all this, I can run Docker Compose commands. I have the Docker Compose file here, and Compose follows a convention: if there is a file named docker-compose.yml, I don't have to give the file name. I can directly say build, and Docker Compose will build all the images for all the services that are specified, in one go. Now I have all the images built, and if you look at the output, it follows the order based on the dependencies we specified. For the SQL client, we did not put any dependency.
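Putting the pieces described above together, the single-file compose definition might look like the following sketch. The service names, paths, ports, and image tags are assumptions reconstructed from the talk (and the password is a placeholder), not the exact repository contents:

```yaml
# Sketch of the single-file docker-compose.yml described in the session.
version: "3"
services:
  sql.data:
    image: microsoft/mssql-server-linux   # pre-built SQL Server 2017 Linux image
    container_name: sql1                  # same name used with docker run last time
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=<your-sa-password>    # demo only; never clear text in production
    ports:
      - "1433:1433"
    tty: true

  sql.client:                             # kept for the later Kubernetes sections
    image: microsoft/mssql-server-linux
    tty: true

  techtalksapi:
    build:
      context: ./TechTalksAPI             # path relative to the compose file (assumed)
    depends_on:
      - sql.data
    environment:
      # runtime override of the connection string baked into the app settings
      - ConnectionStrings__DefaultConnection=Data Source=sql.data;Initial Catalog=TechTalksDB;User Id=sa;Password=<your-sa-password>
    ports:
      - "8080:80"

  techtalksweb:
    build:
      context: ./TechTalksWeb
    depends_on:
      - techtalksapi
    ports:
      - "80:80"
```

With this file in place, docker-compose build builds every image and docker-compose up starts all four services with a single command, as demonstrated next.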
So it's a standalone image; it doesn't depend on anything, and Compose picked up sql.data and the SQL client first. But the web depends on the Tech Talks API, so the Tech Talks API is built first and then the web. You can see it knows the dependencies we specified. Even if we list the services in any order in the Docker Compose file, these dependencies are taken into consideration while building the images; this is build-time behavior, not runtime.

Then I can spin up all these images together with docker-compose up. We have the web started, the API started, the SQL client also started because it's defined there, and the SQL server. So all four services defined in my Docker Compose file are up and running with just one docker-compose up command.

I can verify this by going to the browser and hitting localhost. It connects to the back end and fetches the data. We can see the logs here, and there is a failure. The reason is that we started this SQL server but never initialized the database. When the app tries to fetch the data, what you see is actually not coming from the database; it's static data I've put in, saying that if nothing comes from the back end, just show two dummy records. Let's fix this: let's get our database initialization script and run it on that SQL server. If you look at the databases, there are just the system databases at first. Now we have the database created, and if I refresh, the data is coming from SQL Server. We can verify that it's really coming from SQL Server: we have these three records, and this is what we see in the UI. Let's create a dummy one, say a paid conference at expert level, and create it. Now we are able to write data back to SQL Server as well, and all three parts are running inside containers. Any questions? Yes: how do we see the logs for an individual service when we compose like this? Good question.
The logs shown here are for all the services together. But you can go to another terminal and run docker ps, which shows you all the running containers, and then, say for sql1, run docker logs sql1 to get the logs for that particular running instance of the container. I can do the same for the others as well. Yes, it's normal Docker; all I'm doing is, instead of running docker build and docker run separately for each piece, composing them all together so that with one command I can start all the services.

I can also stop all of them, so let's stop and see. If I press Ctrl+C, the containers stop. But look at what happened initially when we did the up: some resources were created. It starts by creating a network, and that network is not removed by stopping. To remove the network as well, we use the reverse of the up command, which is down. So I can say docker-compose down, and it will bring down all the services and clean up. If you want to bring the whole thing down cleanly, this is how you do it. We can verify with docker ps that nothing is running now.

Now, I sometimes find it confusing to have one Docker Compose file with all this information: custom images I've built myself, pre-built images like the SQL Server one provided by Microsoft, build information for some services and none for others. It can be quite confusing to have everything in one place, so there has to be a better way, and Docker does offer one; this is where I feel Docker Compose shines. You can split the whole definition into multiple files. I can have a plain docker-compose file that is common across everything, where I put all my services, and then a separate file with what is required for building.
So this is my build file, with the three images I want to build: for each, the service name, the build context, and what it depends on. As I said, depends_on is only respected at build time here. And then there is the build context for the API. When I want to run this, I have a run configuration: once the image is built, I don't need the build context again, so I specify only the parameters required for running that particular container. I use the same service name, but the configuration I specify is for runtime. This is a much cleaner approach.

So how do we build with this approach? I have the three files here, and I use the same docker-compose command, but this time I specify the files I want to use: the common file plus the build file, and then build. Now we have built the images; to run them, I use docker-compose with the run version of the YAML file, and the same up and down commands bring everything up and down. I need to initialize the database again because the container was stopped and recreated; you can see the TechTalksDB was created here. And if we go back, it should have the three records again.

Yes, down just stops what is running and cleans up. And yes, I am just separating the files here; the end result is the same, there is no change in the overall process.

The last part I wanted to show was service discovery. When we use Docker Compose, if you look at the compose definition, it creates a sort of software-defined network. When we do up, you can see that it creates a network. This is an internal network that Docker Compose creates.
And using this network, Compose gives me the flexibility to use the service names directly. So instead of giving sql1 as the name in the connection string, I can just use the service name I defined in my compose file, sql.data. And just to confirm that we are really using the environment setting from the compose file, let me go to the API and remove the setting completely from the app settings; I'll just make it blank. We build once again, and now you can see that, using service discovery, it's able to communicate with those services. I don't really need to know the container names at all.

That's all I had for today. Let me check whether there was anything else on my checklist to show. I showed you how to connect to SQL Server, create and delete records, and Compose with a single file. I didn't show the push, but if you run docker-compose push, whatever images are built will all be pushed together. We saw how to use Compose with multiple files, environment variables, and service discovery, end to end. Now let me commit everything to the GitHub repo so that anybody who wants to pull this can do so; these are just the usual git commands, git add, commit, and push. Everything is available in GitHub, so you can go and pull it.

So we saw all of this: single-file Compose, build, up, down, and multiple files. Here are some references: the demo code, the Docker Playground I shared last time (if you want to play around online, you don't have to download Docker), and some references for Docker Compose.
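As a closing recap, the service-discovery change demonstrated in this session amounts to a small environment override in the compose file. The sketch below uses ASP.NET Core's standard double-underscore mapping from environment variables to configuration keys; the service and database names are assumptions from the talk, and the actual variable name in the repository may differ:

```yaml
# Runtime override sketch: the service name "sql.data" resolves on
# Compose's internal network, replacing the container name "sql1".
services:
  techtalksapi:
    environment:
      - ConnectionStrings__DefaultConnection=Data Source=sql.data;Initial Catalog=TechTalksDB;User Id=sa;Password=<your-sa-password>
```

Because Compose provides DNS for service names on its network, the API no longer needs to know which container name the database happens to run under.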