Okay, so what we'll be doing during this session. This is a six-part series which I'm trying to do. Before I start on the six parts, I'll talk about myself. My name is Nilesh Gule. I work as an architect at an insurance company. I write my blog at handsonarchitect.com, and I have a GitHub repository which is linked there, so you can easily find me through these social media channels.

So what is this series all about? We'll be starting the AKS learning series. I'll be using Azure Kubernetes Service, which is newly launched — it was in preview for quite a long time, and last week it went into GA. So this is six parts. Although the event description mentioned we'd be using ACS, I will not be using ACS; I'll be using AKS, which is Azure Kubernetes Service.

The first part is today, which is purely Docker. We'll see how to containerize applications using Docker — a very basic introduction. I'll start by writing a .NET Core application and putting it into containers. If you already know a bit of Docker, this will be a kind of refresher for you, so please bear with me during this session. In the next one, we'll use Docker Compose, which is a mechanism to stitch multiple containers together. Then we'll start using Kubernetes: we'll use Minikube, which is a local version of Kubernetes, to test locally on a single machine. The last three parts — part four, part five, and part six — will all be on Azure. We'll deploy onto the managed cluster; parts four and five will be about debugging and monitoring, and there I'll be using OMS, Operations Management Suite, provided by Microsoft on Azure, to monitor the containers and the whole cluster. And as a bonus, we'll also look at a CI/CD pipeline using Docker and Kubernetes in part six.

So what does the application look like? It's a very simple .NET application.
If you were at the Azure Bootcamp, or by chance at the workshop, it's the same application I've been using for my earlier demos. It's an MVC application using .NET Core. It has a list of talks you can see, and you can create new talks in it. It uses ASP.NET Core MVC, with a .NET Core Web API backend. These two communicate with one another, and then there's SQL Server 2017 running on Linux. And all of this we'll be running inside Docker containers.

Okay, so without spending any more time on the slides, let's see what we need to get started. I've created a repository on GitHub, and I'll be using this repo, the AKS learning series. You can find the source code here after the session; you can download it and enhance it if you want.

So how do we get started when we want to containerize an application, and how do we find the right source of information? To run Docker, you need the Docker engine — that's what I put in the prerequisites. You need a local installation of Docker, either Docker for Mac or Docker for Windows. That's the service which is running. Then you have images, which are pulled from Docker Hub, which is a Docker registry. We can search for images once we have the client running. I can go here and first verify that I have the right version of Docker. This is the Docker command line: give me the current version. I can also use the long syntax, --version, to get the current version of Docker. I can also run docker info, and this will give me a lot more detail about the Docker installation on my system. To search for images, I can use the base images provided by someone like Microsoft — or if you're using Java, then Oracle; if you're using the Go language, then from Go. So let's search for some of the Microsoft images.
I can do it from the command line: docker search microsoft. And if I'm looking specifically for, let's say, an ASP.NET Core image, I'll get a list of all those images provided by Microsoft. You can see the stars associated with these images, and whether each one is official or not. There are people creating their own images and putting them on Docker Hub; you can either use the official image or a custom-built one. The other option to search for images is to go on the web and log in to Docker Hub. You can search for the same image there — Microsoft — and you should get the same results whether you do the search using the client or the web interface, and then you can find the details about these images. If you go to the ASP.NET Core image, you can find information like when the image was published and which operating systems it supports. What are the different tags associated with this image? You can also go to the Tags tab to find additional details, like when each tag was published and what the size of the image is. You can also find out how the image was built. So on the web you get a lot more detail compared to the Docker CLI on the client side.

When it comes to .NET Core, there are various versions of the images, and sometimes it's difficult to understand which one to use. Recently there was a very nice article published about this — I have it in the references, so you can look at it. It explains which kind of image to use for which purpose. Basically, there is the .NET Core 2.1 runtime image, there is the runtime-deps image with dependencies, there is the ASP.NET Core 2.1 image, which was recently released, and there is the SDK image. The basic difference is that you use the SDK image at build time; when you want to deploy the code to production, you don't need all the build tools in your production version.
So you use the image which is optimized for production.

Let's start with the application, and we'll build in some of the core concepts here. Again, I'll use the dotnet CLI. This works on Windows as well as Mac, so you can follow the same commands. I can verify which version of dotnet I have — it's 2.1. If I want to create a new solution, I can use the dotnet new sln command and give it a name. Let's create a solution called AKSLearningSeries. This will use the solution template and create a solution for me — there's just a solution file. I can add a project to this using dotnet new. Let's create an MVC application. If I don't know the syntax, I can just say dotnet new --help and it will give me all the options for the new command. I'll start with the MVC application for the front end, so I say dotnet new mvc, and let's call this TechTalksWeb. It creates the controller, model, program and startup files, the project file, and default settings.

[An attendee has trouble running the command.] You couldn't get the command to work? You're using Linux? There is a .NET Core version for Linux you can install. Alan will help you with that.

Okay, so we have the application created. I can use Visual Studio Code to open it — let me just go one level up and run code in the current directory. Okay. So this is the solution file and this is the project that was created. Now let's go and build this. I can do it in two ways: I can build the complete solution — I run the dotnet build command and it builds the whole solution — or I can go into the project folder and run the same command there. It works by convention.
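The scaffolding steps above can be collected into a script. This is a sketch — the solution and project names follow the demo, and actually running the script requires the .NET Core 2.1 SDK, so here the commands are only written to a file:

```shell
# Write the dotnet CLI scaffolding commands from the demo to a script.
# Running it requires the .NET Core SDK; here we only generate the file.
cat > scaffold.sh <<'EOF'
#!/bin/sh
dotnet new sln --name AKSLearningSeries    # empty solution file
dotnet new mvc --name TechTalksWeb         # ASP.NET Core MVC front end
dotnet sln AKSLearningSeries.sln add TechTalksWeb/TechTalksWeb.csproj
dotnet build                               # builds the whole solution
EOF
chmod +x scaffold.sh
```

The same sequence works unchanged on Windows and Mac, which is the point of using the dotnet CLI rather than an IDE.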
So it takes the .csproj project file. Okay, the build is successful. So far so good for those who are following?

Okay, so let's run this application and see what it comes up with. To run it, within the same folder, I can say dotnet run. The application is up and running on localhost:5000, so I can go to it. That's the application created by the default .NET Core template.

Okay, so now we have the application ready. How do I put it into a container? To start with, I need to describe the mechanism by which I'm going to package it, and Docker provides us with something called a Dockerfile. It's basically a template which tells Docker how to package the application with all its dependencies. So let's look at the Dockerfile. I'll put this at the same level as the solution file, and by convention I name the file Dockerfile.

It starts with FROM. FROM basically tells Docker which base image I'm going to use for containerizing this application. I'm going to use dotnet 2.1.300 — that's the tag provided by Microsoft, so this is the official image provided by Microsoft. And it's the SDK image: because I want to build the application, I use the version of the image which has all the build tools in it. Then I name this stage build-env — this is a stage, and I'm naming it as my build stage. Then I need a NuGet.config. This is basically where the application will pull all its dependencies from. I don't have this right now. Locally, there is a default NuGet cache available, stored on disk, so the build works — the .NET framework knows where that cache is. But within the container, it doesn't know where the cache is, so I need to explicitly create this file called NuGet.config and specify where the dependencies are located. So let's go to this file.
The dependencies are at these NuGet package repositories — the feed URLs are specified here. It's going to pull the packages from these two feeds when I add dependencies to my project. Then I specify the working directory — let's change this to TechTalksWeb. Then I do a dotnet restore. This looks at my .csproj file, sees what dependencies it has, and restores all of those dependencies for me. If you're coming from a Java background, this is like a mvn install — it gets all the dependencies for you. Then I copy all the contents of the TechTalksWeb directory into the container, and then I run the dotnet publish command. I do this in the Release configuration, and I put the output in a folder called releaseOutput.

Once the output is generated, I need to put it into the runtime version of the image. That again is provided by Microsoft: dotnet 2.1 aspnetcore-runtime. This does not contain any build tools — it's purely the .NET runtime itself, which is what you want in a production-like environment. I create a working directory within that image, and then I copy the build output from the previous stage. If you look here, build-env is my stage, so I say: from build-env, take the releaseOutput and copy it into this new image. And finally, ENTRYPOINT is the first DLL or executable that should be triggered when a container is created from this image.

Let's go back to the command line and the Docker side. Let me show you some of the images I already have — if you have Docker installed, you can run the same command. I can say docker images. This gives me all the images I have locally on my machine, whether downloaded from Docker Hub or created by myself.
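Put together, the multi-stage Dockerfile described above looks roughly like this. It's a sketch — the image tags and folder names follow the demo, and the COPY paths assume the Dockerfile sits next to the TechTalksWeb folder. Here the file is only written out, since building it needs a Docker daemon:

```shell
# Generate the two-stage Dockerfile described in the demo.
cat > Dockerfile <<'EOF'
# Build stage: the SDK image has all the build tools
FROM microsoft/dotnet:2.1-sdk AS build-env
WORKDIR /app
COPY NuGet.config ./
COPY TechTalksWeb ./TechTalksWeb
RUN dotnet restore TechTalksWeb/TechTalksWeb.csproj
RUN dotnet publish TechTalksWeb/TechTalksWeb.csproj -c Release -o releaseOutput

# Runtime stage: only the .NET runtime, no build tools
FROM microsoft/dotnet:2.1-aspnetcore-runtime
WORKDIR /app
COPY --from=build-env /app/TechTalksWeb/releaseOutput .
ENTRYPOINT ["dotnet", "TechTalksWeb.dll"]
EOF
```

Only the runtime stage ends up in the final image, which is what keeps the production image small.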
And if you look at the Docker commands — let's go back to the Docker docs — there are top-level management commands, and there are also specific commands for containers and images. I used a top-level command here, docker images. In the same way I can use docker network or docker system commands. Let's look at another commonly used command: docker container. When I run docker container ls, I don't get any output. This is because I don't have any container running.

So what's the main difference between a container and an image? Anybody who has used Docker can help answer this. [Audience: Basically the image is the file that contains your application and the operating system; the container is the running form of the image. It's like an object and a class.] Yeah, exactly. An image is just a template — the complete application containerized, like a box. And when you unpack it, when you run it, that's when you get a container.

So let's create a container. But before I create a container, I should take the Dockerfile I just created and build the image. Let me go into the folder — I'm at the top level, let's go into the project folder. Here you can see I have the Dockerfile, so I can run the docker build command. I need to give a tag for this image — if I don't give a tag... let's build one without any tag first. I say docker build and dot; dot is the current context. You can see the steps it's following: whatever I described in the Dockerfile, it executes those steps and creates an image.

Okay, it failed. It failed because it could not restore — "this is not a project or a solution." [Audience: I think it's failing at dotnet restore — what project does it find inside? Is it because you changed the working directory?] I changed it to the current one. Okay — this should be TechTalksWeb for this one.
[Audience: That should also be TechTalksWeb.] It's already inside that — the Dockerfile is at the same level as TechTalksWeb. It's copying TechTalksWeb inside the container, doing a dotnet restore, then a dotnet publish. It didn't reach there; it failed even before that, just after the restore. [Audience: You copied first and then ran the restore?] I'm copying... okay, let's copy the project first, then restore. Yeah. Okay, that's good. That's it.

This is happening because on my local machine I have the bin and the obj folders, and these should not go into the container. So just as we have .gitignore files with Git, we can also have a .dockerignore file for Docker and specify which directories should be excluded while building the image. I need the bin and obj folders to be excluded, so I put a file called .dockerignore at the same level. Now it builds successfully.

So I can now say docker images, and you can see an image built without a tag, created seconds ago. Ideally what we do is tag the images. In this build command I can specify a tag; a tag basically contains the name of your repository and then the name of the image. Let's say this is techtalksweb. I can also specify a version number if I want to build and publish images. So now I have the v1 version of the techtalksweb image built.

To run this image, I can use the docker run command: let's say --name v1, and map a port here to port 80 in the container, and the name of the image I want to run is techtalksweb:v1. The application is started, and I can go to the browser and say localhost — and you have the application running from within the container now. Any questions? [Audience: Is it running on IIS?] No IIS. This uses the built-in Kestrel web server provided by .NET Core, and it's hosting this particular application on Kestrel.
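The ignore file from a few steps back is tiny — a sketch, assuming the bin and obj folders sit under the project directory:

```shell
# Exclude local build output from the image build context,
# the same way .gitignore excludes files from Git.
cat > .dockerignore <<'EOF'
bin/
obj/
EOF
```

Without this, the locally built bin/obj contents get copied into the build context and can break the in-container restore, which is exactly what happened in the demo.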
To stop this, I can press Ctrl+C. You saw that it took hardly two or three seconds to start the container and get the application up and running. If I want, I can run the same command again — and it fails, because there is already a container with the name v1. So I should delete that container before I can restart with the same name. To delete the container, I run docker rm — and v1 is deleted. Now if I run the same command again, it starts almost immediately. To stop it, I can run the docker stop command.

You can also find various details about an image or container. There's a command called docker inspect which gives us a lot of detail about what's inside a particular image or container. I can run the inspect command against the different kinds of resources Docker provides. On this container, it gives very detailed information: what the arguments are, the path of the image, the current status of the container, which version of the image is used, the hostname, and a lot of other information.

Let's remove this container once again, and I'll make a small change and create another version of the application. Let's go to the Views, and let's say I don't like that carousel control — just comment out the whole carousel — and let me go and create another version of the image. So instead of v1, I'll create v2 now. The new version of the image is created; let's run it — --name v2, map the same port — and it's running. Now you can see that the carousel control is gone.

The advantage of having this is that you can now run both versions of the image side by side. I can go back here and start version one of the image: docker run, --name v1, and just change the port, because I can't use the same port here. So what I'm doing here is mapping 8081 on my host to port 80 on the container.
So the container still exposes 80, but I map it to a different host port. And now I have the application here: localhost:8081. And this has the earlier version of the image — the one with the carousel control. So you can imagine: if you have the same environment and you want to test two different versions of the application side by side, it's quite easy with Docker. It has a very small footprint and gives you that flexibility. You can test these things really fast — you don't have to wait for some other team to deploy your application, you don't have to raise a ticket and go through the whole process.

So what is the size of the image? I usually get this question when I do these kinds of demos: okay, what's the size of the images you're producing? So let's say docker images and look at the v1 and v2 versions of the image. It's about 258 MB — that small. And if you look at the base image, the one we used to produce this — the Microsoft .NET aspnetcore-runtime image — that's 255 MB. So the base image is 255 MB, and my application adds about 3 MB of code on top. With less than 260 MB I can run the application using Docker. Any questions?

Now let's build the second part, which is the Web API. Looking at Docker right now, I can do docker ps to see which containers are running — there's the v2 version of the image running. Let's stop it: docker stop v2. Instead of the name of the container, I can also use the container ID. I don't have to give the full cryptic ID, like 76f4...; it just has to be a unique prefix. So if I give 76f, that should stop the container. I can use both approaches: either the container ID or the name of the container.

There's also a handy command. If I go back again and say docker images, you'll see there are some orphaned images, with none in the repository name and none in the tag name.
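The side-by-side run boils down to two docker run commands with different host ports mapped to the same container port. Collected here as a script sketch (only written to a file, since executing it needs a local Docker daemon; image names and tags follow the demo):

```shell
cat > side-by-side.sh <<'EOF'
#!/bin/sh
# Both containers expose port 80 internally; the host ports differ,
# so v1 and v2 of the same app run at the same time.
docker run -d --name v2 -p 8080:80 techtalksweb:v2
docker run -d --name v1 -p 8081:80 techtalksweb:v1
EOF
chmod +x side-by-side.sh
```

With both running, localhost:8080 serves the new version and localhost:8081 the old one.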
So these are intermediate images created during the build, and I don't really need them. I can run a command, docker system prune, which deletes all the stopped containers, all the networks which are not in use, all the dangling images, and anything else that's not required. So in one command I can delete all those orphaned images and leftover bits. You can also run the individual commands — docker network prune, docker container prune, docker image prune — but this is one command which covers the whole system.

Let's try to push this image to the registry, Docker Hub. For that, I need to log in to Docker Hub. The username is correct, so I'm logged in, and I can push — let's push the v2 version of the image: docker push with my Docker Hub repository name and the v2 tag. If we go to Docker Hub — let me sign in — you can see I don't have a v2 version on Docker Hub yet. So let's go ahead and push this image. The first time you push an image it might take some time; depending on your network speed, it can be a bit slow. But if you already have a version of the image pushed to Docker Hub, it only pushes the changed layers, so you don't have to worry so much about subsequent pushes. The push is done, and if I refresh here, I should see the v2 version. [Audience: Does it have a tag?] Yes — you can see v2 here. So if you want to try, you can pull this image right now, whoever has Docker, and you should see the same output I showed you. Even if you don't have .NET Core installed on your machine, just with Docker you should be able to see the same output. Yes, everything will be on GitHub — let's do it right away from this room.

Now let's build the Web API. I do dotnet new again, and this time I use the webapi template — webapi is the template, and I give the name of the API project.
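The clean-up and push sequence, as a script sketch. The Docker Hub account name is a placeholder you must substitute, and the script is only written out here since pushing needs a daemon and credentials:

```shell
cat > prune-and-push.sh <<'EOF'
#!/bin/sh
# One-shot cleanup: stopped containers, unused networks, dangling images
docker system prune -f
# Push v2 to Docker Hub; replace <dockerhub-user> with your account name
docker login
docker tag techtalksweb:v2 <dockerhub-user>/techtalksweb:v2
docker push <dockerhub-user>/techtalksweb:v2
EOF
chmod +x prune-and-push.sh
```

After the first push, only changed layers are uploaded on subsequent pushes, which is why they are much faster.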
So we'll have to do the same steps of creating the Dockerfile. Let's copy the Dockerfile and the ignore file, and put them inside the API project. The ignore file remains the same; in the Dockerfile I just change the name of the project. This time I'll tag it straight away, but let's not give any version number. By default, Docker tags it as latest: if I don't specify v2 or any number, it goes out as the latest version. I keep the NuGet.config the same. So the tag is given as latest and I don't specify anything else. Let's run this and map a different port here. The default webapi template returns two values from the HTTP GET endpoint — so this is the HTTP GET hitting that endpoint, and it's returning those two values. All good, or did I lose some of you?

Okay, so let's move on to the third part, which is containerizing the SQL Server database. For this I'll use the SQL Server image provided by Microsoft — the SQL Server on Linux version. For this, I'm not going to create the image: it's an image provided by Microsoft itself, and I'm going to reuse it, so I don't have to build anything inside this image. So let's create this. What I'm doing here is straight away running my docker run command and passing environment variables — two of them here: accepting the end-user license agreement with yes, and the SA password. Then I expose the default SQL Server port, 1433. I name the container sql1, and I use the SQL Server Linux 2017 latest image. I'll come to the -d flag in a while. So this is up and running now — SQL Server is running. How can we verify this? I'll use SQL Operations Studio, which is a cross-platform tool for database connections. Here you can see I'm connected to SQL Server 2017 now, running inside the container.
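That docker run invocation, as a script sketch. It's only written to a file here, since running it needs a Docker daemon; the password is a placeholder, and the image name is the Linux SQL Server image Microsoft published at the time:

```shell
cat > run-sql.sh <<'EOF'
#!/bin/sh
# Detached SQL Server 2017 on Linux container, named sql1,
# with the license accepted and the default port exposed.
docker run -d --name sql1 \
  -e 'ACCEPT_EULA=Y' \
  -e 'SA_PASSWORD=<YourStrong!Passw0rd>' \
  -p 1433:1433 \
  microsoft/mssql-server-linux:2017-latest
EOF
chmod +x run-sql.sh
```

Any SQL client can then connect to localhost,1433 with the sa login, exactly as with a normally installed SQL Server.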
And it has the default system databases. [Audience: You didn't mount a volume?] No, not in this version — we'll talk about volumes in the next session. When you run the SQL Server image without a volume, the database is created inside the container. [Audience: So it will disappear if you remove the container?] Yeah.

Okay, so let's run the database creation command here. I have a small script which creates the database — nothing fancy about it. It checks if the database exists, creates a database called TechTalksDB, and then creates three static lookup tables: the categories of talks, the conferences, and the levels of the talks. It also populates the default talks. Let's run this — I have the database created and also populated. It's the same as the way you work with a normal SQL Server; here it's just running inside the container.

So this is one way. There's another way, where you can connect using the SQL command-line tools. I have a blog post written about this — you can look at my blog; there are a couple of entries specifically about SQL Server 2017, how you can use it with your application and how you can customize it. There I talk about not running a script like this manually, but running the script as part of the container startup itself: when the container starts, it executes this script, and you have the database up and running.

So let's go back and talk a bit about the detached flag, the -d option here. What this does is run the container in the background. If you pass -d, you just see the ID of the container printed — you don't have interactivity with the container. If you want to interact with this running container, you can use the exec command to get a shell inside it. So I give the name, sql1, and the command I want to run, which is bash.
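The creation script isn't shown in full in the session, but from the description it has roughly this shape. This is a sketch: only the database name and the idea of three static lookup tables plus seeded talks come from the talk; the table and column names here are illustrative assumptions.

```shell
cat > create-database.sql <<'EOF'
-- Create the database only if it doesn't exist yet
IF DB_ID('TechTalksDB') IS NULL
    CREATE DATABASE TechTalksDB;
GO
USE TechTalksDB;
GO
-- Static lookup tables described in the demo (names are illustrative)
CREATE TABLE Categories (Id INT PRIMARY KEY, Name NVARCHAR(100));
CREATE TABLE Levels     (Id INT PRIMARY KEY, Name NVARCHAR(100));
CREATE TABLE TechTalk   (Id INT PRIMARY KEY, Title NVARCHAR(200),
                         CategoryId INT, LevelId INT);
-- Seed a default talk
INSERT INTO TechTalk VALUES (1, N'Introduction to Docker', 1, 1);
EOF
```

You can run a file like this from SQL Operations Studio, or with the sqlcmd command-line tool against the containerized server.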
So I'm inside the container now, and I can run normal commands here. These commands are running inside the container, not on my host. I can come out with exit — that brings me back to the host — and I can still see that the container is running: docker ps, or the other option, docker container ls, should give me the same output. If you want to start the container in interactive mode, that's also possible: you just change -d to -it to run it in interactive mode. What happens then is that the container starts and you get a terminal directly inside the container — you can run commands without doing the exec step. Any questions?

[Audience: What terminal is that?] This one is iTerm, a terminal for Mac — that's why it looks different. [Audience: How did it complete that long command?] Oh, that's because it keeps history — if I've run the command in the past, it suggests it from the history. [Audience: Can you save the command and run it from a file?] That's right, you can. You could put this whole command into a file and execute it, like a shell script — it would be .sh on Mac, and on Windows it would be a .bat file. [Audience: Can't you do docker run and then the file name?] No, that option is not available. I'll show you in the next session how you can get away from this using Docker Compose. But for now, for the beginners, let's do it this way, so that you understand which commands are being executed. Yes?

[Audience: A question — how do we configure the networking part in Docker? For example, if I want to map an address, configure it...]
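The exec and interactive variants from this step, collected as a sketch (written to a file only — running them needs the sql1 container and image from earlier):

```shell
cat > shell-into-container.sh <<'EOF'
#!/bin/sh
# Open a bash shell inside the already-running sql1 container
docker exec -it sql1 bash
# Or start a fresh container interactively (-it instead of -d),
# overriding the entrypoint so you land in a shell
docker run -it microsoft/mssql-server-linux:2017-latest bash
EOF
chmod +x shell-into-container.sh
```

Inside either shell, every command runs in the container's filesystem and process space, not on the host; exit returns you to the host while a detached container keeps running.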
I'm not a networking guy, so I may not be the best person to answer this. But what I know is that Docker has four different types of networks: there's the host network, there's bridge, there's overlay, and there's another one whose name I don't remember. So basically, these are the different types of network it allows you to create. If you use host, it uses the host's networking. If you use bridge, it creates a bridge — let's say there are multiple nodes where you're creating a cluster, it basically creates a bridge between those nodes. Overlay I don't remember exactly, but it's something similar; when you're in cluster mode, you either use the overlay or the bridge network. So there are three or four different ways in which you can handle networking in Docker.

[Audience: Another question — what's the advantage, or the use case, of having a database in a container?] Speed. How many times do you wait for your infrastructure guys to create a machine, install the database, and give you access to it? How much time does it take if you're starting a new project and you need a SQL Server database? [Audience: That depends on the database size — if the database grows, do you think it's a good idea to have it in a container?] No. A database within a container is right now not even recommended by Microsoft. If you look at the container use case, it's mainly for development, to accelerate your development, and for testing, to speed things up and get that agility into your development workflow. But I would not recommend putting it in production. [Audience: So in the meantime, we can configure our persistence as volumes, so that we keep our data — or tear it down?] We'll talk about persistent volumes in probably part four or five of this series. Okay, any other questions? If not, let's go back to the presentation and see what we have.
So this is what we saw: the Docker multi-stage build using stages — the build stage and then the release stage. We saw how to build the image, how to run it, and how to push it to Docker Hub. Then this is the workflow which typically comes into the picture — this is called the inner-loop development workflow. We write the application, then we write the Dockerfile, then we create the images. We didn't get to the fourth part, which is Docker Compose — you'll see that in the next session. Then we run the composition; this is basically for when you have a complete set of services and you want to run them together. Otherwise, you can do the same thing using individual docker commands. Then you test, and once you're okay with all of this, you push the changes to the repository. So this is the workflow typically used in a containerized environment.

Here are some references. The demo code — I've pushed part of it, and the rest I'll push to GitHub right away. If you don't want to install Docker but just want to play around — say you don't have admin rights on your laptop, or you're at the office and not allowed to install Docker — you can still run a very basic set of commands online: there's the Docker playground. Then, to get started, there are the docs provided by Docker itself. There's a lot you can customize with multi-stage builds, so I've given the link for multi-stage builds. Then there's the article about the differences between the images — the link I showed you about when to use which kind of image. And there are some cheat sheets: the commands I showed you are basically available as downloadable PDF files or GitHub gists. And I put the Twelve-Factor App here, because following the principles of the Twelve-Factor App is one of the best practices when you're building microservices or container-native applications. One of its principles is that you should write logs to the console.
And that's one of the things we'll follow in the subsequent sessions. So these are some useful links which I found good when I was building this content, and I hope they help you.

There's one more thing I can show with Docker. When you run containers, I can see the logs of the containers. If I do docker container ls, the SQL Server container is still running, so I can go and look at its logs: docker logs, and I give the name of the container. And you can see all the logs produced within this container. You can even tail these logs if you want. So this is again one of the useful features which comes into the picture when you start debugging. [Audience: Where are the logs stored?] Inside the container at the moment. We haven't gone outside the container anywhere — whatever I've shown today is all running inside containers. In a later part we'll see how we can store things outside of the container and how we can share things between your host and the container. So that's all for the day. Thank you.

[Audience: Inside the container — not about the performance or the workload, but whatever you are running in it, can you monitor it?] If you can hold that question for part five, I'll show you live. But basically, .NET Core has Application Insights compatibility: you install the package, reference it in your application, and then your application produces telemetry which is available for outside consumers. So that's one way, where the application itself produces the telemetry. There are other ways where you monitor the container and the application externally.
[Audience: I mean, if you want to have more insight into the application — what's going on inside, for example when there's an issue?] Yes, that kind of thing. On the container side there is the OMS offering — Operations Management Suite. That has a container monitoring solution, so you can use that.

So, a small request: if you could fill in the feedback form — it's just five questions, three mandatory and two optional, and it won't take more than two minutes — it would help me as well as the organizers to improve things in the future. Thank you. I'll publish the code on GitHub, and the presentation will again be available on the slide deck site; I'll put the link there as well as on GitHub. Hold on, hold on — I think he wants to scan it. Okay. Before you go, please clean up the cups if you brought in cups.