Let us get started. Sorry for the slight delay. Am I audible at the back? Yeah. How many of you were there for the previous sessions? Yeah, I know you. Anybody returning from a previous one? Okay, in that case I will do a quick recap of what we did in the previous sessions. Yes, we did five parts and all five were recorded. Our friend Alan has helped us to record these, and all five are available on Engineers.SG, so you can view the recordings of all five past sessions in this series. I will quickly go through what we did so far. That is a little bit about me. Since many of you are new: my name is Nilesh, I am a Microsoft Azure MVP, and those are my social contacts, so if any one of you wants to connect with me, feel free. In this series so far, we did Docker in part 1 and Docker Compose in part 2. We did container orchestration with Minikube in part 3. Then we moved on to AKS: we deployed a multi-container app, which is three containers basically, a web front end, a web API and SQL Server 2017 running on Linux, all inside containers. In part 5 we looked at debugging and monitoring using OMS, the Azure monitoring solution, as well as an open source solution in Prometheus. This one is part 6 and we will look at CI/CD, continuous integration and continuous deployment, of Docker containers and Kubernetes. I call this a bonus because it is not directly related to the initial five parts; it is more about how we can use Kubernetes and Docker. The concepts that I explain here can be applied to any other project as well; they are not specific to the earlier five parts. So even if you missed the earlier five parts, that is not really a prerequisite for this one. What we will be doing today is looking at how we can implement a continuous integration and continuous deployment process with Docker containers and Kubernetes. Let us start with the State of DevOps report. How many of you have heard about the State of DevOps report?
So this one is published yearly by Puppet, and they basically look at what major companies are doing in terms of DevOps implementation: how the tools and technologies are being used. Here are some of the key highlights of the State of DevOps report which was published this year, where they compare the elite performers against the low performers. We have all heard about companies like Facebook and Netflix doing thousands of releases a year. We have gone from, let us say, one release every two years to doing multiple releases in a month, or even in days. So how are people doing these kinds of things? These are some of the factors this report looks at, and they try to give ratings across different tools and technologies. As per the latest report, we see that the elite performers do 46 times more frequent code deployments. They have about 2,500 times faster lead time for releases, that is, when you commit something into your code repository it gets deployed to an environment and you get fast feedback. They have a seven times lower change failure rate, so if something goes wrong you can find out very quickly that something went wrong with your latest changes. And for time to recover, in case of incidents, people are able to recover almost 2,600 times faster. I have given a link here; you can go to that link and find more details about this State of DevOps report. This picture tells you what DevOps mainly is. It is not separate phases; it is a continuous cycle where you plan, build and deploy. You do continuous integration, which is at the code level where you integrate everything together, and then you go into deployment, which is continuous delivery, and then operation and monitoring. So you do not just stop at deploy: you look at different metrics when you operate in production, and then you incorporate some of those learnings back into the cycle.
So here are some of the most commonly used or popular tools for each of the phases in this pipeline. I would not be able to go into each one of these; as you can see, it might take months to cover all of them. Instead, I will show some of these concepts, mainly around continuous integration and continuous deployment, using Azure DevOps. Some of you might have heard about VSTS, Visual Studio Team Services, in the past; that was the online version of TFS. This has been rebranded and now we have these five main core components which we call Azure DevOps. It consists of Azure Boards for your planning and tracking. We have Azure Repos, so we can have private repositories. Azure Pipelines is where we define the continuous integration and continuous deployment pipelines: how the code flows from your source control all the way to your target deployment environment. Then there are Azure Test Plans and Azure Artifacts. If you work with Java, Go or Node, they have their own package repositories, like npm for Node or the Java repos. Azure Artifacts is similar to that; it is like a NuGet repo where you can store the artifacts of your release. What we will be doing today follows the flow at the top. We will use Visual Studio Code as the code editor. The code for this particular series is available on GitHub. I will be pushing a change or multiple changes to GitHub, which will trigger a continuous integration build. Everything will be built on a dedicated server, not on my local machine; this is to make sure that whatever I check in works fine on a dedicated continuous integration machine. That will produce the Docker images for my application, and these will get deployed to a Kubernetes cluster. Any questions so far? So let us get started. What do we need to build a continuous integration pipeline?
The first thing we need is to connect our GitHub repository to Azure DevOps. This is the UI for Azure DevOps. I go to dev.azure.com, already logged in; if you are not logged in you will be asked to log in, and you will be brought to this screen here. Here I can create a new project, and I have just started a live demo project. When I create a project I get all those things which I described earlier: the boards where you can build your backlogs and sprints and create custom queries, the repos and the pipelines, and each one of these has its own sub-components. We will be focusing mainly on the pipelines today, continuous integration and continuous deployment. So I need to start a build pipeline; that is the CI build. I will first build my code, and to build it I need to define how I want to build it. These are the different steps I define for building my code. In Azure DevOps we have support for various things; it is not just Microsoft-specific. Here, if you see, for building we can build .NET, Android, Docker, Go, Gulp, and there is Java support, so it is really cross-platform. That is one of the reasons why Microsoft changed from VSTS to Azure DevOps: we are no longer limited to Microsoft-specific products, you can also build other kinds of projects apart from .NET and .NET Core. And there is a varied set of tasks provided by Azure DevOps, for testing, for utility, for packaging; these come out of the box. Then there are others, like SonarQube, or for testing there is HockeyApp; these come from the marketplace. They are not provided by default by Microsoft, but other players in the market provide these tasks, and we can use them in our build cycle. As part of my build, the first thing I want to do is create those Docker images. So the first step is to connect my repository to this build.
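As a rough sketch, the build pipeline assembled in the UI here corresponds to something like the following YAML definition. This is only an outline under assumed names; the demo itself uses the classic designer rather than a YAML file:

```yaml
# Illustrative outline of the CI build; task versions and names are assumptions
trigger:
  branches:
    include:
      - master              # any check-in to this branch triggers the build

pool:
  vmImage: 'ubuntu-16.04'   # Linux agent, since we build Linux Docker images

steps:
  - task: DockerCompose@0             # build the custom service images
    displayName: Build service images
  - task: DockerCompose@0             # push the images to Docker Hub
    displayName: Push service images
  - task: PublishBuildArtifacts@1     # publish Kubernetes manifests and Helm charts
    displayName: Publish artifacts
```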
And here I have established a connection to GitHub: I give my credentials and I select the repository, so this is the AKS learning series, and the branch which I want to build. Any time a check-in happens on this branch, it will trigger an automated build here, and this will pull the latest version of the source code as part of this agent job. I also define, as part of my agent, where I want to run. In the agent pool I can specify which agent I want to run my job on: is it macOS, is it Ubuntu, is it hosted Visual Studio 2017? In this case I will be using Ubuntu, because I am using the Linux versions of the Docker images, so I need an Ubuntu machine to build those. And as part of the tasks, I want to build my images. So I go here, add a task and select Docker; there is a Docker Compose task, and that is what I am using here to build the service images. I need to tell it what kind of container registry I am going to use: am I going to use Azure Container Registry or some other third-party registry? In our case, we are going to use Docker Hub as the container registry, and I have already created a connection to Docker Hub. Again, same thing as with GitHub: I need to give my credentials to connect this service to Docker Hub. Then I provide the Docker Compose file. This file resides in my GitHub repository. This is my base file, and on top of my base file I have a build file, which separates the custom images that I am building from the pre-built images that I am just using. For example, my Microsoft SQL Server image is provided by Microsoft; as part of my application I am not building it again, I am just using it, so I do not include it in the build. In my compose files for the build, I am giving just those images which I am building myself. So the custom images which are part of my application are specified here.
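A minimal sketch of that split, with hypothetical image names and paths, might look like this: the base file lists every service, and the build override adds build contexts only for the custom images:

```yaml
# docker-compose.yml (base file): all services, custom and pre-built
version: '3'
services:
  techtalksweb:
    image: myrepo/techtalksweb        # custom image, built by this pipeline
  techtalksapi:
    image: myrepo/techtalksapi        # custom image, built by this pipeline
  sqlserver:
    image: microsoft/mssql-server-linux:2017-latest   # pre-built, never rebuilt
---
# docker-compose.build.yml (override): build contexts for the custom images only
version: '3'
services:
  techtalksweb:
    build: ./src/TechTalksWeb
  techtalksapi:
    build: ./src/TechTalksAPI
```

Because the override file only mentions the custom services, a compose build run with both files never tries to rebuild the SQL Server image.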
And then we specify the build tag; I will come to why we need it later. With these, I can say: take this compose file and build my images. These are built by taking the latest version of the code on this particular build agent. Once the images are ready, I need to push them to the container registry, Docker Hub. I use the same Docker Compose task here, but instead of the build command, so in the first task, if you look at the action, I am using the build action, build service images, in the second one I am going to push these images to Docker Hub. The other settings are exactly the same, but I say push. So if the build is successful, this one will go and push the images to Docker Hub. Let us test this. I can say save and queue, and give a comment. Now it has started with the build, and we can see the live build output here. It checked out the latest version of the code, and now it is running that first build task that I specified. While this is building, let us go to Docker Hub and see what the current version of the image is. This is one of the images which I am publishing, TechTalks web. If we look at the tags, the last push was 33 minutes ago. Once this build is successful, we should see a change in that last pushed version of the image. This takes a bit of time, about three to four minutes, to build all those three images and then push them, because it downloads the base images from Docker Hub and builds the web front end, the web API and the SQL client: the front end, the middle tier and the back end. Yes. While this is building, let us move on to the second phase. Assume now that the images are built successfully and pushed to the Docker Hub registry. Then how do we deploy them to Kubernetes, or to your target environment? That is where the release pipeline comes into the picture.
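Sketched as task configuration, the two steps differ only in their action. The service connection name and file names here are assumptions, not exact values from the demo:

```yaml
# Build the custom service images from the compose files
- task: DockerCompose@0
  displayName: Build service images
  inputs:
    containerregistrytype: Container Registry
    dockerRegistryEndpoint: docker-hub-connection   # hypothetical connection name
    dockerComposeFile: docker-compose.yml
    additionalDockerComposeFiles: docker-compose.build.yml
    action: Build services
    additionalImageTags: $(Build.BuildId)   # the build tag, explained later in the talk

# Push the freshly built images to Docker Hub; runs only if the build step succeeded
- task: DockerCompose@0
  displayName: Push service images
  inputs:
    containerregistrytype: Container Registry
    dockerRegistryEndpoint: docker-hub-connection
    dockerComposeFile: docker-compose.yml
    additionalDockerComposeFiles: docker-compose.build.yml
    action: Push services
    additionalImageTags: $(Build.BuildId)
```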
So I will go to one of the existing release pipelines which I have already built previously. This again has a similar concept in terms of the interface. What we do here is, first, take the output of our build; that is the artifact of the build, which is the input to the deployment stage. We have already built the images and now we are starting to trigger the deployment. We can say add artifact, and it will ask from where; I can select build, which project, and what the source is. This is my build pipeline name, so the output of that comes in as the input here, and these are the artifacts which are available. Then, as part of deployment, I will start to deploy. This is the slightly more complex version, so I will take the simpler one first, v1, which just deploys with plain Kubernetes. Here I am going to use a Kubernetes command, which is kubectl apply. I have already created a cluster, and the cluster is up and running. If I go to the dashboard, you can see here in my resource group I have a Kubernetes cluster running, which is this AKS cluster. I will be deploying all those images onto this particular cluster. To deploy to this, I need to connect my build server, or this pipeline, to the Kubernetes cluster, and that again is done through a connection. If I go to the connections here, I need a connection to Azure Resource Manager. For this, you just provide the details of your subscription: give a name, which subscription you want to use, and the Kubernetes cluster. So it should be this one. Once you give the subscription, you select which subscription you want to use; in this case I am using a subscription called Microsoft Azure Sponsorship. And then the resource group. This is actually the resource group that I have specified for my cluster, so if you look at this, this is the resource group name which gets populated.
So the build server will actually be communicating with this particular resource group. And once I have that, I can connect to Kubernetes. For this, I need the configuration, which again is similar: I need the URL of the Kubernetes cluster, which we can query by getting the context. When you query the Kubernetes details using kubectl, you get this server URL and the kube config; the same thing you can get from your machine as well. So here, this is the file which is the kube config. When you run az aks get-credentials against an AKS cluster, it creates a kube config file on your local machine, which contains the cluster configuration. In my case I had created multiple clusters, so it has all those cluster details, but for the AKS cluster, if I search for AKS here, you can find the server URL. This is the URL I need to put in the Kubernetes configuration, and then the contents of this kube config file should go in as the kube config. This is how your build server is able to connect to the Kubernetes cluster on Azure. This kube config is the whole content of this file. It contains your cluster credentials, the certificates which are required, and which user to use; you can see the contexts here as well as the users. It contains all this information, so when you put it in the configuration here, it picks up that information and uses it to connect to the AKS cluster. No, I need to copy and paste it; yes, the entire file. Or you can create another kube config with just the entries related to your specific cluster; for the AKS cluster, I can choose only those settings, create a separate file and put the contents of that file in here. Then we can verify that the connection is fine. So now we have connections to our GitHub repo, Docker Hub, Azure Resource Manager and Kubernetes. Using these, we can go back to our release pipeline and start pushing the changes to the cluster.
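For reference, a kube config file has roughly this shape. Every value below is a placeholder, not the real cluster's data:

```yaml
# Abbreviated kube config; all names, URLs and secrets are placeholders
apiVersion: v1
kind: Config
clusters:
  - name: myAKSCluster
    cluster:
      server: https://myakscluster-abc123.hcp.southeastasia.azmk8s.io:443  # the server URL
      certificate-authority-data: <base64-encoded CA certificate>
contexts:
  - name: myAKSCluster
    context:
      cluster: myAKSCluster
      user: clusterUser_myResourceGroup_myAKSCluster
current-context: myAKSCluster
users:
  - name: clusterUser_myResourceGroup_myAKSCluster
    user:
      client-certificate-data: <base64-encoded client certificate>
      client-key-data: <base64-encoded client key>
```

The server URL and the full file contents are exactly the two pieces the service connection form asks for.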
So in v1, there is only one task. If you look at the settings for it, I am running the apply command. This is how we can run the exact kubectl commands; there is create, delete, exec and so on. If you go back and look at the earlier demos that we did, we used kubectl apply, so I am using the apply command here and I specify the files. All the manifest files come as part of my build output: I copy all the AKS-related files as part of the build output into a drop location, and I am giving that location here. So I am saying, whatever is there in the AKS folder, run kubectl apply on that. If I go back to the code, you can find all the manifest files here: the ones for Grafana, Prometheus, the web API, TechTalksDB, the web front end, the namespace. All of this is applied onto the Kubernetes cluster. So how do I get these files as part of the build output? Let us go back to the build again, to where we were. This is the publish artifact part. If your build is successful, if creating your Docker images is fine, and if pushing them to the Docker container registry is fine, I am going to say publish the artifacts. Usually, when we publish artifacts, if you have used TFS in the past, there is a convention that anything which goes out as build output goes into the drop folder: you have a folder called drop and you specify what the content should be. If it is a .NET application, you might put in an exe, which is the output of your build, or in my previous organization we used to create MSI files as part of the build output; these would go into the drop. But in our case, the output is actually the Docker container images, which are already pushed to the registry. We do not need to push those images again; what we need are some supporting artifacts, like the manifest files, and that is what I am specifying here.
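As a sketch, that publish step might be configured like this; the folder names follow the drop convention described above, and the exact paths are assumptions:

```yaml
# Publish the Kubernetes manifests so the release pipeline can pick them up
- task: PublishBuildArtifacts@1
  displayName: Publish AKS manifests
  inputs:
    PathtoPublish: '$(Build.SourcesDirectory)/AKS'   # folder holding the manifest files
    ArtifactName: 'drop'                             # conventional drop location
```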
So I am saying: take what is there in this particular directory, AKS, and put it into the drop as the AKS folder. When this build finished, and it is already done now, if we go to the build output you can see that it did the publishing, and it published the artifacts at this location. That is how the build completes: it puts those manifest files into the drop location, and then the release pipeline picks them up from the drop location and does the deploy. Is it clear? Okay, let us go to Docker Hub again and see what happened to those images. Since the build was successful, you can now see the latest version was updated about nine minutes ago; that is when our build passed. So now we are able to build the images as part of the build process and also publish them to Docker Hub. Now let us look at the cluster where it got deployed. I am deploying this to a namespace called AKS part 4, and this is where it should have been deployed if the release went fine. Let us see what is happening with our release. In the releases, we have this remote cluster; this is the Kubernetes cluster running on Azure. Here, 1851, the 15th run. Let us look at the definition of this. It has the continuous deployment trigger enabled; this is one setting which you can use for continuous deployment. If you do not want every build to be deployed, you can disable it; unless you come back and enable it again, it will not automatically deploy. It will just build, and your release would be ready, but the deployment would not happen. So I have enabled this, but it did not run. Okay, that is because I created the build on the other project, and this project does not have a release pipeline; I was actually showing you a pre-built release. So there is no release here, sorry about that; I went to the pre-built release pipeline. So let us do one thing.
Now let us go back to where I said I would make a code change and it goes through all the triggers. These two pipelines are already configured to work with that trigger. Why do I have two? The first one, the simple one, v1 kubectl, deploys using the kubectl apply command, and there is a slight problem there. Let us look at the way it is doing the deployment. Is it visible at the back or do you want me to increase the font? A little bit bigger? Okay, fine. This is the manifest which is going to be deployed as part of that deployment. And if we look at the image, we are saying: take this TechTalks web image, and I am not specifying any tag, so it takes the latest version of this image and tries to deploy it. Now, the Kubernetes command line is kind of smart: it sees that the latest version is already deployed on the cluster, and even if we try to deploy the same version it says, okay, it is already deployed, we have the latest version. It will not deploy the updated version because it cannot see any difference. And this is a problem with these kinds of manifest files: they are static files, so we cannot dynamically provide a different version of the image with this approach. So what is the solution? Usually what we do is add a specific image tag, so that the next time the image is built, we are not just saying latest, we give a specific image number or tag. This could be an incremental number, or you could have a convention based on your organization, but the point is that between two builds there is a difference, and you want to create immutable artifacts for each version of your build. That is why, if you look at the build definition, so I am on the learning series project, if I go to the builds and look at the build definition, what I am doing as part of the build and the push is specifying an additional image tag, which is the build number.
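The difference comes down to one line in the deployment manifest. As a sketch, with an assumed image name and an assumed build number:

```yaml
# Static manifest: no tag, which implies :latest; re-applying it shows no change
image: myrepo/techtalksweb

# Immutable variant: the build number becomes the tag, so every build is distinct
image: myrepo/techtalksweb:1948   # 1948 stands in for the build number
```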
So this is unique for each build, and this is what I am adding to the image as a tag. When the image is built, it will have the latest tag as well as a build-number tag attached to it. Let us trigger this build, and for this I will just make a small code change. I have a carousel control here which is commented out right now; let me just uncomment all this, and then go and push this change. So there is one change here; let us add it. This one is pushed to GitHub and it should be picked up automatically by this build, and you can see it here. Let this build go on, and we will look at the second part. So now we have the tag associated with that image; how do we deploy the tagged version of the image? We cannot use the kubectl command line, because it works with the static manifests. This is where a supporting tool comes into the picture, which is called Helm. Around Kubernetes there are various projects which are quite popular, and Helm is one of the most popular ones when you work with Kubernetes. In version 2 of my release pipeline I am using Helm, so let us look at the definition. What Helm does is act as a package manager for your Kubernetes application, and you can specify what the contents of your release are. Let me show you an example of a Helm definition, what it calls a chart. In Helm concepts, the artifacts that Helm produces are called charts, and a chart gives you templates as well as values. By convention there is a file called Chart.yaml, which has things like the application version, the API version we are using, a brief description and a name. The service and deployment are similar to what we have in Kubernetes manifests, but this allows us to provide a template: we are not providing fixed values here. If you have used any templating system like Jinja templates, this syntax will look familiar to you, and it is still YAML.
So all that we are doing here is, instead of hard coding things like the namespace name, we take them from the template. In Helm there is a concept of a release, so we pick those release-specific variables, we pick something from the chart, and we pick something from the values. What we are doing is defining a template and providing the values. For the values we give defaults; these are the default values it will use if we do not override them, and this is how we combine them. For example, for the image I can say: use an image tag, not just the image name. I can specify the tag in the values, like use the latest tag; that is the default, but I do not want to use the default. As part of my release, what I am going to do is override that. So I had the Helm definition somewhere; let me close some of these and go back to the releases. Here, the first thing I need to do is install Helm on my target cluster, because Helm is not installed by default as part of Kubernetes; it is a separate project with its own installer. So as part of the deployment, we do the installation of Helm on our target AKS cluster. Helm needs initialization, so it does a helm init, and we specify again the connection, where we want to connect to, the namespace we want to deploy into, and then we start with the deployment. I am going to deploy the database first, TechTalksDB. I need to provide the namespace; instead of hard coding the namespace in my manifest file, I can now pass it as part of the Helm task, so that is one dynamic feature I have here. I specify the chart name, and again I pick all these items from the build artifacts: as part of my build, along with the Kubernetes manifests, I also publish the Helm charts, the complete Helm directory that I have. And in this case, I am going to use the specific chart for the database.
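Putting those pieces together, a chart of the kind described here might look like this. All names, values and layout details are illustrative assumptions, not the exact demo files:

```yaml
# Chart layout (by convention):
#   techtalksweb/
#     Chart.yaml              # name, description, versions
#     values.yaml             # default values
#     templates/deployment.yaml

# values.yaml: defaults, used only when not overridden at release time
image:
  name: myrepo/techtalksweb
  tag: latest
namespace: aks-part6

# templates/deployment.yaml (fragment): values injected via template syntax
# spec:
#   template:
#     spec:
#       containers:
#         - name: techtalksweb
#           image: "{{ .Values.image.name }}:{{ .Values.image.tag }}"
```

At release time the default tag is then overridden, for example with something like `helm upgrade --install techtalksweb ./techtalksweb --set image.tag=1948`, where 1948 stands in for the build number.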
So if you look at my code, in the database folder I have the template for the database deployment, I have the chart for the database deployment and I have the default values. This is what I specify here. I specify a release name, I can specify a set of values as well if I want, and then I also specify the build number as part of the overrides. To keep this demo simple, since I am updating only the web part, I did not dynamically inject the build number into the database and API, but I did it for the web so that the deployment is faster. If you look at this configuration, here we are saying set values, image.tag, and this would be the build number. So dynamically what happens here is, instead of using the latest tag, it picks up the build number: it will use the Docker image with that build tag and deploy that. So we have got rid of hard-coded Kubernetes manifest files and overridden this with the build. So, the change that I did, let us see if it went through. The build part was okay; if you look at the build output, everything succeeded. It published the AKS files as well as all the Helm charts. And then, if we go to releases, we can see this release, 1948, started just a couple of minutes back, and it succeeded. You can see it did the helm upgrade of the database, the API and the TechTalks web. If I go back here and switch to the AKS part 6 namespace, I should see the latest release deployed for the web part. You can see here AKS part 6 was deployed about a minute ago, and if we access this, you have the carousel control. So that was the change, deployed all the way from making the change on my local machine, pushing it to the GitHub repo, and deploying it to the cluster as part of a continuous build and release cycle. Any questions? You are talking about these files, this one? That is the default one.
So when you install Kubernetes on your local machine, it creates a dot file, a config file, and that is what it refers to. It is like your dot files, like a profile file; that is the profile for Kubernetes. Yes, as part of my build output I am also pushing, if you look at this, the Helm charts as well. The complete contents of that Helm directory, whatever I have here, with the charts for the database, web and the API, is pushed as part of my build artifacts. So that goes along with your Docker images; Helm is one of the build artifacts. This gets pushed into the drop folder and then the release looks at it. If you look at the v2 version of the release pipeline, it picks everything up from the drop location again. So I choose this Helm folder, and then database, API or web, and then the specific chart. Actually, I do not need to specify the chart down to that level, I just need to specify the root folder itself; that was one of the problems I had initially, when I was giving the chart name there was some issue. So you do not have to go down to the level of the chart, you just specify this level. So, we went through building the pipeline, connecting all the services, GitHub, Azure Resource Manager, Docker Hub, as well as Kubernetes, and we built the complete CI build. Then we went through the Kubernetes deployment with kubectl apply, which was version one of our pipeline, with the static manifests, without the image tag basically. And then, to solve that problem with the static image tag, we introduced the build ID and Helm. Helm is quite an interesting project because, as I said, it gives you the ability to version your releases of Kubernetes applications. It also gives you the ability to share your charts. If you need something which is really common, there are repositories of Helm charts; for Elasticsearch, Prometheus, Grafana, you do not have to build all those Helm charts yourself, they are publicly available.
You can just point your deployment to one of these existing charts and say: I want to deploy this chart onto my cluster. That is a very good feature, and it also has features like upgrades and rollbacks. If you want to roll back to a previous version of a release, you can do that, which is not that easy if you are using vanilla Kubernetes and the kubectl command line. So we went through Helm and how to deploy with it. That is the end of the demo and the talk from me. These are some links I have for you: the demo code, which is all on GitHub, a link to an example of how to deploy a .NET Core application using AKS, the complete tutorial, Azure DevOps, Helm and the State of DevOps report. If you are interested in Azure DevOps, there is an event happening in early January which will cover all those five broad areas that I showed initially. The announcement will be coming out soon, so do look out for that; it is a full-day event and, again, free for the community. We did not have any test cases because in my Docker file I do not have any tests. If you look at my multi-stage Docker file, I just have the build definition. Not the Docker Compose file; I need to go to the individual Docker files here in the source. That is the web one: we just have the restore and publish, we do not have a test phase. But if we had a test phase, as part of that build we could say run a test task; you can do it as part of the build. This Docker file is a definition for packaging the application, whereas you want to do the build and test separately. I am not running automated tests here; I am just doing the deployment. This is a very simple pipeline which takes what is delivered, what is packaged as part of the Docker image. You can, and ideally you should, build a pipeline which goes through tests, and only if the tests are successful, create the Docker image.
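One way to wire tests into such a multi-stage Dockerfile is a dedicated test stage, so the image build fails when a test fails. This is a hedged sketch with assumed project paths and base images, not the demo's actual Dockerfile:

```dockerfile
# Build stage: restore dependencies and compile
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /app
COPY . .
RUN dotnet restore

# Test stage: the docker build aborts here if any test fails
FROM build AS test
RUN dotnet test ./tests/TechTalksWeb.Tests/TechTalksWeb.Tests.csproj

# Publish stage: produce the release output
FROM build AS publish
RUN dotnet publish -c Release -o /out

# Final runtime image: contains only the published application
FROM microsoft/dotnet:2.1-aspnetcore-runtime
WORKDIR /app
COPY --from=publish /out .
ENTRYPOINT ["dotnet", "TechTalksWeb.dll"]
```

The alternative discussed next, running the tests as a separate pipeline task before the Docker build, keeps the Dockerfile as a pure packaging definition.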
Even after you create the Docker image, you should run things like performance tests or automated functional tests, and only when those are fine should you go and deploy it to the target environment. In the build, we can specify the different test cases, yes; that is something you can do as part of the build definition. So here you would add those tasks. Sorry, in the build, not in the release: in the build definition, you can add all those tasks here. So if I go here, I should have a test task, and I can pick up one of these test tasks; if I do not find what I need here, I can look at the marketplace. So right now I am running the build within the Docker task, and the idea would be to do the build, then the tests, and then run Docker just for the packaging and publish. Yes. Any other questions? For the release pipeline, you are designing this for a multi-stage environment; I mean, if I have multiple environments, I would use Helm, and I would want the image to differ between the environments, because I want to run a different tag, for example. How would I store environment-specific configuration that I would feed into Helm? One way would be to create different values files here. In my Helm definition I have the values file, right? You can override this per environment: you can have one for development, one for production, one for QA or a UAT environment. The other way is to do the modifications at the time of actually deploying. Here I am just setting the build tag, but you can pass multiple values here, so you can override them as part of this stage. Yes, or you can even create variables; as part of your build and release, you get these variables, so things like secrets, environment secrets and so on, you can create them there. One way could be to create them as your build and release pipeline variables.
The other way is to use a vault, like Azure Key Vault or some other vault, where you store all these secrets, and during the build and release pipeline you pick up those values and inject them into your images. Yes. That is just a proxy: for that, I go to my PowerShell and I have a script called Browse AKS, which is actually an Azure CLI command, az aks browse; I give the resource group and the name of the cluster, and it creates a proxy to the actual Kubernetes cluster. You had a question? Is it possible to drive Azure DevOps pipelines using some kind of CLI? Yes, there is CLI support, and even here in the tasks you can run some of these things using the CLI; in the build or the release there is support for az. If I go to the tasks, I saw somewhere there is a CLI task. I have not tried it myself, but I saw a demo last year at a Microsoft event by Donovan Brown, and he was doing everything on the CLI, so it is possible. Any other questions? If not, I will publish the slides on Speaker Deck and SlideShare. The previous ones are available there, and the videos from the previous five sessions are available as well. So thank you, I hope it was useful. Thanks a lot for coming. I have got some stickers.