So I'm Shashank, I'm a Program Manager at Microsoft. I work on the Azure DevOps team, specifically in the space of containers and Kubernetes, and more specifically on release pipelines. Today I'll be speaking to you about containerized apps and what they mean in the context of DevOps: how do we structure CI pipelines, and how do we structure CD pipelines? I'll also be speaking about what traceability means in the context of Kubernetes. For the changes we make in GitHub, how do we make sure there is end-to-end traceability from source control all the way to the Kubernetes clusters where your services are running in production? The other thing I'll be speaking about is deployment strategy: how do we use a canary deployment strategy to make sure that the new application bits we are deploying are either significantly better, or at least as good as, the application already running inside our production clusters? Before all of that, a quick show of hands: how many of you already use containers for your production services? Okay. Awesome. Usually when I ask that, the room doesn't have a good answer, and I go on to ask how many of you are on different language stacks and whether it would be a lift-and-shift thing, but considering most of you are already on containers, let's move on. For those of you who are not on containers: why have containers become such a craze? Why has everyone been speaking about them, and about Kubernetes, for the last three or four years? With respect to DevOps, one of the biggest challenges most businesses face is agility: how do I get my changes out to production as soon as possible?
Speaking of agility, how do I do this in such a way that all my changes are tested? How do I do it so the changes are traceable to whoever made them? And if something goes wrong in my production clusters, how do I roll back to a state where I know the system was stable? This slide speaks to those challenges: how do I deploy my changes in a secure and compliant manner? The 72 percent number is basically an observation that most companies are trying to cut down their budgets, and in that context DevOps helps by fast-tracking changes and removing human-intervention steps as much as possible. The business needs rapid innovation. Think about a site like Amazon or Flipkart: if there is a critical bug in production where a product details page is not loading, every minute is a massive loss in terms of revenue. So if I'm making a fix, how do I roll it out to production as soon as possible? Agility becomes a significant ask there. So, moving on: what is a container? Before containers we had, and still have, the notion of the virtual machine, which was the precursor. With virtual machines, the story we were telling was: you don't have to worry about the hardware. You come to a hosted service like Azure, we take care of maintaining the hardware, and you just worry about the VHD image you bring in; you bake the VHD image and then use VMs as your unit of scaling. Containers took it a bit further. Now you are using a shared kernel, so you worry only about your application, and the operating system part is handled by us; we take care of patching the operating system.
If there are any security fixes to be made, we take care of that. If tomorrow there's a critical vulnerability found in Linux, we take care of patching it, and you worry only about the container-specific portion. One thing we keep hearing correlated with containers is microservices. There are different approaches I'll discuss, lift-and-shift versus microservices, the most popular of which is refactoring. So what is a monolith, and what is a microservice-based architecture? In a monolithic architecture, your application runs as a single process, and it's mostly associated with a waterfall kind of rollout mechanism, where we test all the changes over a month or so and then deploy them at once. In a microservices-based architecture, you have multiple teams, the so-called one-pizza or two-pizza teams, however you want to call them, and your application is broken down into a bunch of microservices which are related, but independent in the sense of being able to deploy independently, and sooner. For a microservices-based architecture to work, because there are now so many components, the idea is to structure separate pipelines for each of these microservices. First, I want to test how that individual microservice performs as a standalone unit: how it performs in terms of latency, and whether it adheres to all the security and compliance tests. Then, once this microservice is deployed, how does it fit in with the entire system? If there is a microservice B which depends on microservice A, I want to test that dependency and make sure nothing is breaking.
So it requires refactoring of your application code, because you're moving away from a model where all your changes were deployed maybe once or twice a month and the entire system was tested at once, as in a monolith. In a microservices-based system, you're deploying much more often. Coming back to the e-commerce website where one minute makes a huge revenue difference: if I know there is a problem in only one of those microservices, I can go fix it and keep iterating over those fixes. So that's the microservices part; let me move on to the next slide. Yep, so containers. What do most organizations do with respect to onboarding onto containers? They follow two approaches. The easiest, and the majority of cases that we observe, is the lift-and-shift approach. The microservices-based architecture I just spoke about quite often requires you to refactor your code to move away from the monolithic application, but most organizations don't want to do that all at once; they don't have the bandwidth to do that all at once. So what they do is take a lift-and-shift approach: they author a Dockerfile, take care of packaging the container the right way, and then drop that entire container onto a VM. Even in that context you are getting some benefits, in the sense of portability. What do I mean by portable? Containers run the same way on your laptop as they do on your production server. If you know this particular container runs in such-and-such fashion locally, it is supposed to run in the same fashion on your production servers as well. So it's portable in that sense.
Even in lift-and-shift, when you're not changing the structure of the code or the architecture but just porting it onto containers, you get that benefit. But if I go to microservices, where I'm breaking the application down into multiple smaller chunks of services, there are quite a lot more benefits. One is that your individual services are now smaller in size. Smaller size means not only that you can bin-pack more containers onto the same VMs and get better utilization in terms of cost, but also that each microservice has a smaller surface area for vulnerabilities. For instance, if I know a vulnerability was introduced in one particular microservice, it may not impact every other part of your application; it may just impact that one portion. The other part is polyglot services. What do I mean by polyglot? In a monolithic architecture, you'll quite often find that your entire application is written in one language stack. In a microservice-based architecture, you define clear interfaces, REST APIs or APIs based on GraphQL, and as long as everyone adheres to that contract, each microservice can be written in whatever language the team prefers, finds performant, or is comfortable with. So my payment API can be written in Node.js, my front-end website in some other language, and my booking API in Ruby; I'm just throwing out examples. The idea is that you can have a polyglot environment over there. In terms of agility, you can see the line over there: the more you move towards the right, the more agile you get. And we have seen this difference in some of our top customers, in the form of Zaze.
There's Hexagon as well. When these companies onboarded onto containers and moved to microservices, they were able to deploy at a much faster pace. One specific example I'll give you is Xerox. Xerox uses not only containers but Kubernetes as well. Before they moved onto containers and Kubernetes, they had one big monolithic application: a Java application backed by PostgreSQL, all running on a single VM node. Whenever they had to scale up, they would add more and more VMs; that was their form of scaling, and the time it took them to scale up was roughly around 24 hours. What they did was move the Nginx web front end out of the main application, so that only the Java part of the application formed the container, and for the PostgreSQL database I spoke of, they started using a managed service within Azure. This combination meant that for their main Java application, deployment time came down from 24 hours to about 10 minutes; they could now deploy within a span of 10 minutes. Even I couldn't believe that number at first, but it did come down to 10 minutes, which was quite surprising. So now, before I start demonstrating the core part of the talk, traceability and deployment strategy, let me give you a quick intro to Azure DevOps and its services, so you get the context for what I'm going to do for the rest of the talk. So where does Azure DevOps fit into this whole ecosystem? We have GitHub, which Microsoft acquired, and we have Azure, or AWS, or Google Cloud. You have Azure DevOps in between.
You have Azure Pipelines in between: any change you make over there, you can use our Pipelines to deploy to a multi-cloud environment. So what are the other services that we have? We have Azure Repos, which continues to exist even after the GitHub acquisition; it performs very well and is suitable for large Git repos as well. For project management, more specifically work item management, we have Azure Boards. Very recently we've added integration with GitHub: you can make changes in GitHub, just mention AB#123, and it will close out the corresponding work item within Azure Boards. It is basically project management peripheral to the main code changes you're making: you assign work to a developer and get it tracked, and that's where Azure Boards comes in. With Azure Test Plans, you author test plans, surface the results within our DevOps portal, and make sure the tests are run in an appropriate way and succeed whenever you make those changes. And Artifacts: think of NPM packages. You can author an NPM package and maintain an NPM feed within Azure DevOps; that's where Azure Artifacts comes in. Pipelines is the core of Azure DevOps; this is where CI/CD is. You have a change in GitHub; that change triggers off a CI pipeline, a build pipeline. There you build the container and push it out to a container registry. It is not just for containers: we have hosted agents for Mac, for Ubuntu, and for Windows, so you can have your iOS-based applications built out of our pipelines as well. I'll take questions after this. Someone raised that, okay, cool. So that's Azure Pipelines on the CI side, and CD is when you've built out your artifacts.
Let's say you have built your container and pushed it out to the container registry. Now you want to deploy it; so how do you deploy it? You have a release pipeline. In terms of deployment targets, the one I'll be speaking of today is Kubernetes. You have containers: you can either run the container on Web App for Containers, which is suitable for single-container applications that you scale out, or you can run it on Azure Container Instances, again suitable for single containers. But today I'll be speaking about the most popular one, Kubernetes. So where does Kubernetes come in? Containers are portable, small units of deployment artifacts, as I just spoke of. But a system is made up of multiple such microservices, with multiple containers backing those services within the cluster. Kubernetes handles the orchestration of these containers. In Kubernetes, you define a desired state of your system through certain manifest files: you create deployment objects, you create service objects, and you apply them on the cluster. I'll give a demo of all of this, don't worry. When you apply them onto the cluster, Kubernetes takes care of making sure these objects always reach the desired stable state, and if a workload keeps failing, it will try to bring it back into that stable state. So that's Kubernetes. One thing I want to note: before, when we had these multiple services, we used to be called VSTS, Visual Studio Team Services, and it was just one stack. Now we have multiple services, because we have split up our Azure DevOps offering into multiple individual offerings. So if you just want to use Pipelines, that's perfectly possible.
If you just want to use Azure Boards along with GitHub and whatever other CI/CD pipeline you have, that's possible as well; it works perfectly fine. It works best when you use them all together, but you are free to choose a single one or some permutation of them. Now, common software delivery challenges; I've already spoken about this to some extent. We want to work on low deployment frequency: if deployment changes are going in only every month, or every six months, that's a problem, and we want to increase that pace. Essentially the same thing goes for lead time. For failure rate: bugs in production impact your service and your business, so we want to catch those issues as early as possible, shifted as far left as possible towards the CI phase, rather than deploying to production and then finding out that something broke. So how does DevOps solve this problem? It increases your deployment frequency and reduces your change failure rate, which all ultimately leads to increased revenue. These figures were pulled from a particular research article, and these are our standard slides, so this may not be that interesting. Let's get into the demo; the demo is where the fun stuff starts. Wait, wait, I have just one more slide before the demo: Azure Pipelines. What are the benefits? We have full-fledged support for containers and Kubernetes, which is an area we are doubling down on, and we have integration with open source. You can use this with GitHub, and you can use this with our own Azure Repos as well.
The extensibility part: the tasks that I'm going to show today, the tasks that run in our pipeline, are mostly written by us, but at the same time all these tasks are open source. You can go to the repo and see how we write our tasks, and there's an Azure Pipelines task library as well, with which you can write your own tasks and have them work within your pipeline. It need not be the case that we always write the tasks and you use them; you can write tasks yourself using that framework and make them work in your pipeline as well. So now let's get into the demo part. The first demo I want to show is the easiest way to set up a CI/CD pipeline. When I'm setting up a CI/CD pipeline for containers and trying to deploy onto a Kubernetes cluster, the main problem we keep hearing is: there are too many concepts going on here. You have to set up secret management between your container registry and Kubernetes, so that Kubernetes knows how to pull from your container registry, and you have to set those secrets up inside your CI/CD pipeline. Even before all of this, you have to provision a container registry, provision a Kubernetes cluster, and set up role bindings with your service account inside the cluster, so that, say, a rogue agent cannot make malicious changes across namespaces. Security has to be set up, and all of this is quite a complex process. So one thing we did last year was introduce a resource within Azure called DevOps Projects. What this gives you is a simple five-minute starter template: you choose the language stack you want, you choose the type of deployment target you want, and you just click create after choosing the subscription and resource group where your resources will live.
And we will take care of a few things. First, we'll set up source control for you; I just spoke about Azure Repos, so all your initial starter template code is checked into Azure Repos. That's the first bit. Second, we will actually create an Azure DevOps account for you, and a project inside it, and we will set up the CI/CD pipeline completely: your CI pipeline is already set up with the container build task, and your CD pipeline is already set up with the Helm deployment tasks. Third, we'll actually go ahead and provision the Kubernetes cluster itself, so the pipeline knows where the changes are to be deployed to. So let me do that: Node.js template, then Express.js template, then Kubernetes Service. Oh, sorry, wait; I forgot to exit out of the slideshow. This always keeps happening to me at talks, so if it happens again, just remind me. In fact, let me not go into slideshow mode at all. Yeah, so this is how I got here: you just go to the Azure portal, click on create a resource, click on DevOps Project, then choose any of the language templates you want. I'll choose Node.js for today's demonstration; you can bring your own code as well, by the way. Then choose Express.js, then Kubernetes; we have support for Web App for Containers, we have support for Kubernetes, and we have support for Web App. For this I'll use Kubernetes Service, and I'll give a dummy project name, one-two-three. Let me choose my organization. Sorry, let me zoom in; is it visible now? Okay. So I chose a project name, I chose which organization to deploy to, and then I choose create new. And this takes care of bringing up all the resources I spoke about; that was the entirety of it.
So now it will take five to ten minutes for all of this to get set up: the CI/CD pipelines, the source control, and the Kubernetes cluster. Meanwhile, I'll demonstrate traceability and deployment strategy. So let me go over here, to the main part of the demo. In this bootcamp-demo app, we build a Docker container. I have just two files. One is an app.py file, a Python file with a dummy web server: whenever someone hits the root path, I'll just say "Hello World" and return a 200 status code. And I have a success rate defined, 20% as per the current configuration (oh, I need to zoom again, okay). So 20% of the time this is going to give me "Hello World" and a 200 status code; the other 80% of the time it's going to give me "Bad Request" and an internal server error, 500. But note that this is me deliberately returning those strings and status codes; the server isn't actually failing, I'm simulating the response codes. The other thing I want to demonstrate here: just take note of, just observe, a couple of lines. One is this line where I import the Prometheus client, and over here I increment Prometheus counters, on line number 14 and line number 17. Before I get into why I'm doing this: what is Prometheus? When you deploy your changes to production, it's almost never the case that you don't have monitoring set up inside your production cluster; you'll want to measure performance, and you'll want to measure the health of your applications. Prometheus is a metrics provider that helps us do that. So over here, I'm basically saying: whenever a good response goes out, increment the good counter.
Whenever I give a bad response out, increment the bad counter. So that was the entirety of the application; I kept it a really, really simple application for the demo. The other thing we have is a Dockerfile; let me zoom in again. Yeah, so this Dockerfile uses python:3-slim as the base image, and all it does is install the dependent packages on line number six and copy my app.py file onto the container, so that it gets executed when the container boots up. That was the entirety of the Dockerfile. So now, to trigger off this whole pipeline, let me go ahead and make one change. For this demonstration, the success rate is just a hard-coded value; if I change that success rate, it should simulate a higher success rate for the application. So let me change this value to, say, 60. This is supposed to mean that my application is now performing much better, because it has a higher success rate. So let me go ahead and commit that. I've made the commit, so if I go to my build pipeline, it should trigger off any moment now; wait a second. Why is this not getting triggered off? Anyways, let me just manually trigger off this pipeline; the demo gods have finally struck me. Yeah, yeah, I've done a manual build. Over here, I'm picking up the changes that already exist in the repo, so it's going to do a Git checkout and build the changes in. So let's just wait for the agent to get queued.
So meanwhile, yeah, the agent is queued, wait a second. A couple of things about how the pipeline is structured. I have a YAML-based pipeline, and I'm doing three things over here. In this bit, I'm logging into a container registry; the registry I'm using for this purpose is Docker Hub, and I'm logging into it using a service connection. Then, in this bit, I'm building my container: I specify where my Dockerfile lies within the repo on this particular line, and what image name needs to be used, which is this image name, because I've set up a variable over here. And in this bit, I'm pushing the built container image to a container registry. Where is it going to push it? To the Docker Hub service connection that I specified in my login command. So this is my entire CI pipeline. If I go back to my CI pipeline and look at the result, it should have succeeded, or hold on, the last bit is still going on. Let me just refresh. Yep, it's finalizing the job. So this is my build pipeline: the container has been built and pushed to a container registry. What gets triggered next is my release pipeline; now we're going to deploy this change onto a Kubernetes cluster. If you go to the release pipeline and click on edit, you'll see a couple of things over here; let me walk you through how the pipeline is structured. In my first stage, I'm deploying a canary. In my production cluster, I already have a version of the service running, so in the first stage I'll deploy a canary and a baseline right next to my production workload. Why do I do this?
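Before we get to the release side: the CI YAML just walked through might look roughly like this. It follows the schema of Azure Pipelines' Docker@2 task, but the service connection name, image name variable, and tag are illustrative, not the exact values from the demo.

```yaml
# Sketch of the CI pipeline: login, then build and push to Docker Hub.
trigger:
- master

variables:
  imageName: 'myhubuser/bootcamp-demo'   # illustrative image name

steps:
- task: Docker@2
  displayName: Login to Docker Hub
  inputs:
    command: login
    containerRegistry: dockerHubServiceConnection  # illustrative name

- task: Docker@2
  displayName: Build and push the container image
  inputs:
    command: buildAndPush
    repository: $(imageName)
    Dockerfile: '**/Dockerfile'
    tags: $(Build.BuildId)
```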
Because if I deploy a baseline and a canary, they are ideal candidates for comparison: they both have the same lifecycle and the same size. If I tried to compare a canary against a running production workload, that would not be an ideal comparison. If you take a look at how this deploy-canary stage is structured, it's a simple Kubernetes manifest task: I specify the strategy as canary and the percentage as 25. My original deployment object had four replicas, so this is going to create a canary workload with just one replica. I also specify where my manifest files are to be picked up from; within my repo, if you go back over here, this is where all my Kubernetes manifest files are. These manifest files contain the information about what the deployment object needs to look like within Kubernetes. So now, once again, let me go back, previous, next release, sorry, yeah. The next phase: after I've deployed the canary, I have a manual intervention set up, but before that, let me show you Prometheus; this is where Prometheus and Grafana come in. Take a look at my cluster; I'm going to access it through, one second, hold on, let me increase the font. Let me know when you're able to see; is this sufficient? Okay, let me bring it up. So if I do kubectl get deployments (apologies, the internet is a bit slow, so it will take a while for the results to come back), yeah, I now see three things over here. bootcamp-demo is the stable production workload that was already running before this release started, and now I also have bootcamp-demo-baseline and bootcamp-demo-canary.
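The manifest files mentioned a moment ago are ordinary Kubernetes objects. A minimal sketch of the deployment manifest, with the four replicas the 25% canary is computed against (names, labels, image, and port are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bootcamp-demo
spec:
  replicas: 4            # 25% canary => one canary replica
  selector:
    matchLabels:
      app: bootcamp-demo
  template:
    metadata:
      labels:
        app: bootcamp-demo
    spec:
      containers:
      - name: bootcamp-demo
        image: myhubuser/bootcamp-demo:latest  # illustrative image
        ports:
        - containerPort: 8080
```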
So if you take a look at the number of replicas, bootcamp-demo is running four replicas, and bootcamp-demo-baseline and bootcamp-demo-canary are running one replica each. So now I'm going to compare these two. What version of the application is each of them running? The baseline is pulling the same image as bootcamp-demo, the stable version. The canary is pulling the new image, the one I just built and pushed to the container registry. So how do I compare them? I go to Grafana. Grafana is a visualization layer sitting on top of Prometheus; Prometheus is how I actually scrape my metrics from the workloads, with service monitors set up over there, and in Grafana I basically visualize those metrics (okay, again I have to resize this whole lot). So I have two dashboards over here; I'll give you a quick explanation of what they are. This dashboard shows me all the metrics corresponding to bootcamp-demo, bootcamp-demo-baseline, and bootcamp-demo-canary; this other one shows me just baseline and canary. By the way, what is this red region that you are seeing? I've actually set up a service hook within Azure DevOps so that whenever a canary is deployed, it annotates my dashboards in Grafana, so that I know when to start comparing. As a user coming to the Grafana dashboard, I should know from when I need to start comparing the baseline and the canary. So it has deployed the baseline and the canary over here. It takes a couple of seconds to warm up, for the baseline image and the canary image to be pulled by the cluster from the container registry, and once the containers are created, the metrics start surfacing. So now let's see what baseline and canary are doing. This line corresponds to good.
So if you remember, I increased the success rate; I went from 20 to 60. This line corresponds to the custom status "good": the yellow line is bootcamp-demo-canary, and the green line, also for the custom status "good", corresponds to bootcamp-demo-baseline. So essentially, what am I trying to show over here? Build your changes, push them out to a container registry or any artifact store. Then have your release pipeline deploy your bits onto the cluster, or wherever your environment is, be it App Service or be it Web App for Containers. Then run a canary against your production itself, and create a baseline as well if you want an even more ideal comparison. When you put a baseline and a canary against each other and perform this comparison, you are safeguarding against any degradation that may happen in your application quality; you are actually checking for a better state of the application with every single change. This is continuous integration and continuous deployment in its truest sense. So now let me go back to the demo. I have a manual intervention set up in the pipeline; let me show you the second stage of the release pipeline. A manual intervention is set up in the first phase, which says: I'm going to pause this pipeline when I deploy a baseline and a canary, so that a human can see how the baseline and canary are performing against each other and then make a call whether to resume or reject the deployment. Only if I resume past the manual intervention task does the next task execute the promotion of canary to stable: it deploys the canary bits onto my stable workload. So if I say, okay, the success rate looks better now with the canary changes than it was before, then I'm going to resume my deployment.
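The canary deploy and the post-approval promote stages described here map onto the KubernetesManifest task. A sketch of the two steps, with inputs following the KubernetesManifest@0 schema; the manifest path is illustrative, and the manual intervention sits between the two stages:

```yaml
# Stage 1: deploy the canary (and baseline) at 25% of the stable replica count
- task: KubernetesManifest@0
  displayName: Deploy canary
  inputs:
    action: deploy
    strategy: canary
    percentage: 25
    manifests: manifests/deployment.yml

# (manual intervention / human approval happens between these stages)

# Stage 2: promote the canary bits to the stable workload
- task: KubernetesManifest@0
  displayName: Promote canary
  inputs:
    action: promote
    strategy: canary
    manifests: manifests/deployment.yml
```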
So then the application with the new success rate, that particular container and that particular set of manifests, gets deployed to the cluster, and that now becomes the stable version. On the next pipeline run, the previous canary is the new stable and, by extension, the new baseline. So what is this cleanup phase? Regardless of whether I promote or reject my canary changes, I always have to clean up the -baseline and -canary workloads, because they were just transient workloads that I created. So I clean those up here. Now let me go back to the Grafana dashboard. I'm satisfied with the changes, so I'll go over here and promote. Let me resume this and go back to the logs. While the promotion executes, one thing I want to show: I spoke about traceability, right? How do we enforce traceability? First, in the CI/CD pipeline itself you have end-to-end traceability: if you go back to the build pipeline, you'll be able to see who triggered this particular build. If it's a manual build, it will say so, and it shows which branch the change came from; if you click further, you can drill down to the particular commit as well. Going into the logs, the other thing we do, because we are using the Docker task, is add labels. These labels are added by Azure Pipelines. They say which repository this originated from, who initiated the change, which Azure DevOps organization this is, which project, which pipeline, the pipeline name and the pipeline run ID. We stamp all these details onto the container image itself. The other place where we stamp traceability information: if you run kubectl describe deployment on bootcamp-demo, this is the deployment object that we keep updating, right?
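Conceptually, stamping the image this way is equivalent to baking build metadata in as image labels. A hedged sketch of what ends up on the image; these key names are illustrative, not the exact keys the Azure Pipelines Docker task writes:

```dockerfile
# Illustrative only -- the real label keys used by Azure Pipelines differ.
LABEL org.example.devops.organization="contoso"
LABEL org.example.devops.project="bootcamp"
LABEL org.example.devops.pipeline="bootcamp-demo-CI"
LABEL org.example.devops.run-id="20190101.1"
LABEL org.example.devops.repository="https://dev.azure.com/contoso/bootcamp/_git/app"
LABEL org.example.devops.requested-for="shasb"
```

In practice the pipeline passes these at build time (the equivalent of `docker build --label key=value`) rather than editing your Dockerfile, so the metadata travels with the image wherever it is pulled.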
So if you describe it once again, and I need to zoom out now, if you see the annotations: these annotations were all added by Azure Pipelines, saying, hey, this is the organization that ran this change, this is the pipeline that ran it, and this is the execution that ran it. So even if someone is not using the pipeline and instead has direct access to the cluster, say a cluster admin or an ops engineer: suppose they're paged for on-call duty at two o'clock in the night and have to figure out what broke. If they go to the production cluster and run that same command, kubectl describe deployment with the name of the deployment, they know where these changes came from and who last deployed them. Then they can trace back: go to the pipeline and see where this release came from, go back further and see where this build came from, then, I'm running out of space here, but go back to source control and figure out who made the change. That way you can trace the changes back. So let's go to the release pipeline. The promotion is successful, and the cleanup of the baseline and the canary is successful. Let me verify that: kubectl get deployments. So now I have just bootcamp-demo, running my canary-based application. The last thing I want to show is to complete the demo that I had initially kicked off. If you remember, we created a DevOps Projects resource, specified the account and the name of the project, chose a Node.js and Express.js based template, and chose to deploy to a Kubernetes cluster. That was what I had done, right? So now let me just go over here.
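The shape of what `kubectl describe deployment bootcamp-demo` surfaces is roughly the following; the annotation keys here are an assumption for illustration, not the exact keys Azure Pipelines writes:

```yaml
# Illustrative only -- annotation key names and values are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bootcamp-demo
  annotations:
    azure-pipelines/org: "https://dev.azure.com/contoso"   # organization
    azure-pipelines/project: "bootcamp"                    # DevOps project
    azure-pipelines/pipeline: "bootcamp-demo-CD"           # pipeline name
    azure-pipelines/run: "Release-42"                      # execution / run ID
```

Because annotations live on the Deployment object itself, anyone with `kubectl` access can recover the pipeline provenance without ever opening Azure DevOps.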
So this gives me a view of the entire set of changes that were deployed. So now let's start with the repository. If you click on this link, it shows you where all your code lives. This is an Azure Repos repository that we created, and this is where your application lies: a standard Express.js based application, with your app.js file and a Dockerfile to containerize that application. This is where the Helm charts live, with a deployment object similar to what I just showed with bootcamp-demo. We create all of these for you, in those five minutes, as a starter template. So that was the source code; where is the build pipeline? The build pipeline that I demoed constructing manually, you don't have to do that yourself; you can start off with a fully provisioned pipeline. This is one such build pipeline, where we build the image and push it to the container registry, and the Helm charts you had in your repo get packaged as well inside this pipeline. Let me give you a quick look at the build pipeline I've set up: all these tasks are already configured for you. And let me show the last bit, the release pipeline. In the release pipeline you'll see we have already set up one stage with all the tasks in place: it adds the image pull secret, sets up Tiller inside your cluster, and deploys the Helm application. So where is this deploying to? If I go back and click on the Kubernetes cluster, one second, where is my Kubernetes cluster? I'm in project one, let me just search for it. Over here, if you see, we have already created a Kubernetes resource.
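For readers reconstructing this outside the portal, a rough YAML sketch of such a build pipeline; the task versions and input values are assumptions (the demo used the visually designed pipeline, not YAML):

```yaml
# Illustrative azure-pipelines.yml -- registry, repository and chart
# paths are hypothetical placeholders.
trigger:
  - master

pool:
  vmImage: ubuntu-latest

steps:
  - task: Docker@2                       # build and push the app image
    inputs:
      command: buildAndPush
      containerRegistry: myRegistryConnection   # hypothetical service connection
      repository: myapp
      tags: $(Build.BuildId)
  - task: HelmDeploy@0                   # package the chart from the repo
    inputs:
      command: package
      chartPath: charts/myapp                    # hypothetical chart path
```

The matching release pipeline then does the cluster-side half: installs the image pull secret, prepares Tiller (Helm 2 era), and runs the Helm upgrade against the target cluster.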
You can go and scale it to your needs, however you see fit, and there's the activity log: we just created a cluster and then deployed to it. If you chose the monitoring option as part of the DevOps Projects flow, it enables monitoring for you. And if you want to connect to the cluster, you just run az aks get-credentials with the name of your cluster and the name of your resource group; it merges the credentials into your kubeconfig, and then you can run kubectl get deployments, and that's your hello world. That's how you get set up with the whole thing. So in a span of five minutes, we created an Azure repo, we set up CI/CD with all the tasks configured, and we even created a Kubernetes cluster. The whole flow is set up for you. This, along with the traceability and deployment strategies, is something that we are really, really proud of and are working hard on every sprint. One more thing I want to speak about: let me just go back to the slide deck. We are working on resource views within Azure Pipelines itself, so that whenever you make these changes, you don't have to go to four different places to check them. So as soon as you build an image — is this visible? Okay, it's slightly better now — I'll describe what it is. This is an image details view. Here we have basic metadata: where we pushed that image, the size of the image, the labels you added. That's the summary information. We also give you a layer-wise breakup: for each line within the Dockerfile, how much size did it add to your container layers? You can use that to optimize your container layers. In the future, we want to add two or three more tabs over here.
What tests did you run as part of container structure tests? What vulnerabilities were found within the container? If any vulnerabilities are found, you can just click over here, figure out what they are, and then go make the necessary changes in your source code to fix them. So that's the image details view. The other thing we're working on: I just spoke about deployments, but I'm doing it in the terminal, and the Azure portal is like another pit stop, right? You have to go there to check those things. So we are trying to bring that resource view into Azure Pipelines itself. This is the deployment object we created, this is another deployment object, and this is the workload view. If you click on it, we can even show you information about how the replicas have scaled, and you can take decisions accordingly. And the last one is the pod details view: if a container is failing, why is it failing? Which image version is it running? And if you take notice over here, we are adding the traceability information here as well: which pipeline ran it, which job ran it, which particular commit triggered it. So that was my demo; I've already done my canary deployment demo. That's it from my end, guys. If you have any questions, feel free to shoot. And if it's not possible today, do make a note of this: that's my Twitter handle, at the rate, capital S, capital B, then A-R-S-I-N. And my Microsoft alias is shasb, that's S-H-A-S-B, shasb@microsoft.com. Let me just type that out if you want. Oh, someone's taking photos? There's my Twitter handle, and that's my Microsoft alias.
So in case you guys have any questions about whatever we just demoed, or if you want any help setting up your pipelines, or for that matter even your Kubernetes cluster or your source control, please feel free to reach out to us and we will certainly help. We can wait for questions. We are actually out of time, but we have a break right now, so we can catch up then. That was great, thank you very much. The slides will be available anyway; if you can update this and share it with us, everyone can get access to the slides, and they can be reshared. Yeah, I'll share them. On ConfEngine, the same proposal page has a link to the slides, and when the videos are uploaded they will also be linked from there, so the proposal, slides and video are all in one place. We can take questions during the break, anyway. Sure, makes sense. Thanks.