It's not news that many organizations are going with a multi-cloud approach to minimize risk as well as optimize costs. However, doing so brings its own complexity: learning multiple tools to manage those environments, and multiple tools to deploy to them. In this session, Sameer Akhub shares his experience of why customers are adopting a multi-Kubernetes strategy and demonstrates how GitLab can give you a consistent approach even when you're deploying to multiple Kubernetes environments, all without switching out of the GitLab console. Sameer is a certified Kubernetes and AWS professional and has experience working with a variety of customers in APAC. Remember that all of our speakers are available on chat, so feel free to drop your questions and comments and interact with the speakers and one another at any point during the event. Over to you, Sameer.

Hi everyone, and thank you for joining me today in this session, where we will go through how we can deploy to and manage multiple Kubernetes clusters. In today's session and demo we will use Amazon EKS, VMware TKGI and Google GKE clusters, all through one GitLab instance.

To start, let me introduce myself. My name is Sameer Akhub. I'm a Solutions Architect with GitLab, based in Melbourne, Australia. These are my contact details; please feel free to add me after the session if you have any questions. I'm always happy to add people to my network.

Now, when we discuss multiple Kubernetes clusters, I believe the question has shifted over the last few years from "why should we have Kubernetes at all?" to "why should we have a multi-Kubernetes cluster strategy?". When I did the research for this slide I found many reasons, but I believe the three main points are these.

The first is deployment flexibility from the application's point of view: I simply want to be able to deploy my cloud-native application to the best-suited Kubernetes cluster available in the market, without worrying about whether my operations and automation tools can support me on that cluster.

The second is top of mind for most CIOs, CEOs and managers: run my cloud-native applications in the most cost-effective way. That means that out of the Kubernetes offerings available in the market, I want the flexibility to pick the most cost-effective option and host my application there, and I definitely don't want to be limited by the technical capabilities of my automation and deployment tools.

The third is for the operators and security experts. Wherever I deploy my application as a developer, I expect any automation, deployment or DevSecOps engine to be able to run right next to where the application is deployed. That means I want to minimize any public network traversal between the application deployed on one of the cloud offerings and any endpoint, other application, or the source code, if the source code is hosted on-prem. I want the whole provisioning of my cloud-native application to happen within an automation engine that sits next to the chosen cloud platform.

So these, I believe, are the three main reasons for a multi-Kubernetes strategy: deployment flexibility for developers, cost-effectiveness, and, for security experts and operators, minimizing the attack surface by distributing deployment and automation across the clusters themselves.
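On that third point, one common way to get the automation engine sitting right next to each cluster is to run a GitLab Runner inside the cluster itself, for example via the official gitlab-runner Helm chart. Here is a minimal values sketch, assuming a self-managed GitLab instance; the URL, token and settings below are placeholders, not values from this session:

```yaml
# A minimal values.yaml sketch for the official gitlab-runner Helm chart.
# Install (values here are placeholders / assumptions):
#   helm repo add gitlab https://charts.gitlab.io
#   helm install gitlab-runner gitlab/gitlab-runner -f values.yaml
gitlabUrl: https://gitlab.example.com/   # your GitLab instance
runnerRegistrationToken: "<redacted>"    # project or group registration token
rbac:
  create: true                           # let the chart create the service account it needs
runners:
  privileged: true                       # only if jobs build container images in-cluster
```

With a runner registered per cluster, the CI jobs targeting that cluster execute inside it, so deployment traffic never has to traverse the public network.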
When we talk about multi-Kubernetes clusters here, it's not only about pushing the application to these different clusters; it's about the whole operations lifecycle. That starts with provisioning the clusters, then monitoring them and ingesting the logs from these different clusters, so that as an operator I can use the same platform, the same GitLab platform, to take actions against these clusters: scale up, scale down, attach more resources. And not only that: I want to be able to automate this operations lifecycle based on the ingested metrics, logs and errors from the deployed applications. Basically, the same concepts we apply in the DevOps lifecycle, where we develop, deploy, monitor and enhance the application, we want to apply to managing and controlling multiple Kubernetes clusters: I provision these clusters, I monitor them, I ingest their logs and metrics, I automate actions based on these inputs, and I reflect those actions back to the clusters in a way that minimizes disruption to the running applications.

Now, at GitLab we have helped many customers worldwide adopt a deployment and provisioning strategy where one single pipeline pushes their applications to multiple Kubernetes clusters, based on their preferences. What I mean is that some customers I've worked with decided to do the build of their applications on a cloud Kubernetes cluster, Amazon EKS, then do the testing on another Kubernetes cluster, for the sake of example VMware TKGI, and then, once the build and test phases are done, push the application, all within the same pipeline, to a third Kubernetes cluster, Google Kubernetes Engine. We will see that live in the demo in a second.

Before I move to the demo, I want to stress something: it's not only about pushing the application, it's about delivering these applications with quality. The engine should not merely let me push the application to the Kubernetes clusters. We are talking about cloud-native applications here, so capabilities like canary deployment, blue-green deployment, staged rollout and automated failover between platforms should be at the core of the platform. And this is what GitLab enables customers to do: in one platform, I can deploy to staging, then do a canary deployment to production, select how much of the production workload I want diverted to the canary instances, and, once I'm satisfied that the canary is good, roll the application out to the production environment in a staged way.

So let's go to the demo. In this demo we will see how we can use multiple Kubernetes clusters in one GitLab project, and actually in one GitLab pipeline, and how we can deploy our application to each of these clusters as it progresses through the pipeline from review to staging to production.
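To make that concrete before we switch to the screen, here is a rough sketch of what such a single multi-cluster pipeline can look like in .gitlab-ci.yml. This is not the exact pipeline from the demo (which uses Auto DevOps, as we'll see); the job names and scripts are illustrative, and the cluster-to-environment mapping is assumed to be configured in the project's Kubernetes settings:

```yaml
# Sketch: one pipeline fanning out to three clusters via environment scopes.
# Assumed mapping in the project's Kubernetes settings:
#   review/* -> TKGI, staging -> EKS, production -> GKE
stages:
  - build
  - test
  - staging
  - production

build:
  stage: build
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"

review:
  stage: test
  environment:
    name: review/$CI_COMMIT_REF_NAME   # resolves to the cluster scoped to review/*
  script:
    - ./deploy.sh review               # hypothetical deploy helper
    - ./run-dast.sh                    # dynamic security testing against the review app

deploy-staging:
  stage: staging
  environment:
    name: staging                      # resolves to the EKS cluster
  script:
    - ./deploy.sh staging

deploy-production:
  stage: production
  when: manual                         # gate the production rollout
  environment:
    name: production                   # resolves to the GKE cluster
  script:
    - ./deploy.sh production
```

The key idea is that each job only names an environment; GitLab resolves which cluster that environment is scoped to, so the same pipeline spans EKS, TKGI and GKE without any cluster-specific logic in the jobs.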
Here I have a GitLab project where I have added and defined the three Kubernetes clusters: AWS EKS, VMware TKGI and Google GKE. Just to see how we define them: it's as simple as adding a new cluster and specifying the connection details for that cluster, including the certificate and the service account. And as you see here for VMware TKGI, I've assigned a base domain under sumgitlab.com for all the applications that will be deployed to that cluster. The same goes for EKS, where I have of course assigned a different base domain for the EKS cluster, and for GKE a third one. We will use these domains to see where our applications are deployed and how they can be accessed on these clusters.

So if we go to the pipelines page, we see that we can run a pipeline against the master branch. Let's trigger it and see how it goes. As you see here, this is a pipeline on the master branch: it will build the branch, then do some container testing, and of course in GitLab you can add all kinds of vulnerability testing. Then comes the interesting part for our demo: it will deploy an instance of our application at the review stage, making it ready for DAST, dynamic application security testing, to run against that instance before the pipeline progresses and deploys the application to the staging environment and then to production, with canary and rollout stages for rolling out the application.

For the sake of time, while this pipeline passes those first stages, let's review a previously run pipeline that has already progressed through the build, test and review stages. If we click on this review stage, you'll see that it has deployed this application to the TKGI Kubernetes cluster. We can either click here, or go to the Environments page in GitLab. You see, this is our deployed application; it is deployed to the review stage, and all I have to do is click Open, and this is a live, deployed instance of my application. And as we agreed, this base domain tells us it is deployed to the TKGI cluster.

Cool. So if we go back to the pipelines, we see that our initial pipeline is hopefully progressing. Yep, it's now progressing through review. If we go again to Environments, we should start seeing the environment for the new pipeline. Here I see that from the previous review, eight instances have been deployed; these green squares are the deployed pods. If we go into the options here, you see that I can go to monitoring, or even to a terminal, which gives me an indirect SSH into one of the pods of the deployed application if I need to check on it. So if we go back into Environments, let's look at our pipeline and see how it is progressing. It is deploying against the review stage here.

Okay, let's go back, and let me show you something first. By the way, the whole pipeline we are using in this demo is built using GitLab Auto DevOps, which is a best-practices-based pipeline for deploying applications to Kubernetes clusters, including the whole DevSecOps stage lifecycle. Its main purpose is to reduce complexity and speed up the delivery cycle for cloud-native applications.
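Since everything you're seeing is driven by Auto DevOps, it's worth noting that you can also opt into it explicitly from your own .gitlab-ci.yml and tweak it with variables. A small sketch, where the variable values are assumptions for illustration, not this demo's settings:

```yaml
# Opting into GitLab's Auto DevOps pipeline from .gitlab-ci.yml (a sketch).
include:
  - template: Auto-DevOps.gitlab-ci.yml

variables:
  POSTGRES_ENABLED: "false"   # skip the bundled PostgreSQL database
  CANARY_ENABLED: "true"      # add the manual canary deployment job to the pipeline
```

Enabling Auto DevOps from the project's CI/CD settings, as in this demo, achieves the same thing without any .gitlab-ci.yml at all.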
So, going into the Environments page again: okay, here we go. The production-based, sorry, the master-based branch pipeline has provisioned a review stage for us here. And as you can see, this happened 20, 25 seconds ago; it is coming from the pipeline we triggered at the beginning of this demo, from here. You see, it is now running DAST. Let's check on this job. Okay, cool.

The other thing I wanted to mention is that, through these options, you can view the live environment, which we already saw, and you can also click on this one, which takes you to the monitoring dashboard of the underlying Kubernetes cluster assigned to that environment. That means I can follow not only the status of my deployed application but also the status of the Kubernetes cluster underneath it. As we agreed in the presentation, it's not only about deploying the application, but also about gathering metrics on these deployed applications.

You see here there is an alert, which I defined earlier. Basically, if we go here, you can define threshold-based alerts on any of these metrics, and these alerts can call a URL, trigger a webhook, to take action when the threshold is crossed. What I'm saying here is: if the total cores go above 24, then please trigger this for me. In my instance I'm actually triggering a pipeline in a totally different project, and the purpose of that pipeline is to scale my application. So this is a webhook URL for triggering a pipeline in that other project. And if I take you there, this is the other application in a different tab, and here are its pipelines.

Right, so let's go back, check again on the environments while they load, and check on our pipeline. The pipeline has now moved from review into staging. If you still remember, under the Kubernetes settings we assigned EKS to the staging environment. So now, if I go under Environments, here we go: this is the staging environment, which is mapped to EKS. If I now click Open live environment for this application, yes, the base domain is the EKS cluster's base domain. You saw the first instance, which was TKGI; this second one has the EKS base domain.

And again, same story: okay, my application is deployed to the staging environment, but how is the staging environment doing? I can go to monitoring for my staging environment, and this is the monitoring dashboard for that environment, where, as we agreed, I can go and define alerts. If I take the URL for my webhook here, I can set one up: the current cores, for example, are 15.63, so just for the sake of the demo, let's say if it goes above 14, then please trigger this pipeline, which will scale the EKS cluster. And this pipeline, as we agreed, is an actual pipeline here, with just one job, that will be triggered to scale the EKS cluster.
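For those wondering what such a setup can look like, here is a sketch of the scaler project's pipeline and of the webhook URL an alert would call. The project ID, trigger token, container image, and cluster/node-group names are all assumptions, not the ones from the demo:

```yaml
# Sketch of the "scaler" project's .gitlab-ci.yml: a single job that scales the
# EKS node group when the pipeline is triggered.
# The alert's webhook would point at GitLab's pipeline trigger API, e.g.:
#   POST https://gitlab.example.com/api/v4/projects/<project-id>/trigger/pipeline?token=<trigger-token>&ref=master
scale-eks:
  stage: deploy
  image: weaveworks/eksctl:latest   # any image with eksctl and AWS credentials configured
  script:
    - eksctl scale nodegroup --cluster=demo-eks --name=workers --nodes=5
```

GitLab's pipeline trigger API accepts a POST with a trigger token and a ref, which is exactly the shape of call a metrics alert webhook can make.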
So now my application is deployed. The other thing is the logs: I can get the generated logs per environment, per Kubernetes environment. In my case I have staging, and previously I also had review. So I can follow up on the application status, on the metric thresholds, and on the infrastructure status of my Kubernetes clusters across environments; I can gather the logs; and I can open or access my pods through the terminal.

Now, the last step: if we go to our pipeline here, you see that it's ready to start provisioning my application for canary deployment. If I click on this one, what it will do is provision instances of my application into the Kubernetes environment defined for production. As you remember, under the Kubernetes settings we mapped that to Google GKE, so I will now start having pods deployed there. Either I go here, or simply again Operations > Environments, and I should soon see another environment added to the list: this is the review environment, this is the staging environment, and soon I should get the canary, that is, the production environment, added to the list once that job progresses. We'll see that in a second. Here we go, it's loading. Yep, and it's added.

So if we go now to our environments, here we go: we have something deployed in production under the canary job. These orange circles here are the canary instances deployed in the production environment. And from within GitLab, on this same page, I can now start gradually diverting real-life workload from the production environment to these canary instances. I can say, okay, I want 50%, changing the duration; the idea is that I can keep diverting more workload to the canary deployment. By the way, under the hood what's happening here is that GitLab uses annotations on the NGINX ingress controller to divert part of the production workload to the canary pods, and I can control that percentage; I'll show a sketch of those annotations in a moment. So it's effectively doing a sort of blue-green deployment: I still have the production environment, but I've started to send more and more workload to the canary deployment.

Once I'm happy with that, the last step is the rollout. This rollout job here is coming, if we go back to our pipeline, from this part here: the production rollout jobs. So from my Environments page I can start rolling out: for example, 50% of my new code, please go ahead and roll it out to this environment, and it will start deploying my application gradually to the production environment.
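As promised, here is roughly what those NGINX canary annotations look like: a second Ingress pointing at the canary pods, weighted by annotation. This is a sketch; the host and service names are assumptions, not literally what Auto DevOps generates:

```yaml
# Sketch: a canary Ingress for the NGINX ingress controller. Traffic to the same
# host is split between the main Ingress and this one by canary-weight.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "50"   # send 50% of traffic to the canary
spec:
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp-canary   # service selecting the canary pods
                port:
                  number: 80
```

Raising or lowering the weight is all it takes to shift more or less of the production workload onto the canary, which is exactly what the slider in the GitLab UI is doing for us.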
Just to conclude: in this demo, we saw how we can have one GitLab project with multiple, in our case three, different Kubernetes clusters defined for that project. We saw how each of these clusters can be used for one of the stages of the DevSecOps pipeline: review, staging and production. We saw how we can monitor these Kubernetes clusters and define actionable alerts on their metrics, and how we can gather the logs for each of these environments. We also saw how GitLab enables canary deployment, where a number of canary pods can be deployed into the production environment and you can then control what percentage of the production workload is diverted to those canary pods. And at the end, we saw how, from the same screen, you can do a staged rollout, from 10% to 25% and onward until you reach 100%. Overall, at the end of the day, we get a safe deployment to these Kubernetes clusters with a full end-to-end view of what's happening under the hood.

This concludes my demo for today. I hope it was helpful for you. I'm ready for any questions. Thank you very much.