In my introduction to day two earlier, I mentioned that today we would focus on the nuts and bolts of how to deliver on the promise of a DevOps platform. Well, I am excited to deliver exactly that with our next talk. One of the benefits of a DevOps platform is simplicity, and in our next session Amina Mansour, from our sponsor Google, will walk you through the details of how to create CD pipelines in GitLab that can deploy to your Anthos clusters without having to worry about many of the typical authentication and connectivity hurdles you might expect. Let's see how it works.

Hi, everyone. My name is Amina Mansour and I'm a Solutions Architect at Google Cloud. Today I will be talking to you about deploying to your Anthos clusters anywhere with GitLab and Connect Gateway. I'll start with a brief overview of Anthos for those of you who may not be familiar with it. Then I'll introduce you to Connect Gateway and how it enables you to deploy to your clusters no matter where they are running. And I will, of course, end with a demo of a CD pipeline on GitLab that leverages Connect Gateway to deploy to these clusters.

Let's start with: what is Anthos? To quote our director of outbound PM, Richard Sirota, Anthos is a managed platform that extends Google Cloud services and engineering practices to your environments so you can modernize apps faster and establish operational consistency across them. Now, that's really great, but what does it actually mean? Let's break the definition up into smaller pieces. The first one is "managed platform": Google provides the updates, manages the stack, and delivers things such as OS updates and Kubernetes components, and the platform itself is largely self-operating. The real value of Anthos comes from the fact that it's not just Kubernetes. Yes, Kubernetes is at its core, but it incorporates different Google Cloud services and engineering practices. It's made up of open-source software that we've created, like Istio, Knative, Tekton, and others, and it also includes practices like Site Reliability Engineering, or SRE, allowing you to do things like create SLOs. We've codified all of these practices into the platform to make it simpler for you, turning all of that into one platform that you can put wherever you want. You get to choose where you want to run, whether in one environment or multiple, whether on-premises, on other clouds, or on GCP, and Anthos is there to support you, giving you a single experience for your developers and operators regardless of where that is. That really enables you to modernize faster: there is no new stack to learn and no ramp-up time for every provider and new environment, and you can build better systems and get greater visibility. Modernization with Anthos and Google Cloud is not just some toolkit; it's things like technology to help you migrate from VMs to containers, or from Cloud Foundry to Kubernetes and Anthos, or even Cloud Code integration to help your devs get things done faster. So it's really an active approach and partnership. Of course, this would all fall apart if it meant a lot more work. Yes, there is some additional work. But if we believe the future is a scale problem, then how do we make sure that doesn't mean a linear scaling of operations investment as well? How do you reduce or manage the operational cost by creating consistency across your fleet? And how can you have that consistency when you're running in all of these environments?
This is where Anthos really supports you, with the services that we'll talk about in a second. We can have a control plane that lets you run the platform everywhere and manage it consistently, while keeping your velocity up shipping software.

The first thing I want to highlight is deployment options with Anthos: where can you run your Anthos clusters? As you can see here, of course on Google Cloud, but you can also run Google Kubernetes Engine as a managed Kubernetes offering on AWS, on Azure, and on-premises, both on vSphere and on bare metal servers. You can also attach any Kubernetes-conformant cluster to your Anthos fleet, so think of other clouds' managed offerings like AWS's EKS or Azure's AKS, as well as OpenShift, Rancher, and even DIY Kubernetes clusters.

Next, let's look at some of the components of Anthos. This slide is not exhaustive, but it shows some of the value-add services that Anthos has to offer in areas such as application development, service management, and config and policy management. In particular today, I'll be highlighting the Anthos Connect Gateway, but let's briefly talk about some of these services before diving into our topic. Like I mentioned before, there's Google Kubernetes Engine. If you're running on GCP, it's the same GKE you've been using all along, but now you have the option to run it on other clouds or on-premises. It gives you rich support for workloads, including Windows and mainframe applications, and it provides world-class GPU and TPU integration, as well as record-breaking 15,000-node scalability for a single cluster. Next is the unified management hub: you get a single-pane-of-glass view into your clusters no matter where they're running, and you're able to orchestrate and manage the clusters and workloads from one central location.

Anthos Config Management is really one of my favorite Anthos products. It's a multi-cluster configuration manager that helps you keep consistent policies across your clusters, whether they're on-prem or in the cloud. It uses a central Git repository to manage access control policies, resource quotas, namespaces, basically anything that you can do a kubectl apply on, you can add to your repository (a small sketch of what such a file can look like follows below). It's also declarative and continuous: once you declare a new desired state, it continuously checks for changes that go against that state and does drift management for you as well. And there's no need to rewrite any of your existing Kubernetes configurations to work with Anthos Config Management, because you can use your YAML or JSON files as is. Anthos Service Mesh is our managed service mesh offering that's based on Istio's open APIs. It gives you a lot of observability and telemetry out of the box, it allows you to do fine-grained traffic management, and it also offers a simpler approach to security, giving you things like mTLS very easily. And last but not least is cloud operations and operational efficiency, with logging, monitoring, and APM, and the same consistent experience for all your clusters no matter where they are.
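As a rough sketch of that idea (this is not a file from the talk's demo, and the namespace name and quota values here are invented), a manifest committed to the config repo could look like the following, and Anthos Config Management would sync it to every registered cluster and revert any drift:

```yaml
# Illustrative manifest in an Anthos Config Management repo (names and values made up).
# Anything you can kubectl apply, you can commit to the repo.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    pods: "20"
```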
So all of this sounds great. I just told you about Anthos and the many different places you can deploy it, and I'm showing you that slide again now just to remind you how awesome that is. No matter what clusters you deploy, Anthos can help you manage them, and I'm not going to dive any deeper into those details here. So what is the problem, then?

The problem is that using kubectl, or any client tooling for that matter, against these Anthos clusters is annoying. This is not a problem unique to Anthos; it's just the nature of different clusters, different cloud providers, and the different networks those clusters are on. There are a few problems you're going to face. The first one is that you need to know what clusters exist. Remember, you've got clusters running anywhere and everywhere, so you need an inventory of those clusters. Next, you need to be able to actually connect to them over a network. These clusters may be on different VPCs and clouds, or on different VLANs on-prem, or in some cases not even reachable without a lot of work. And networking is messy: we see most of our customers using a combination of jump hosts and VPNs to get onto the same network as the clusters, just so they can run a simple kubectl command. If that wasn't bad enough, you need a kubeconfig for each one of those clusters. So what ends up happening is that, if you're lucky enough to have a good platform team, they end up curating these kubeconfigs for you on-prem and with Anthos multi-cloud, and across clouds you'll need to download the respective CLI tools for GCP, AWS, or Azure, depending on which cloud's clusters you're attaching to Anthos, in order to get access to those Kubernetes clusters. And then finally, when you're done with all of that, you may need to authenticate differently with each of these clusters: on GKE you use GCP IAM, on-prem you may use OIDC with Active Directory, and on AWS there's AWS IAM. Many people end up using Kubernetes service account tokens to authenticate users and automation with their clusters.

So this is where the good news comes in, and this is where Connect Gateway is here to help. Connect Gateway is a GCP-hosted service that provides a common front door to any Anthos-connected Kubernetes cluster. On this slide it says it's in preview, but the great news is that very, very soon it's going to be GA. Connect Gateway's main goal is to make using kubectl, and other Kubernetes clients like client-go, with any Anthos-connected cluster easy. This means that users, and even automation such as your CI/CD pipelines, can easily access any Anthos cluster running anywhere. It takes care of four things for you. The first one is organization: because all Anthos clusters are registered and connected to GCP, you can run a simple command to list all your clusters, and you can even attach metadata to them so that you can filter the list and see only the clusters you care about. It also takes care of connectivity, so you no longer have to think about where you are in relation to the cluster from a networking perspective. All you need to do is point your kubectl client at GCP and we'll take care of the rest; we're reusing the same connection that the cluster makes to GCP, the one we also use for the single pane of glass in the Cloud Console UI. The other areas it takes care of are authentication and authorization. Remember how each environment potentially had a different way to authenticate? With Connect Gateway you can just use your GCP IAM identity, and we even provide a familiar command to get your credentials. For authorization there are a few levels: we do one level of authorization at GCP, to validate whether a user can use Connect Gateway at all, and the second layer of authorization is done on the cluster using standard Kubernetes RBAC. And because we're using a common identity with Connect Gateway, you can author authorization policies that apply to any cluster anywhere. In our demo we distribute those RBAC policies with Anthos Config Management, and that's our recommended approach. So you no longer have to think about organization, connectivity, authentication, or authorization.
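To make the "common front door" idea concrete, here is a minimal sketch of what using Connect Gateway can look like from a workstation. The membership name is invented, and the exact gcloud command group has moved between "container hub" and "container fleet" across releases, so check the current documentation for your gcloud version:

```sh
# List every cluster (membership) registered to the fleet, wherever it runs
gcloud container fleet memberships list

# Write a kubeconfig entry that points kubectl at the Connect Gateway endpoint
# for one cluster (no VPN, jump host, or per-cloud CLI required)
gcloud container fleet memberships get-credentials my-eks-cluster

# From here, plain kubectl works; requests go through the gateway and are
# authenticated with your GCP IAM identity
kubectl get namespaces
```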
With that brief introduction to Connect Gateway, let's move over to our demo. Before that, let me show you the personas we're going to use in the demo today. Our first persona is Alice, who's an app or service operator. She'd like to deploy something to the many clusters that make up the environment she wants to deploy to, and today a lot of things need to happen before she has access to them: a kubeconfig file, a missing CLI tool, the whole list of things we were just talking about. This is where Charlie comes in. He's our admin, who will set up Connect Gateway to make it much simpler for Alice to accomplish her goals. From that point on, all Alice has to do is get credentials for each one of those clusters and run all the kubectl commands she wants to run.

For our demo today, I have an environment with four clusters: two Google Kubernetes Engine clusters on GCP and two AWS EKS clusters. They are all part of the same environment, all connected to the same mesh with Anthos Service Mesh, and all managed through Anthos Config Management. I'll be deploying the Online Boutique application; if you've seen any of Google's demos before, you know we like to use this one a lot. It's a microservices-based application, and I'm going to use GitLab CI to deploy it on top of those four clusters we just looked at. And with that, let's move to our console.

The first thing I want to show you is that we can see all four clusters here: the two GKE clusters, where you can see the type is GKE, and the two external clusters, which in this case are our EKS clusters. This is, like I mentioned, the unified management hub. Here you can see any Anthos cluster that is registered to your fleet, and you're able to see the workloads, services, and ingress, manage those workloads, and do a lot more than just view them through this interface.

Okay. The first thing we're going to do in this demo is take on the persona of Charlie, the admin who's going to set up the gateway. I have a script I've created for this so I don't fumble with copying and pasting throughout the demo; it's interactive, though, so we'll get to see what it's doing. The first thing that happens is just setting up some environment variables and enabling some APIs to get us set up. You can see the project ID and the member email, and the member here is the service account for GitLab that we're going to use to deploy to the four clusters. Then we add some required roles, or permissions, to that service account. Like I mentioned, there are two layers of authorization, and the first one is at the GCP IAM level, which is what we're doing by adding those roles to the service account.
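As a hedged sketch of what that GCP IAM layer of the setup can look like: the project ID, service account name, and exact role names below are assumptions rather than a copy of the demo script.

```sh
PROJECT_ID=my-project
MEMBER="serviceAccount:gitlab-deployer@${PROJECT_ID}.iam.gserviceaccount.com"

# Enable the APIs that the fleet and the gateway depend on
gcloud services enable gkehub.googleapis.com connectgateway.googleapis.com \
  cloudresourcemanager.googleapis.com --project "${PROJECT_ID}"

# IAM layer: let the GitLab service account call the Connect Gateway and
# view the fleet memberships it will deploy to
gcloud projects add-iam-policy-binding "${PROJECT_ID}" \
  --member="${MEMBER}" --role="roles/gkehub.gatewayEditor"
gcloud projects add-iam-policy-binding "${PROJECT_ID}" \
  --member="${MEMBER}" --role="roles/gkehub.viewer"
```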
The other layer is through RBAC, with cluster roles and cluster role bindings on the Kubernetes clusters themselves. Now, instead of going and running kubectl apply against each of the four clusters I want to configure, I'm going to leverage Anthos Config Management: like I mentioned, there's a Git repo that I commit configuration and policy to, and it makes sure that gets applied to all the clusters registered to it. So let's start with that. We're just going to create a couple of files that contain the cluster roles and role bindings, and once that's done, we're going to commit those files to the config repo and then check that the configuration got applied correctly. If we go to the config repo, we can see the two files we just committed, permission.yaml and impersonate.yaml. Both of these, like I mentioned, contain the cluster roles and role bindings necessary for setting up the gateway (there's a rough sketch of what they can look like at the end of this walkthrough). We can also go to the Anthos Config Management screen and watch the syncing as it happens: three of the clusters are synced, and one is in progress and will be done soon.

Next we can move on to GitLab, where I'm now going to take on the persona of Alice, the app or service operator who wants to deploy to those four clusters. I'm going to start the pipeline, because it will take a minute or two, and while it runs I'll show you the GitLab CI file. What I want to highlight is how simple it is from this point on; it's so easy for Alice to do her job now that Charlie has already done the setup for her. Everything at the top of the file is just some setup for credentials and authenticating with gcloud, plus a couple of steps for cloud operations that aren't relevant to this demo. The part I want to highlight is one of these deploy steps here, and you'll see it's repeated four times, once for each cluster. It's just one command to authenticate and get credentials, essentially creating a kubeconfig entry for the cluster, and then the next command is a very simple kubectl apply of the manifests I want to deploy to that cluster.

Let's go back to our pipeline and take a look. It's taking a minute to start, but it should get going; let's just give it a second. Okay, it looks like it's already deployed to a couple of the clusters. You can see here that a new kubeconfig entry was created for this cluster and then the deployment ran, and that repeats for each one of our four clusters, because the commands are all there. Let's head back and make sure we actually did deploy. Going to Kubernetes Engine workloads, sorry, Services & Ingress, I can check the ASM ingress gateway. In this case, the application will be available at the IP address of the ingress gateway, because of how Anthos Service Mesh is set up. And here we are: Online Boutique is deployed, with lots of fun products to shop for. But essentially, all we needed to do as an app or service operator was run one command to authenticate against each cluster, and then the kubectl command we would normally run anyway.
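For reference, here is a rough sketch of the two pieces from this part of the demo. The cluster names, service account, and file paths are illustrative, not copied from the demo project. First, the impersonation RBAC that gets committed through the config repo (permission.yaml would then bind the impersonated identity to whatever role it needs):

```yaml
# impersonate.yaml (sketch): allow the Connect agent to impersonate the
# GitLab service account when requests arrive through the gateway
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: gateway-impersonate
rules:
- apiGroups: [""]
  resources: ["users"]
  verbs: ["impersonate"]
  resourceNames:
  - gitlab-deployer@my-project.iam.gserviceaccount.com
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gateway-impersonate
roleRef:
  kind: ClusterRole
  name: gateway-impersonate
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: connect-agent-sa
  namespace: gke-connect
```

And second, one of the four per-cluster deploy jobs in the GitLab CI file can be boiled down to roughly this, assuming the job image provides both gcloud and kubectl and the service account key is passed in as a CI file variable:

```yaml
# .gitlab-ci.yml (sketch): one deploy job, repeated per cluster in the demo
deploy-gke-cluster-1:
  stage: deploy
  image: google/cloud-sdk
  script:
    # Authenticate as the GitLab service account
    - gcloud auth activate-service-account --key-file="$GCP_SA_KEY"
    # One command to create a kubeconfig entry that targets the Connect Gateway
    - gcloud container fleet memberships get-credentials gke-cluster-1
    # ...and then the kubectl command you would normally run anyway
    - kubectl apply -f kubernetes-manifests/
```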
And with that, that concludes my presentation and demo for today. I hope you enjoyed it. A lot more information is available in the documentation, so please feel free to go look at the Connect Gateway docs. In addition, like I mentioned, it's going to be GA very soon, so you'll be able to use it for your production workloads. Thank you so much.