The CI-CD tunnel, which leverages the GitLab agent for Kubernetes, enables users to access Kubernetes clusters from GitLab CI-CD jobs. In this short video, we'll review how you can securely access your clusters from your CI-CD pipelines by using generic impersonation, a capability introduced in GitLab 14.5. In addition, we will show the activity list of the GitLab agent for Kubernetes, a feature introduced in GitLab 14.6 that can help you detect and troubleshoot faulty events.

For this example, we create a GKE Standard Kubernetes cluster and name it CSEVEDRA GA4K cluster. We select the zone and version 1.21 of Kubernetes, and ensure that our cluster will have three nodes. We leave the security and metadata screens with their default values and click the Create button.

Let's proceed now to this top-level group, which contains the three projects we will use to show impersonation with the CI-CD tunnel. Impersonation can be set at the project or group level; in this example, we will set it at the project level. Project GA4K will hold the configuration for the GitLab agent for Kubernetes and also define the impersonations used with the CI-CD tunnel. Project sample application will use the CI-CD tunnel managed by the agent to connect to the Kubernetes cluster and execute a pipeline under different impersonations. Project cluster management will also use the CI-CD tunnel to connect to the cluster and install the Ingress application on it.

Not only does the CI-CD tunnel streamline the deployment, management and monitoring of Kubernetes-native applications, but it also does so securely and safely by using impersonations that leverage your Kubernetes cluster's RBAC rules.

Project GA4K contains and manages the configuration for the GitLab agent for Kubernetes, called CSEVEDRA agent K. Looking at its config.yaml file, we see that the agent points to itself for manifest projects.
But most importantly, it provides CI-CD tunnel access to two projects: sample application and cluster management. This means that these two projects' CI-CD pipelines will have access to the Kubernetes cluster that the agent is securely connected to. Project sample application has a pipeline that we will later execute under different impersonations, and project cluster management has a pipeline that will install only the Ingress application on the Kubernetes cluster, as configured in its helmfile.yaml file.

Let's head back to project GA4K and connect to the Kubernetes cluster via the agent. We select agent CSEVEDRA agent K to register it with GitLab. This step generates a token that we can use to install the agent on the cluster. We copy the Docker command to our local desktop for later use. Notice that the command includes the generated token, which you can also copy separately.

From a local command window, we ensure that our connectivity parameters to GCP are correct. We then add the credentials to our kubeconfig file to connect to our newly created Kubernetes cluster, CSEVEDRA GA4K cluster, and verify that our context is set to it. Once this is done, we can list all the pods that are up and running on the cluster. Finally, we paste the Docker command that installs the GitLab agent for Kubernetes onto this cluster, making sure that its namespace is GA4K agent. We list the pods one more time to check that the agent pod is up and running on the cluster.

The screen will refresh and show our Kubernetes cluster connected via the agent. Clicking on the agent name takes us to the agent's activity information page, which lists agent events in real time. This information can help you monitor your cluster's activity and detect and troubleshoot faulty events from your cluster. Connection and token information is currently listed, with more events coming in future releases.

Let's go ahead and deploy the Ingress application to the Kubernetes cluster from project cluster management.
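As a rough sketch, the agent configuration described above might look like the following config.yaml. The group and project paths here are hypothetical placeholders, not the actual paths from the video:

```yaml
# config.yaml in the agent project (group/project paths are hypothetical)
gitops:
  manifest_projects:
    # The agent project points to itself for manifests
    - id: my-group/ga4k
ci_access:
  projects:
    # These two projects' CI-CD pipelines get tunnel access to the cluster
    - id: my-group/sample-application
    - id: my-group/cluster-management
```

Any project listed under ci_access can then switch its kubectl context to the agent and reach the cluster from its pipeline jobs.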
Let's make a small update to the project pipeline to launch it. Once the pipeline launches, we navigate to its detailed view to track its completion and check in its log that it was able to switch to the agent's context and successfully ran all the installation steps. For further verification, we list the pods in the cluster and check that the Ingress pods are up and running.

Before we start the impersonation use cases, let's start tailing the agent's log file from a command window and increase its log level to debug.

Fine-grained permission control with the CI-CD tunnel via impersonation allows you to leverage your Kubernetes authorization capabilities to limit what can be done with the CI-CD tunnel on your running cluster. It lowers the risk of providing access to your Kubernetes cluster through the CI-CD tunnel by keeping that access limited. It also lets you segment fine-grained permissions with the CI-CD tunnel at the project or group level and, finally, control permissions at the user or service account level. By default, the CI-CD tunnel inherits all the permissions of the service account used to install the agent in the cluster.

Let's see this first use case. We head to the project sample application, whose pipeline will run under the agent's impersonation, which is the default. Notice that the pipeline has a test stage and a test job. It sets the variable KUBE_CONTEXT first, loads an image with the version of kubectl that matches the version of the Kubernetes cluster, and executes two kubectl commands that access the remote cluster via the agent. We run the pipeline manually and see in the test job log output that the context switch was successful and that the kubectl commands executed correctly.

Let's now impersonate the CI job that accesses the cluster. For this, we modify the agent's configuration and add the access_as attribute with the ci_job tag under it.
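The access_as change just described might look roughly like this in the agent's config.yaml (the project path is again a hypothetical placeholder):

```yaml
# config.yaml: run tunnel requests under the CI job's identity
# instead of the agent's own service account
ci_access:
  projects:
    - id: my-group/sample-application
      access_as:
        ci_job: {}
```

With this in place, requests made through the tunnel by that project's jobs are impersonated as the CI job rather than inheriting the agent's full permissions.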
As we save the updated configuration, we verify in the log output that the update has taken place in the running agent. We execute the pipeline of the sample application project again and, as before, verify in the job log output that the context switch was successful and that the kubectl commands executed correctly.

The last use case is the impersonation of a specific user or service account defined within the cluster. I have pre-created a service account called Jane on the Kubernetes cluster under the default namespace, and Jane has been given permission to do a get, list and watch on the cluster's pods, as you can see from the output in the command window. Remember that the service account GitLab agent and its namespace GA4K agent were created earlier when we installed the agent by running the Docker command.

In order for the agent to be able to impersonate another service account or user, it needs to have the permissions to do so. We do this by creating a cluster role, impersonate, for impersonating users, groups and service accounts, and then creating a cluster role binding, allow impersonator, to give these permissions to the agent's service account, GitLab agent, in the GA4K agent namespace. We then edit the agent's configuration, add the impersonate attribute, and provide Jane's service account as the value of the username tag. As we commit the changes, we check the log output to verify that the update has taken place in the running agent.

Since we know that Jane has permission to list the running pods in the cluster, let's head to the project sample application pipeline and add the command kubectl get pods --all-namespaces to it. We commit the update, head over to the running pipeline, and drill into the test job log output to see that the context switch was successful and that the kubectl commands executed correctly, including the listing of the running pods in the cluster.
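The RBAC objects described above could be sketched as follows. The names mirror those mentioned in the video, but these exact manifests are an assumption, not the video's literal files:

```yaml
# ClusterRole granting the verb needed to impersonate identities
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: impersonate
rules:
  - apiGroups: [""]
    resources: ["users", "groups", "serviceaccounts"]
    verbs: ["impersonate"]
---
# Bind that role to the agent's service account
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: allow-impersonator
subjects:
  - kind: ServiceAccount
    name: gitlab-agent      # created by the agent install
    namespace: ga4k-agent
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: impersonate
```

In the agent's config.yaml, the corresponding stanza would then set, under the project entry, `access_as: impersonate: username: system:serviceaccount:default:jane`, since Kubernetes addresses service accounts as users via the `system:serviceaccount:<namespace>:<name>` form.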
In this short video, we reviewed how you can securely access your Kubernetes clusters from your CI-CD pipelines by using generic impersonation. In addition, we showed the activity list of the GitLab agent for Kubernetes, which can help you detect and troubleshoot faulty events from your cluster. I hope you enjoyed this video and until next time.