Hello, and welcome to this Kubernetes on Azure overview. In Azure, our goal is to make Kubernetes an enterprise-grade platform by design while building on an open-source foundation that gives customers the maximum degree of flexibility. We also look to take those innovations and enable them across both cloud and edge with Azure Arc. In this session, we'll be doing a number of demos: looking at security best practices and threat detection in Azure Security Center, how you can do secrets management with the CSI secret store driver, how to apply policy and governance with the Open Policy Agent and Azure Policy, how you can consume Kubernetes best practices with Azure Advisor and troubleshoot with AKS Diagnostics, and then how you can manage heterogeneous Kubernetes environments with Azure Arc. So let's dive in. When it comes to security, a good place to start is Azure Security Center, which provides deep integration with AKS. To begin, you'll want to look at ASC's assessment of your cluster's security posture, checking for security best practices. In this case, ASC has identified that we have not limited network access to the Kubernetes API server, creating a broad attack vector. In each case, ASC provides helpful pointers to documentation that help you take action on the recommendation; in this case, by providing a set of trusted CIDR ranges that may access the API server. Once your environment is in production, you'll want to be alerted about potential threats. Azure Security Center continually monitors the Kubernetes audit log, looking for suspicious activity that may suggest an attack. In this case, Azure Security Center has identified that a pod is accessing a sensitive host volume. It provides an assessment of the risk and suggested remediation steps, again pointing to the Azure documentation for suggested next actions. A critical part of securing any environment is proper management of secrets.
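Acting on that first recommendation from the CLI might look like the following sketch; the cluster name, resource group, and CIDR ranges here are placeholders, not values from the demo.

```shell
# Restrict access to the Kubernetes API server to trusted CIDR ranges.
# (Cluster name, resource group, and IP ranges are illustrative placeholders.)
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --api-server-authorized-ip-ranges 203.0.113.0/24,198.51.100.0/24
```

After the update completes, kubectl calls from outside those ranges will be refused at the network level, closing the attack vector ASC flagged.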
In partnership with HashiCorp, Azure has built the CSI secret store driver, which enables mounting compatible key management stores into a Kubernetes pod as a volume. Let's take a look at how that works. Here I have an Azure Key Vault that includes several secrets I'd like to use within my applications running in AKS. The secret store project is deployed as a daemon set and includes two components: the secret store driver itself, along with a provider for a compatible key store, in this case, Azure Key Vault. To mount a Key Vault into my application, I'm going to create a secret provider class, one of the custom resource definitions managed by the secret store project. I specify that I'm using the Azure provider and define how the application will authenticate to Key Vault. In this case, I'm using the AAD pod identity project, which allows me to specify unique AAD identities for pods running in Kubernetes. I specify the Key Vault name and tenant, and finally the objects that I want to pull from the Key Vault. Now let's look at the pod spec for the application which will be using the secret. First, I include a label to match this pod to the appropriate AAD identity, which has access to my Key Vault. Then I define an inline volume, which references the secret store driver and the secret provider class I just created. Finally, I mount that volume into my pod so that the specified secrets will be readable as files. Now we can go ahead and create those two resources, starting with the secret provider class and then the pod. We'll wait for that pod to get up and running. Once it does, the secrets that we are pulling from Key Vault will be available as files that it can read from the mounted volume. Now that it's up and running, we can exec into that pod and run a list command on the mounted volume. We can see the storage password available as a file, and in fact, we can even cat out that file and see the password from Key Vault.
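The two resources from that demo might look roughly like this sketch. The Key Vault name, tenant ID, identity label, and secret name are all placeholders, and the API versions shown are current ones, which may differ from the demo's.

```shell
# Sketch of the SecretProviderClass and pod from the demo.
# All names, the tenant ID, and the identity label are illustrative placeholders.
kubectl apply -f - <<'EOF'
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: azure-kv-secrets
spec:
  provider: azure                      # use the Azure Key Vault provider
  parameters:
    usePodIdentity: "true"             # authenticate via AAD pod identity
    keyvaultName: my-keyvault
    tenantId: 00000000-0000-0000-0000-000000000000
    objects: |
      array:
        - |
          objectName: storage-password
          objectType: secret
---
apiVersion: v1
kind: Pod
metadata:
  name: secrets-demo
  labels:
    aadpodidbinding: my-pod-identity   # matches this pod to its AAD identity
spec:
  containers:
    - name: app
      image: mcr.microsoft.com/cbl-mariner/base/core:2.0
      command: ["sleep", "3600"]
      volumeMounts:
        - name: secrets
          mountPath: /mnt/secrets
          readOnly: true
  volumes:
    - name: secrets
      csi:                             # inline volume using the secret store driver
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: azure-kv-secrets
EOF

# Once the pod is running, the secret is readable as a file:
kubectl exec secrets-demo -- ls /mnt/secrets
kubectl exec secrets-demo -- cat /mnt/secrets/storage-password
```

Because the secret arrives as a mounted file rather than an environment variable or a Kubernetes Secret object, it never has to be stored in etcd at all.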
A natural complement to security is policy. Organizations have all kinds of different policies for a wide variety of business needs, from compliance to reporting to cost management. The Open Policy Agent project, part of the CNCF, offers a powerful and flexible way to manage that myriad of policies. In Azure, we've baked OPA directly into Azure Policy and AKS. Enabling Azure Policy for AKS is simple and can be done through the portal or the CLI. In a few minutes, the policy add-on will be installed and the cluster will be ready to apply policies. Azure Policy comes with a set of built-in policies that are commonly used by organizations running Kubernetes in production. Let's take a look at how we can assign one of those policies to our AKS clusters. Within the assignments UI, I'll choose Assign Policy. Now when I create an assignment, I have the opportunity to define a scope, which means choosing the Azure subscription and optionally the resource group that I want the policy to apply to. Then I can optionally choose a set of excluded resources if there are resources within that scope that I don't want the policy to apply to. Next up, I choose the policy definition. This is where we can see the set of built-in policies that are available. If I search for Kubernetes, I'll see a few dozen built-in policies, each with a description of its definition. In this case, I'm going to choose limiting load balancer services to only be internal load balancers, so not exposing any external IPs. I can give the policy assignment a name. Then under the parameters tab, I have a couple of important options. First, I can choose the effect of the policy. By default, this particular policy is set to deny, which means that any service I try to deploy that would create an external load balancer will be blocked by the Gatekeeper admission controller. Now, in this case, we're adding this policy to an existing cluster.
We want to make sure that we're not breaking any existing workflows that our developers may have, so I'm actually going to change this to audit, which means that any resources that are out of compliance will be audited and made visible within the Azure Policy UI, but deployments of those resources will not be blocked. I can also choose a set of namespaces within my cluster that I want to be excluded from this policy, and there's a built-in set. Then I'll go ahead and create that assignment. Now, once that policy is created, I can go over to the compliance experience and find the assignment that I just created. This is where I'll be able to get a view of my compliance state relative to the policy definition that I just deployed. Initially, this is going to be in a not-started state. It'll take a few minutes for the policy to get deployed to that set of clusters, for the audit to run, and for those results to be reported back up into Azure Policy and become visible in this UI. Then on an ongoing basis, that audit will happen every 15 minutes, so within 15 minutes you'll always have the up-to-date state of policy compliance within your cluster. After a few minutes, I can hit Refresh here and see that I am out of compliance, because there is one resource that is not compliant with the policy that I just created: the external LB service within the policy demo cluster. I can pop open the Cloud Shell and take a look at that cluster to see if indeed there is an external load balancer service. There it is, that external LB service that was referenced within the policy experience. Indeed, you can see that it has been assigned an external IP, so it is in violation of that policy. Now, there's no doubt that Kubernetes has a lot of powerful capabilities, and there are many patterns emerging in the community about how to use them. But those may not always be obvious to newcomers.
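For reference, the CLI side of the policy demo could be sketched as follows; the cluster and resource group names are placeholders, not the ones from the demo.

```shell
# Enable the Azure Policy add-on on an existing AKS cluster.
# (Cluster and resource group names are illustrative placeholders.)
az aks enable-addons \
  --addons azure-policy \
  --resource-group myResourceGroup \
  --name policy-demo-cluster

# From the Cloud Shell, list services and look for LoadBalancer entries;
# any service showing an EXTERNAL-IP violates the internal-only policy.
kubectl get services --all-namespaces | grep LoadBalancer
```

Running the same check after switching the assignment's effect from audit to deny is a useful way to confirm that Gatekeeper will reject new external load balancers while existing ones remain visible in the compliance view.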
With Azure Advisor, we can make personalized recommendations for best practices you may want to consider based on our experience working with thousands and thousands of customers. In this case, we've detected a few improvements that could be made to this cluster, including the application of pod disruption budgets, a way of ensuring application availability is maintained during voluntary disruption events like cluster upgrades or scaling operations. Advisor includes links out to Azure documentation to make those recommendations actionable. We can provide that same level of analysis and insight when things go wrong. The AKS Diagnostics tool distills the learnings from thousands of customer support cases into a common set of troubleshooting steps grouped by category. In this case, let's look at cluster insights. The AKS Diagnostics tool runs a series of checks based on the telemetry we have about the cluster and common issues that customers encounter. In this case, two potential issues are detected, including the presence of OOM-killed pods, or pods that are continually restarting due to lack of memory. The tool provides information about the resources in question and the timeframe when issues were encountered, as well as links to documentation for how you can address the issue. So far, we've exclusively been looking at AKS-based clusters, but most customers are supporting a much broader set of environments and looking for a common platform across them. With Azure Arc, you can easily connect any conforming Kubernetes cluster into Azure, then view and manage it alongside your AKS clusters. So let's take a look at how that works. As a simplified approximation of an on-premises environment, I'm going to create a kind cluster here on my laptop. Kind stands for Kubernetes in Docker and is a simple way to run a one-node Kubernetes cluster inside of a Docker container. Once the kind cluster is created, I can use the Azure CLI to connect that cluster to Azure.
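Those two steps might look like this sketch; the cluster and resource group names are placeholders.

```shell
# Create a single-node kind cluster as a stand-in for an on-premises environment.
# (Cluster and resource group names are illustrative placeholders.)
kind create cluster --name laptop-cluster

# The connectedk8s CLI extension deploys the Arc agents into the cluster
# and registers it as a resource in Azure Resource Manager.
az extension add --name connectedk8s
az connectedk8s connect \
  --name laptop-cluster \
  --resource-group arc-demo-rg
```

The connect step works against any CNCF-conformant cluster with outbound connectivity to Azure, which is what makes the same flow apply equally to on-premises and other-cloud clusters.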
This will create a resource ID for the cluster in the Azure Resource Manager, allowing it to participate in many ARM-based experiences and making it visible and manageable within all ARM clients. As an example, if I move over to the Azure portal and hit refresh, I'll be able to see this new laptop cluster that I created right alongside all of my AKS-based clusters. I can also see some metadata about the cluster, including the number of nodes and the Kubernetes version. As the number of clusters and environments grows, it can become difficult to manage configuration across all of them. Azure Arc for Kubernetes builds on the GitOps pattern to help manage this. The GitOps pattern involves using a Git repo as the source of truth about cluster configuration and then running a control loop to continually seek that goal state, much like Kubernetes itself. In this case, I'm creating a new configuration for this cluster that will lay down a set of namespaces to be deployed. This process can also be automated using Azure Policy. Note the use of Flux, a CNCF project, as the operator type. With that configuration created, the Flux operator deployed in the cluster will establish a link to this Git repo, which defines a set of namespaces to be created. Within just a few minutes, those namespaces will be created in the cluster. Subsequently, the Flux operator will keep the cluster up to date with the goal state that was defined in the Git repo. Okay, so hopefully that's given you a good overview of the enterprise-grade capabilities available with Kubernetes on Azure and the connection those capabilities have back to the open-source community. Of course, we've only just scratched the surface in this session, so I'd encourage you to check out some of the resources listed here to learn more.
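As a postscript, creating a GitOps configuration like the one in the demo from the CLI might look roughly like this; the repository URL, cluster name, and resource group are placeholders, not the demo's actual values.

```shell
# Sketch: attach a Flux-based GitOps configuration to the Arc-connected cluster.
# (Repo URL, cluster name, and resource group are illustrative placeholders.)
az k8s-configuration create \
  --name cluster-config \
  --cluster-name laptop-cluster \
  --resource-group arc-demo-rg \
  --cluster-type connectedClusters \
  --operator-instance-name flux \
  --operator-namespace cluster-config \
  --repository-url https://github.com/example/cluster-config \
  --scope cluster
```

Once created, the Flux operator in the cluster pulls the repo on an interval and reconciles the cluster toward whatever manifests it finds there, so committing a new namespace definition to the repo is all it takes to roll it out.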