In this webinar, I'm going to demonstrate how easy it is to run and secure a hybrid Kubernetes cluster. Hello, my name is Reza and I am a developer advocate at Tigera. This webinar is divided into five sections. I'll start by giving you an overview of Project Calico. Then I'm going to briefly talk about hybrid clusters and Windows workloads, and show you how you can leverage your current Calico knowledge to secure such an environment. But don't feel overwhelmed if you're just starting your cloud journey, because at the end of this webinar I'll give you all the resources that you might need to run the same environment, either locally or in the cloud. And I'll share with you a secret on how to become a Calico expert. So let's start by checking out what Project Calico is. Project Calico is an active community focused on cloud networking and security. Feel free to join our community using these social networking handles and drive the conversation where you see a need for change, or seek help for your Calico journey from developers who are actively working on the project. And if you're already a Calico community member, you might find the Calico Big Cats ambassador program a very interesting next step. Okay, but I still haven't explained what Project Calico is. Project Calico is the community behind a pure Layer 3 approach to virtual networking and security for highly scalable data centers. We offer Calico, a free and open source networking and network security solution for containers, virtual machines, and native host-based workloads. Calico supports multiple architectures and platforms. Calico is designed to be modular, and its pluggable data plane approach offers eBPF and iptables for Linux environments and the Host Networking Service (HNS) for Windows environments. This modular architecture makes Calico a great choice for any environment and gives you the required tools to be in charge of your data.
In fact, Calico, eBPF, and HNS are some of the foundational technologies that provide networking, security, observability, image assurance, and runtime threat defense for our enterprise solutions. Now that we have a better understanding of Calico, let's talk about hybrid clusters. It is difficult to talk about Kubernetes without mentioning Linux, but Kubernetes supports a broad range of platforms. For example, Kubernetes officially supports Windows, and if you're now wondering, yes, you can containerize your Windows applications and run them at scale by using the same tools and manifests that you are already using for your Linux containers. But before jumping into installation steps, there are a few requirements that we need to discuss. First of all, Windows nodes can only be workers in a Kubernetes environment, which means that you will need a Linux control plane node to run the Kubernetes system applications in your cluster. You should also keep in mind that containerization is a relatively new concept for Windows, so make sure you are using a recent version of Windows Server, preferably Windows Server 2019 or later. Another thing to consider is the version of your Kubernetes cluster. If you want to run a hybrid environment, make sure that you are using Kubernetes version 1.21 or higher, since Windows support is stable in these versions. You will also need a capable CNI to provide networking and security features. Since Linux and Windows applications are not compatible and each requires a different environment to run, it is important to choose a CNI that can run natively on both platforms. Okay, now let's talk about Windows workloads. Linux and Windows containers are very similar. For example, you can run a Windows container in both on-prem and cloud environments, which allows you to create an agile development environment in your enterprise locally or deploy your application at scale in the cloud.
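To make the "same tools and manifests" point concrete, here is a minimal sketch of a Deployment pinned to Windows worker nodes with a standard nodeSelector. The resource names are illustrative, and the IIS image tag is an assumption that must match your Windows Server version.

```shell
# Deploy a Windows workload with the same manifest format used for Linux.
# The kubernetes.io/os nodeSelector pins the pod to Windows nodes.
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: win-webserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: win-webserver
  template:
    metadata:
      labels:
        app: win-webserver
    spec:
      nodeSelector:
        kubernetes.io/os: windows
      containers:
      - name: web
        # Illustrative image; the tag must match the host Windows version.
        image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019
        ports:
        - containerPort: 80
EOF
```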
Windows containers can be lightweight, which can help you minimize the attack surface by removing unnecessary libraries from your production environment. And like Linux, they offer process isolation, which can efficiently divide your hardware resources and save a lot of costs for you and your company. However, since Linux and Windows are different on a fundamental level, some of the capabilities that we take for granted in a Linux environment can be a bit more complicated for Windows containers to achieve. For example, Windows base images range from a full implementation of Windows APIs and services to a minimal version with a small footprint. This is an important fact to consider, since your cloud bills are directly affected by the amount of storage that you request from the cloud provider. On top of that, a container based on a huge image might take some time before it is fully downloaded and extracted in your Windows container runtime environment, which can delay the initial start of your workloads. Another thing to keep in mind when working with Windows containers is kernel compatibility. Windows containers are highly dependent on the host kernel, so in the container build process we have to be very careful to choose a base image that matches the underlying host. Windows offers two methods of isolation: Hyper-V isolation and process isolation. Kubernetes only supports process isolation. In this mode, processes run concurrently on a host in different namespaces, which is very similar to how Linux establishes isolation in a container environment, if you are familiar with that concept. Now, how do we secure these Windows workloads? As we all know, Kubernetes doesn't enforce network policies on its own; it delegates that responsibility to the CNI plugin. And depending on your CNI's capabilities, you will have different tools and features available to create a secure environment.
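One practical way to check kernel compatibility from the cluster side is the node.kubernetes.io/windows-build label that the kubelet sets on Windows nodes; a quick sketch:

```shell
# List Windows nodes together with their host build number, so you can
# pick a base image tag that matches the host kernel
# (e.g. an ltsc2019 image for build 10.0.17763).
kubectl get nodes -l kubernetes.io/os=windows -L node.kubernetes.io/windows-build
```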
For a hybrid cluster, you will need a CNI that supports both Linux and Windows platforms. This is because the CNI needs compatible applications that can support the host operating system and implement software-defined networking by interacting with the host's networking capabilities. Alright, it's time for the demo. First, let's create a hybrid cluster by using the Azure Kubernetes Service, or AKS. Let's start by checking the subscription that is set on my Azure CLI. Next, I need to register the EnableAKSWindowsCalico feature for my account, which is located in the Microsoft.ContainerService namespace. Before moving to the next step, we need to ensure that this feature is in a registered state. Now I just need to register the newly configured provider with my subscription, and every Windows node will come with a pre-configured Calico installation afterward. Cloud resources in Azure need to be associated with a resource group. For this demo, I'm going to create a resource group in the Australia East region. I've chosen Australia East because I'm using a free account for this demo, and since this region is usually not crowded, I can pretty much create any resources in it without restrictions. The AKS creation command has many options and can be a bit daunting to explain, so I'm going to focus on two important parts of it. First, it is mandatory to select Azure as the network plugin when you're creating an AKS cluster, since Windows nodes are only available when you're using the Azure CNI. Second, the network policy engine must be set to Calico to match the Windows nodes. Now that the AKS cluster creation is complete, I can use the Azure CLI to export the kubeconfig file and remotely access the cluster API server. Let's check out the cluster and its participating nodes. As expected, there's only a Linux node in this cluster. Let's change that by adding Windows to the mix.
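The AKS steps above can be sketched with the Azure CLI roughly as follows. The resource group, cluster name, and Windows admin credentials are placeholders, and exact flags can vary between CLI versions, so treat this as a sketch rather than a copy-paste recipe.

```shell
# Confirm the active subscription.
az account show

# Register the EnableAKSWindowsCalico feature in the
# Microsoft.ContainerService namespace and wait for "Registered".
az feature register --namespace Microsoft.ContainerService --name EnableAKSWindowsCalico
az feature show --namespace Microsoft.ContainerService --name EnableAKSWindowsCalico \
  --query properties.state

# Propagate the newly registered feature to the subscription.
az provider register --namespace Microsoft.ContainerService

# Resource group in Australia East, then the cluster itself.
# --network-plugin azure is required for Windows node pools, and
# --network-policy calico enables Calico policy enforcement.
# Windows admin credentials (placeholders) are set now so that a
# Windows node pool can be added later.
az group create --name calico-hybrid-rg --location australiaeast
az aks create \
  --resource-group calico-hybrid-rg \
  --name calico-hybrid \
  --network-plugin azure \
  --network-policy calico \
  --windows-admin-username demoadmin \
  --windows-admin-password 'Sup3rSecret!Pass' \
  --node-count 1

# Export the kubeconfig and list the (currently Linux-only) nodes.
az aks get-credentials --resource-group calico-hybrid-rg --name calico-hybrid
kubectl get nodes -o wide
```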
This time, I'm going to use the AKS nodepool subcommand to create a node pool and add a Windows node to my cluster. A quick note here: the AKS nodepool subcommand accepts an OS type argument. There's also a custom AKS header that can change some of the default behaviors of the AKS cluster, which I'm using to change the default container runtime from Docker to containerd. Let's check out the nodes again. Perfect, we have a hybrid cluster. Now I'm going to deploy a simple Windows workload that offers a website, along with a load balancer service. The load balancer service will acquire an external IP address that can be accessed via the Internet. But before using it, we have to make sure that the workload pod is in a running state. Seems like the container can reach the Internet, which is a huge security risk. So let's change that by leveraging Calico global network policies. AKS installs Calico through the Calico operator, which allows us to use the kubectl get tigerastatus command to check which Calico components are installed in this cluster. It seems like only Calico itself is installed. Let's change that by adding the API server. Calico resources are under the projectcalico.org/v3 API group, and they cannot be accessed by kubectl by default. This is why we need to either download calicoctl or install the Calico API server to interact with these resources. Let's use the tigerastatus command again to check the installation progress. Great, seems like both components are available now. Now I'm going to copy the default deny example from the Project Calico docs to restrict my pod. Let's check the web UI and see if the container can still connect to the Internet. Excellent, the container cannot reach the Internet. An important thing to remember while using a cloud provider is that resources are billed to your account with a pay-by-the-minute model.
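Those node-pool and policy steps look roughly like this; the resource names are placeholders, and the default-deny policy is adapted from the example in the Project Calico docs.

```shell
# Add a Windows node pool; the custom header switches the Windows
# container runtime from Docker to containerd.
az aks nodepool add \
  --resource-group calico-hybrid-rg \
  --cluster-name calico-hybrid \
  --name win1 \
  --os-type Windows \
  --node-count 1 \
  --aks-custom-headers WindowsContainerRuntime=containerd

# Check which Calico components the operator has installed.
kubectl get tigerastatus

# Add the Calico API server so kubectl can talk to projectcalico.org/v3.
kubectl apply -f - <<EOF
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}
EOF

# Once the apiserver component is Available, apply a default-deny
# GlobalNetworkPolicy that spares the system namespaces.
kubectl apply -f - <<EOF
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: default-deny
spec:
  selector: projectcalico.org/namespace not in {'kube-system', 'calico-system', 'calico-apiserver'}
  types:
  - Ingress
  - Egress
EOF
```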
So make sure you always delete the resources that you don't need to avoid unnecessary extra charges. Awesome, it worked. Now, I don't know about you, but the AKS cluster deployment seemed a bit magical. So let's dive into what actually happens behind the scenes by creating a hybrid cluster in a local environment. Alright, I've got a local Linux machine with all the necessary Kubernetes packages, and all I need to do at this point is to instruct kubeadm to initialize my cluster. Similar to AKS, after kubeadm is done, I'm going to copy the kubeconfig file to the current user's home directory to access the cluster API server. Since this is a local cluster, I'm going to use Calico for both networking and policy enforcement. To install Calico, I need to apply the Tigera operator manifest and create an Installation resource. Let's use the tigerastatus command to check the components. Alright, like AKS, I've only installed Calico. Let's carry on by adding the API server. Installing the API server is pretty easy: you just need to create an APIServer resource in the operator.tigera.io/v1 API group. Let's use the tigerastatus command to watch the installation progress. I've got an idea: let's download the calicoctl binary and check out how that works. calicoctl can use the same kubeconfig file that we copied at the end of the kubeadm initialization step to change the Calico configuration. For example, by using calicoctl, we can change the strict affinity value and prevent Linux nodes from borrowing IP addresses in a hybrid setup. Please keep in mind that this is a mandatory step for any hybrid cluster that uses Calico for networking. Another thing to keep in mind is that Calico uses IP-in-IP (IPIP) encapsulation by default to establish communication between two nodes, and since Windows does not offer IPIP support natively, we need to change this behavior in a hybrid cluster. To do this, I can simply patch the Installation resource and the operator will take care of the rest.
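A sketch of the local setup, assuming a fresh kubeadm host; the pod CIDR and the pinned Calico version are illustrative, so check the Project Calico docs for current values.

```shell
# Initialize the control plane and copy the kubeconfig for kubectl.
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install the Tigera operator, then Calico via an Installation resource.
# VXLAN encapsulation is chosen up front because Windows nodes cannot
# use the IPIP default.
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.4/manifests/tigera-operator.yaml
kubectl apply -f - <<EOF
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
    - cidr: 192.168.0.0/16
      encapsulation: VXLAN
EOF

# Add the Calico API server as well.
kubectl apply -f - <<EOF
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}
EOF

# Mandatory for hybrid clusters: stop Linux nodes from borrowing
# IP addresses in the hybrid setup.
calicoctl ipam configure --strictaffinity=true
```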
Now that everything is set on our Linux node, let's add Windows to our environment. I'm going to quickly skip past setting the password and launching PowerShell, and go straight into enabling SSH on my Windows node. You can do this by adding the OpenSSH Server Windows capability and starting the SSH service. Next, I'll set the SSH service to start automatically, as Windows tends to like being restarted very often. Now, if I use the Windows node's IP address, I should be able to SSH into my node. Inside my node, I'll start PowerShell, download the containerd installation script from the Project Calico documentation website, and execute it to enable the Containers feature on my Windows host. Good thing that I set the SSH startup type to automatic, since my Windows is crying for a restart. After Windows fully restarts, I'm going to SSH into the node and run the containerd installer that we previously downloaded, with a containerd version argument to specify exactly which version of containerd should be installed on my system. After a successful containerd installation, I'm going to create a directory called k in my Windows drive, which is usually labeled as drive C. This directory will host the Kubernetes binary files that the Calico for Windows installer will download in the next step. Next, I'm going to write down my Linux node's IP address and use it with SCP to transfer the kubeconfig file from my Linux node, since the kubeconfig file is required by the Calico installer to automate the installation process. Alright, we're almost set. Now it's time to download the Calico installation script from the Project Calico documentation website. Before we can start the installation process, we have to add two environment variables to our system. These two variables tell the installation script where to look for the CNI binary files and where to store the related CNI configurations.
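On the Windows side, the preparation steps above look roughly like this in PowerShell. The script URL, the containerd version, the Linux node address, and the CNI paths are all illustrative assumptions; take the current values from the Project Calico docs for Windows.

```powershell
# Run in an elevated PowerShell session on the Windows node.
# Enable the OpenSSH server, start it, and make it start automatically
# so it survives the upcoming reboot.
Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0
Start-Service sshd
Set-Service -Name sshd -StartupType 'Automatic'

# Download the containerd installation script from the Project Calico
# docs site (URL and version are illustrative) and run it; it enables
# the Containers feature and will ask for a restart.
Invoke-WebRequest https://docs.tigera.io/calico/latest/scripts/Install-Containerd.ps1 -OutFile Install-Containerd.ps1
.\Install-Containerd.ps1 -ContainerDVersion 1.6.8

# Directory that will host the Kubernetes binaries the Calico for
# Windows installer downloads.
mkdir c:\k

# Copy the kubeconfig over from the Linux control plane node
# (192.168.0.10 is a placeholder for your Linux node's address).
scp user@192.168.0.10:~/.kube/config c:\k\config

# Tell the Calico installer where the CNI binaries and configs live
# (paths are illustrative).
$env:CNI_BIN_DIR = 'c:\k\cni'
$env:CNI_CONF_DIR = 'c:\k\cni\config'
```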
Since this installation updates the networking configuration of my Windows host, there's a potential risk of losing my SSH connection. To avoid this problem, I'm going to run the installer from my Windows console. Notice how I specified which version of Kubernetes must be installed on my system. This is important in a production environment, since a mismatch in the Kubernetes version might create version skew problems. And as I said before, we need to specify which Calico backend is going to be used for our node-to-node communication. While the installer is busy with the installation, I'm going to set a watch on my Linux node to notice when my Windows node joins the cluster. After the installer finishes, you should be able to find the Calico binaries inside the CalicoWindows directory in your Windows drive. Navigate into the kubernetes directory within the CalicoWindows folder and execute the kube services script to complete the installation. Now I just need to restart the kubelet and kube-proxy services to add this Windows node to my cluster. Now that I have a working hybrid cluster, I'm going to deploy the same workload that we used earlier for the AKS demo to verify the VXLAN encapsulation. Let's check that the Linux node can browse the web UI by issuing a curl command. I've got a response with a 200 status code, which means that the web UI is accessible from the Linux node. Now let's start a packet capture and filter for VXLAN packets. As expected, communication between the two nodes is encapsulated with VXLAN. Great! Let's quickly move on before something goes wrong in the demo. If you'd like to run the demo clusters, check out my GitHub repository; the link is at the top of this slide. And don't be shy to contact me if something goes wrong. I'm reachable on the Calico Users Slack and these social platforms. By the way, since this was a recording, I had the opportunity to take out all my mistakes and save some time.
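The verification can be sketched with curl and a packet capture; the service address and interface name are placeholders. VXLAN rides on UDP port 4789, which is Calico's default VXLAN port, so that's the filter to use.

```shell
# Confirm the web UI answers from the Linux node; a 200 status code
# means the Windows pod is reachable across the node boundary.
curl -s -o /dev/null -w '%{http_code}\n' http://<load-balancer-or-pod-ip>

# Capture node-to-node traffic and filter for VXLAN (UDP 4789) to see
# the encapsulated packets (interface name is a placeholder).
sudo tcpdump -i eth0 -nn udp port 4789
```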
But don't get discouraged if you run into any trouble. As I mentioned in the beginning, there is a thriving community around Calico that will be happy to accompany you on your cloud-native journey. At Tigera, we also offer a lot of free materials and courses on different topics, such as eBPF, AWS, and Calico itself. And we recently launched a dedicated Azure course on our Academy website. So if these topics interest you, I would highly recommend that you check out our free courses by either scanning the QR code on the screen or by visiting the academy.tigera.io URL. As promised, these are the resources that I used to create this presentation. And that's it for this webinar. I hope you have enjoyed it, and I'd like to thank you for viewing.