Hello everyone, welcome to another CNCF On-Demand recording. In this video, we're going to explore the best way to install Calico and give you some insight into what happens when you install a CNI such as Calico in your Kubernetes cluster. But before we get started, let's get to know each other a little bit more. My name is Reza and I'm a developer advocate at Tigera. Tigera is the company behind the open source project Calico, where we do all kinds of fun stuff to revolutionize Kubernetes networking and security. I used to be a security consultant, a system engineer, a network administrator, and a full stack developer. Currently, I'm advocating for a community that I love. I'm always eager to learn new stuff and open to suggestions, so let's connect and exchange ideas. This presentation is divided into five sections. First, I'm going to talk about Project Calico and give you a brief overview of what it is that we do at Tigera. Then I'm going to talk a bit about the Tigera operator and our motivation behind making it an open source project. Then, in a short demo, I'm going to demonstrate how to install Calico in a Kubernetes cluster by using the Tigera operator. After that, we will explore some basic Kubernetes networking and Container Network Interface concepts to get everyone up to speed for our journey into the inner workings of CNI installation, where I'm going to demonstrate how to install a CNI manually in a Kubernetes cluster. If you are new to cloud networking, don't worry, I've got you covered. There's a slide at the end of this presentation with all the links and information that you might need for your adventure. So what is Project Calico? Project Calico is an active community around cloud networking and security. We have a thriving community with more than 300 contributors and 8,000 Slack channel members.
Feel free to join our community using these social networking handles and drive the conversation where you feel a need for change, or seek help for your Calico adventure from developers who are actively working on the project. Our Slack channel, slack.projectcalico.org, is an inclusive environment dedicated to Calico and support for our open source community members. Project Calico offers a pure Layer 3 approach to virtual networking and security for highly scalable data centers. Calico is a free and open source networking and network security solution for containers, virtual machines, and native host-based workloads. Calico supports multiple architectures and platforms, such as x86 and ARM64, so you can basically install it in any environment. Calico is designed to be modular, and its pluggable data plane approach offers eBPF and iptables data planes for Linux environments and the Host Networking Service, or HNS, for Windows environments. This modular architecture makes Calico a great choice for any environment and gives you the required tools to be in charge of your software-defined networking traffic. In fact, Calico, eBPF, and HNS are some of the foundational technologies that provide networking, security, observability, image assurance, and runtime threat defense in our enterprise solutions at Tigera. Now, let's start with our motivation behind promoting the Tigera operator as the recommended way to install Calico. A lot of people trust Calico for securing their environments. To support such a wide variety of platforms and needs, we had to develop a unified way to install and maintain Calico. And that is why we transitioned the Tigera operator, our enterprise installer, to support the Calico open source installation process. The beauty of the operator installation is that it's just a one-liner command. After installing the operator, you can configure all Calico components from your Installation resource within the Kubernetes environment.
The operator then monitors the Installation resource to make sure your Calico is always configured correctly. The operator provides a simple way to troubleshoot each of the Calico components that are installed on your cluster. And just like Calico, the Tigera operator is free and open source. In fact, you can use this QR code to check its GitHub page, get involved with its development, and shape its future. The Tigera operator is based on the Operator Framework. The Operator Framework is an open source SDK that allows you to put operational knowledge into software, and it's the foundational software that the operator is based on. The best part about an operator is its integration with Kubernetes, which allows you to install, configure, maintain, or upgrade your software. An operator can create or modify pods, deployments, config maps, or services that are required for your cloud-native application, providing a single interface to manage and deploy them. If you're interested in knowing more about the Operator Framework, use this QR code and head to their web page. Now that we have a basic understanding of the Tigera operator, let's use it to install Calico. So first, I'm going to use kind to provision a three-node cluster on my local computer. The only thing worth mentioning here is that I'm using the disableDefaultCNI option in order not to provision any CNI while creating the cluster. Now, because my cluster doesn't have any CNI, my nodes should be in a NotReady status. I can verify this by running a kubectl get nodes command. Alright, to change my nodes' status to Ready, I just need to install a CNI. So let's go ahead and apply the YAML file for the Tigera operator. Inside the Tigera operator manifest, there is a TigeraStatus capability that will be added to the Kubernetes API server and can be queried to get information about the state of Calico components. Alright, everything is set.
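For reference, the kind configuration used for a cluster like the one in this demo could look roughly like the following. This is a minimal sketch, not the exact file from the demo; the node layout is assumed, but disableDefaultCNI is the key setting mentioned above:

```yaml
# kind-config.yaml -- minimal sketch; node roles and counts are assumptions
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  # Skip kind's default CNI (kindnet) so we can install Calico ourselves
  disableDefaultCNI: true
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```

You would create the cluster with something like `kind create cluster --config kind-config.yaml`, then apply the tigera-operator.yaml manifest from the Calico release you are targeting.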
Now, all I need is to create an Installation resource and tell the operator how to configure Calico. And as you can see, I can use the tigerastatus command to get some information about the states that my Calico and API server deployments will go through. That is all that you will need to install and run Calico in your environment. From here, I'm going to explain what happens in your Kubernetes cluster and how the Tigera operator does its magic to install a CNI. Since Kubernetes has a modular approach to networking, it delegates this responsibility to CNI, or Container Network Interface, plugins. Prior to version 1.24 of Kubernetes, kubelet was in charge of CNI arguments, and you could have adjusted them by using the --cni-conf-dir or --cni-bin-dir arguments. These arguments would have allowed you to change the location of the CNI config or binaries in your cluster. In version 1.24, however, this changed, and your container runtime interface is in charge of managing CNI config and binaries. There are a lot of CNIs out there, and each one provides a different set of features. For example, these are a few features that Calico provides for your environment. Calico has its own IP address management plugin, which allows you to create different IP pools, allocate static IPs to endpoints or tunnels, and a lot of other cool things. Calico uses BIRD to implement BGP, or Border Gateway Protocol, routing between your cluster resources and other BGP-capable devices in your network. You could take advantage of this feature by peering your on-prem environment directly with your Kubernetes cluster. Calico extends the Kubernetes network policies and allows you to write cluster-wide security policies to secure your cluster. It also offers a range of new selectors that can tailor security policy to target any resource inside or outside of your cluster. Calico has a pluggable data plane architecture and multiple data planes.
These data planes are based on iptables, eBPF technology, FD.io's VPP (originally from Cisco), and Windows HNS, and they allow you to be in charge of your software-defined networking traffic. Calico offers multiple networking overlays, such as VXLAN and IPIP, that can help you establish networking in restricted environments such as cloud providers. Calico integrates with other awesome open source projects like Istio to establish application layer policy enforcement, service mesh, and observability. Calico has an integration with WireGuard for node-to-node and pod-to-pod traffic encryption. For busy clusters, Calico deploys Typha, which holds a cached version of the Kubernetes API server information that Calico components use instead of directly querying your Kubernetes database. Okay, it's time to manually install the CNI on our Kubernetes cluster. So again, I'm going to provision a three-node cluster by using kind. However, this time, I'm also going to add a script file that I wrote to help with the CNI installation, but more on this script file later. Okay, now that we've got a cluster, let's go and verify that our nodes are not ready. Last time, we used kubectl get nodes after provisioning the cluster to verify that the Kubernetes nodes are not running. This time, let's go inside the control plane node and check this from the CRI perspective. Okay, just like last time, our CRI cannot find any binary files or configs inside the binary directory or config folder. All right, so let's go ahead and apply the Calico CRDs. These CRDs will help Kubernetes know what capabilities our CNI will offer. And since these CRDs are in the projectcalico.org API group, we will need calicoctl to interact with them. All right, after installing calicoctl, we're going to use it to get information about the IP pools that are currently inside our cluster. As expected, we don't have any IP pools.
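As a sketch of what the upcoming IP pool step produces, a Calico IPPool manifest might look like this. The CIDR and field values shown are assumptions (based on kind's default pod subnet), so adjust them to whatever your cluster CIDR actually is:

```yaml
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-pool
spec:
  cidr: 10.244.0.0/16   # assumed: kind's default pod CIDR; match your cluster's
  ipipMode: Always       # use the IPIP overlay between nodes
  natOutgoing: true      # SNAT pod traffic leaving the pool
```

Because this resource lives in the projectcalico.org API group, it would be applied with calicoctl rather than plain kubectl.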
Now, let's use the cluster-info command to get information about our cluster CIDR and use it to create an IP pool. From here, let's SSH into our control plane node and continue our progress from there. The CNI requires a certificate in order to talk to the Kubernetes API server. To solve this, I'm going to create a folder and issue a certificate signing request inside it. Now that I have a certificate request and a private key, I need to use my Kubernetes certificate authority key to sign the request and actually create a certificate from it. Keep in mind that you can change the expiration time of your certificate by changing the days value in the previous command. Next, I'm going to store the API server IP address in an environment variable and then use kubectl to generate a kubeconfig file that allows Calico to talk to the API server using the credentials that we just created. After creating an identity, it's always good to tie it to some permissions so that our credentials cannot do anything more than they're supposed to. Here, you can see the cluster role that I'm going to deploy for my Calico CNI so that it can only access certain parts of the information that the Kubernetes API server offers. Alright, so inside Kubernetes, a cluster role on its own cannot do anything; it does nothing more than declare what should be allowed. In order to actually make it affect the cluster, we need to create a cluster role binding that ties the permission and the identity together. We're almost done. So let's download the Calico IPAM and Calico CNI binary files and store them inside the /opt/cni/bin folder. Next, we have to copy the kubeconfig file that we created with kubectl into the /etc/cni/net.d configuration folder and set the right permissions on it. Now that we have the permissions in place and have actually copied the kubeconfig file into our config directory, let's go ahead and create a conflist.
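A minimal conflist for this setup might look something like the following. Treat it as a sketch rather than the exact file from the demo: the network name, CNI version, and kubeconfig path are assumptions:

```json
{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "calico",
      "datastore_type": "kubernetes",
      "ipam": { "type": "calico-ipam" },
      "policy": { "type": "k8s" },
      "kubernetes": { "kubeconfig": "/etc/cni/net.d/calico-kubeconfig" }
    }
  ]
}
```

This would typically be saved as something like /etc/cni/net.d/10-calico.conflist, since the container runtime picks up config files from that directory in lexical order.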
A conflist will be used by Kubernetes, or kubelet, to tell the cluster about the capabilities that our CNI will offer. Now, if we go ahead and issue a kubectl get nodes command, what do you know? The control plane is now ready for action. From here, we just have to go into each node participating in our cluster and do the same procedure again and again. All right, so I'm going to use the script file I mentioned earlier to speed up the repetition step. So if we go ahead and issue a get nodes command again, we will see all nodes are now in a Ready state. However, our pods are not able to acquire any IP addresses. We can verify this by issuing a kubectl get pods command. As you can see, both CoreDNS pods are stuck in the ContainerCreating phase, and that's because they cannot get an IP address. This is because we haven't installed calico-node, which contains Felix, the brain of Calico. So we have the CNI and we have the IPAM binary, but they cannot communicate with Felix. All right, so let's go and create an identity for our calico-node by creating a service account. Now let's create another cluster role for our calico-node and then tie it with a cluster role binding to the service account. All right, everything is set. Now we can basically just deploy the calico-node DaemonSet and wait for it to come up. After this phase, if we go ahead and issue another kubectl get pods -A command, we should see all our pods are now in a Running state, and we have a fully functional Kubernetes cluster. As promised, these are the links for all the commands that I've used in this presentation. Don't be shy about contacting me if something goes wrong; I'm reachable at these social places and on the Calico Users Slack. Well, that's it for this presentation. I hope you have enjoyed it, and I'd like to thank you for watching.