Hello, everyone. Welcome to Cloud Native Live, where we dive into the code behind cloud native. I'm Annie Talvasto, a CNCF ambassador who also works in marketing, and I will be your host tonight. Every week, we bring you new presenters to show you how to work with cloud native technologies. They will build things, they will break things, and they will answer all of your questions, so you can join us every Wednesday to watch live. This week, we have Reza here with us to talk about exploring Kubernetes' Windows HostProcess installer. Very exciting. And as always, this is an official live stream of the CNCF, and as such, it is subject to the CNCF Code of Conduct. So please do not add anything to the chat or questions that would be in violation of that Code of Conduct; basically, please be respectful of all of your fellow participants, as well as the presenters. With that done, I'll hand it over to Reza to kick off today's presentation. Hello, everyone. With that introduction, let's talk about the HostProcess installer. My name is Reza. I'm a developer advocate at Tigera, and Tigera is the company behind the open source project Calico. This webinar is divided into five sections. I'll start by giving you an overview of Project Calico. Then I'm going to briefly talk about hybrid clusters, Windows containers, and Windows HostProcess containers. And at the end, there is a hands-on workshop that I will share with you. But don't feel overwhelmed if you are new to the cloud native journey. There is a slide at the end of this presentation which gives you all the resources that you need in order to create the same environment, both locally and in the cloud. If you've got any questions, please feel free to share them, and I'll try to answer them by the end of this webinar. But keep in mind that you can run the demo in your own browser when I share a QR code, instead of just watching me plead to the demo gods for favors. 
So let's start by checking out what Project Calico is. Project Calico is an active community around cloud native networking and security. Feel free to join our community using these social networking handles and drive the conversation when you see a need for a change, or seek help for your Calico journey from developers who are actively working on the project. And if you are already a community member, you might find our Calico Big Cats ambassador program a very interesting next step. Now that we know where to find Calico, let's talk about what Calico is. Project Calico is the community behind a pure Layer 3 approach to virtual networking and security for highly scalable data centers. We offer Calico, a free and open source networking and network security solution for containers, virtual machines, and native host-based workloads. Calico supports multiple architectures and platforms, and it is designed to be modular. Its pluggable data plane approach offers eBPF and iptables for Linux environments and the Host Networking Service, or HNS, for Windows environments. This flexibility and its modular architecture make Calico a great choice for any environment and give you the tools required to be in charge of your software-defined networking journey. In fact, eBPF and HNS are some of the foundational technologies that provide networking, security, and observability in our enterprise solutions. OK, now that we have a better understanding of Calico, let's see if we can find it in a hybrid cluster. It is difficult to think about Kubernetes without talking about Linux, but Kubernetes supports a broad range of platforms. For example, Kubernetes officially supports Windows. And if you're now wondering, yes, you can containerize your Windows applications to run them at scale by using the same tools and materials that you are already using for your Linux containers. 
But before jumping into transforming all your workloads into containers and YAML files, there are a few requirements that we need to discuss. First of all, Windows nodes can only be workers in a Kubernetes environment, which means that you will need a Linux control plane node to run the Kubernetes system applications in your cluster. You should also keep in mind that containerization is a fairly new concept for Windows, so make sure you're using a recent copy of Windows Server, preferably 2019 or above. Another thing to consider is the version of your Kubernetes cluster. While support for a hybrid environment is available in Kubernetes version 1.21, in order to run the HostProcess installer, you will need to run version 1.23 or higher. You will also need a capable CNI to provide networking and security features. Since Linux and Windows applications are not compatible and each requires a different environment, it is important to choose a CNI that can run natively on both platforms. OK, now let's talk about Windows workloads. Linux and Windows containers are very similar. For example, you can run a Windows container in both on-prem and cloud environments, which will allow you to create an agile development environment in your enterprise locally, or deploy your application at scale in the cloud. Windows containers can be lightweight, which can help you minimize the attack surface by removing unnecessary libraries from your production environment. And like Linux, they offer process isolation, which can efficiently divide your hardware resources and save a lot of costs for you and your company. And if you're wondering why there are stars on the slide, well, now we're getting to it. Since the Linux and Windows operating systems are different on a fundamental level, some of the capabilities that we take for granted in a Linux environment can be a bit more complicated for Windows containers to achieve. 
For example, Windows-based images range from a full implementation of Windows APIs and services to a minimal version with a small footprint. This is an important fact to consider, since your cloud bills are directly affected by the amount of storage that you request from the cloud provider. On top of that, a container based on a huge image might take some time before it is fully downloaded and extracted in your Windows container runtime environment, which can delay the initial start of your workloads. Another thing to keep in mind when working with Windows containers is kernel compatibility. Windows containers are highly dependent on the host kernel, so in the container build process we have to be very cautious about choosing a base image that matches the underlying host, or the whole thing doesn't work and you will see a lot of errors. Windows offers three methods of isolation: the Hyper-V method, process isolation, and the new one, which is the HostProcess method. Kubernetes supports process isolation. In this mode, processes run concurrently on a host in different isolated namespaces. This is very similar to how Linux establishes isolation, if you're familiar with that concept. HostProcess is similar to process isolation, but containers run directly on the host and can be created in the host's network namespace instead of their own. OK, now let's talk about HostProcess containers. Running a HostProcess container in Kubernetes is pretty easy. All you need to do is create a Windows container and add the required fields to the security context. In terms of pros and cons, a pro is that since these containers run with direct access to the host, there will be no compatibility issues. And similar to Linux privileged containers, they can access and modify the networking, file system, and so on directly on the host, which makes HostProcess perfect for installing and configuring the required components at the host level. 
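To make that concrete, here is a minimal sketch of what such a pod spec can look like; the pod name, image, and script are placeholders, but the `hostProcess`, `runAsUserName`, and `hostNetwork` fields are the security-context settings being described:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostprocess-example            # hypothetical name
spec:
  securityContext:
    windowsOptions:
      hostProcess: true                # run directly on the host, in a job in the root silo
      runAsUserName: "NT AUTHORITY\\SYSTEM"
  hostNetwork: true                    # HostProcess pods must also use the host network
  containers:
    - name: installer
      image: example.registry/installer:1809   # placeholder image
      command: ["powershell.exe", "-file", "install.ps1"]
  nodeSelector:
    kubernetes.io/os: windows
```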
In terms of cons, since these containers run at the host level, there will be limited restrictions, and they can access and modify the networking, file system, and so on at the host level. So please be careful. Underneath, Windows privileged containers are implemented with job objects. Just to make a note, job objects are different from Kubernetes Jobs. These are internal Windows objects that live inside the Windows kernel, which is a break from the previous container model of using server silos. A silo is similar to a namespace: an isolated place into which you can put your containers. A job object is a kernel object that can be used to manage and group processes on a Windows system. It provides a way to limit the CPU, memory, and other resources that a group of processes can use, similar to cgroups but somewhat different, as well as control their process lifecycle. Silos are an extension of job objects. Their primary goal is to encapsulate as much of the Windows user mode as is required for an application or for a job. There are two types of silos: application silos and server silos. From here on, whenever I mention silos, please think of server silos. All right, so now that we know the underlying technologies, let's put them on the board before we get to the demo part. First of all, container management components are part of the root silo in Windows. The Host Networking Service, or HNS, and the Host Compute Service, or HCS, are the two components that we will take a look at. The Host Compute Service API provides the functionality to start and control both VMs and containers in Windows. After that, HNS is used to prepare the networking requirements. Keep in mind that in some networking cases, both HNS and HCS will need to work together in order to provide the functionality. 
Containerd uses something called hcsshim, which is a Go library, to communicate with HCS, and then HCS will invoke the CExecSvc service inside the container silo, which is an isolated part similar to namespaces. And for the networking part, containerd communicates with HNS via your CNI plugin. This usually happens when you create a pod in Kubernetes: the kubelet orders containerd to create a container inside a container silo, which is fully isolated from the root silo. Now, with host processes, if you decide to create a HostProcess container, then a job inside the root silo will run your container, allowing it to access your host resources. All right. I don't know about you, but this is more concepts than I can handle before my morning coffee. Are there any questions before we get to the demo? Not so far, but if anyone has any, just send them in the chat. Awesome. Thank you. So let's switch gears and try the demo. By the way, you can use this QR code to run the demo on your own system. And hopefully, I can find the demo as well. So the demo part is an interactive workshop. I'm going to share the URL in the chat window so it can be sent on to the audience. All right. But first, let me find the link. Yeah, that's the first step, which is the most difficult part. All right. Awesome. So while we're waiting for the demo to start, let's talk about what will happen. In this demo, you will get a chance to create a hybrid cluster. There are two nodes, a Linux node and a Windows node. You will read about what you can do in order to join these nodes together. After that, there will be an introduction to the HostProcess installer. You will install the CNI on your Linux and Windows nodes. Then you will use policies to secure your application. And yes, there is a Windows application that you will deploy into your cluster in order to get a better understanding of how these things work. 
If we get to the point of deploying the application, I will show you the CExecSvc service, which runs in a container silo. If you're doing this inside a local environment, you can also use a Sysinternals application called WinObj, which you can download directly from Microsoft's website. This will actually give you an insight into the silos. What you can do is open up your WinObj program and go to the slash directory, the root directory. In there, you can find Globals, and under Globals you will find each one of the silos that are created on your Windows node. It seems like 50 seconds are still remaining. It always takes a bit longer when you're live with the demo. Yes, that is true. And I was thinking about starting it beforehand, but somehow I forgot. So joke's on me, I guess. It happens. Oh, one more thing worth mentioning here. This is the most important resource in order to understand Kubernetes and its way of interacting with Windows. In this link, you will find host processes, why they were created, what the motivation is, how they interact with everything, and what the next step is in terms of the cloud native journey. All right, so let's start it. Now, as I said, there are two nodes here: one Windows node and one Linux node. If we do a kubectl get nodes, there is only one node at the moment in our Kubernetes cluster. So what we need to do is use a kubeadm join command. As you can see, the join command is already stored on the Windows node. All you need to do is run the join command, and it will hopefully add the node to the cluster. All right, so everything works. But if you notice, our node one, which is the Linux node, is ready, but our Windows one is not ready. And this doesn't change no matter how much I refresh it. This is because our Windows node doesn't have a CNI at the moment. Now, to fix this, we need to go to the next module, which will take some seconds. 
All right, so in the next module, you again have both nodes. Let's start with the Linux node. First of all, this workshop is using Calico, and Calico requires stricter rules when you are using Windows. This is because Windows nodes have some limitations in announcing their IP addresses. Well, we can say limitations; it is stricter in terms of what you can access on a Windows node, which creates a problem when you want to borrow IP addresses. So what we need to do, first of all, is disable the IP address borrowing mechanism of Calico. Then we need to hopefully read all the stuff that is written here. And after that, we can actually install Calico with the manifest that is available here. Now, if you remember from the presentation, I talked about some security context fields; these can be found here. This manifest uses the HostProcess installer, and it presents itself as a host process to Kubernetes. Then it uses the username, or the identity, of NT AUTHORITY\SYSTEM to run a PowerShell script inside the root silo. This allows the container to transfer its content onto the host system. Now, there is another recording in the CNCF channel that you can find, which is, again, me talking about securing Windows workloads. However, that was from before we had the HostProcess installer technology available to us. So you can go and watch that and see how we needed to copy all the binaries from one place to another in order to get it to work. Now, after this is done, hopefully you will be able to see the rollout being done. And next, you can use kube-proxy for Windows. Again, same concept. Oh, this is the Linux node, so we run the kube-proxy step from here. So again, for kube-proxy, it is the same concept: it uses the security context and Windows options to present this deployment as a host process. After these two steps are done, you should be able to see your cluster, and both nodes will be ready. 
Next, you will be prompted to actually deploy a Windows workload and secure it. We do have time, right? Yeah, there's still a good amount of time. Great. So in this part, you get to create a namespace and so on. It talks about compatibility. If you remember, I talked about Windows kernel compatibility. Here, my Windows node is using version 1809, so the images that I want to deploy on this system need to use the 1809 kernel. If you'd like to know more about it, there is a lot of information here, but that is not important at the moment. What we need to do is deploy the workload. Now, if you look closely at the workload, you can see this image is actually tagged as being built with the 1809 kernel. And there is also a node selector, which ensures that this deployment will only happen on our Windows node. So if I come here and do a get, you will see the win-web container running. Now, what I need to do afterwards is create a service, because my pod exposes a web port, which I can then access by using the Web UI tab. Now, all I need to do is wait for the internet to act the way it's supposed to. This is going to be fun. All right, internet. All right, for some reason, this container thinks it doesn't have internet. And what we can do with containers that are adamant about being wrong is delete them. We have to wait for the container. I have no idea why this is not connecting to the internet, but as I said, a live demo usually doesn't work. Anyway, you can run it in your own browser, and hopefully it will work for you. It talks about how you can actually secure this workload and what needs to happen. At the end, there are social handles and places where you can come and shout if something like this doesn't work, so that somebody like me will go and fix it. So that was the demo part. There's a comment from the audience: could a network policy be restricting internet access? 
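The workload described above can be sketched roughly like this; the names and registry are placeholders, but the kernel-matched image tag and the OS node selector are the two details discussed:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: win-web                # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: win-web
  template:
    metadata:
      labels:
        app: win-web
    spec:
      nodeSelector:
        kubernetes.io/os: windows   # only schedule onto Windows nodes
      containers:
        - name: web
          image: example.registry/win-web:1809  # placeholder; tag matches the host's 1809 kernel
          ports:
            - containerPort: 80     # the web port later exposed by a Service
```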
Yes, you can use network policies to restrict internet access. In fact, if you go to the Project Calico documentation, which is docs.tigera.io, there is a page talking about this same thing. It's called default deny, and it allows you to write a policy to restrict networking access. I think they might have also suggested that as a possible issue with the demo. I don't think so, because we didn't install it; or did I? I don't remember. I'll have to go check the video. Anyway, please check out my GitHub repository link; it's at the top of this slide. I usually post my findings there. And don't be shy to contact me if something goes wrong, like the demo. I'm reachable on the Calico Users Slack and these social places. Oh, and this is the QR code for the previous installation method, if you fancy a journey into Windows and copying everything by yourself; feel free to watch it. As promised, these are the resources that I used to prepare for this presentation and act like I know this sort of stuff. If you'd like to explore Windows containers, I highly recommend checking these places out. And that's it for this webinar. Great. If anyone has any questions, now is the time to send them in so that we can get them answered. So far, no questions from the audience, but there was a thank you for all the links, because I think people are very happy to receive those. And people were also saying that with demos, these things happen all the time. But I like the fact that if something goes wrong for the audience, they know that they can contact you. While we see if anyone's going to write in a question, I have a question myself: can you give us any kind of information or sneak peek on the future plans for Calico or the project in general? I've heard that there's going to be eBPF on Windows, but no timelines. I'm assuming this will be a huge thing, because we already offer eBPF on Linux, which gave a very good performance boost. 
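One way to express that default-deny idea is with a standard Kubernetes NetworkPolicy, which Calico enforces (this is a sketch; the namespace name is a placeholder). With an empty pod selector and no allow rules, all ingress and egress for pods in the namespace is denied until further policies allow specific traffic:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: demo          # hypothetical namespace
spec:
  podSelector: {}          # empty selector matches every pod in the namespace
  policyTypes:
    - Ingress              # with no ingress rules listed, all inbound traffic is denied
    - Egress               # with no egress rules listed, all outbound traffic is denied
```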
And I'm assuming that if it happens in the Windows environment, then people will have more reasons to actually use a Windows container environment. Yeah, sounds really good. And then Paul says thank you so much, this was very helpful, so that's very nice to hear. And since we are not getting any immediate audience questions, I think everything was very clear, which is always nice to see. Do you have any final things that you want to mention to the audience, or anything else that you want to highlight? The only thing that I would like to mention is docs.tigera.io. Everything that you will need in order to secure your cluster, or deploy a cluster with networking and security, is there. Great. And then Paul wants to know, is Calico a kind of open source technology? It's an open source project. It uses a lot of open source technologies to deliver security and networking. Great. And then Emmanuel asks, will you be at the Tigera booth at KubeCon? I will not be. I would like to be in Amsterdam, but unfortunately, I will not be. But my colleagues will be there, who know more than me, and their demos always work. So check out our booth. I think we're at booth S28, if I'm not mistaken. Nice. Everyone can find great people there then. Good. Perfect. S28. The audience is confirming: it is S28. Perfect. There are thank-yous from the audience as well, and "fantastic" and so forth. So great. I guess that's it as far as the questions go, so we can start wrapping up. It's been really nice, and great to see the audience interacting as well; that's always great. But as always, thank you everyone for joining the latest episode of Cloud Native Live today. It was great to have a session about exploring Kubernetes' Windows HostProcess installer. And I really loved the audience interaction and the questions from the audience. And as always, we bring you the latest cloud native code every Wednesday. 
In the coming weeks, we have more great sessions coming up, but next week we will not have a Cloud Native Live, since it's KubeCon week and everyone is joining there, obviously. But as always, thank you for joining us today, and we'll see you in the coming weeks.