Hello everyone. Welcome to Cloud Native Live, where we dive into the code behind cloud native. I'm Annie Talastro, I'm a CNCF ambassador as well as a senior product marketing manager at Camunda, and I will be your host tonight. Every week we bring a new set of presenters to showcase how to work with cloud native technologies. They will build things, they will break things, and they will answer all of your questions. So join us every Wednesday to watch live, as you may be doing now. This week we have Reza here with us to talk about multi-architecture Kubernetes clusters. As always, this is an official live stream of the CNCF, and as such it is subject to the CNCF code of conduct. So please do not add anything to the chat or questions that would be in violation of that code of conduct. Basically, please be respectful of all of your fellow participants and presenters. With that, I'll hand it over to Reza to kick off today's presentation. Hello, and thank you for the warm welcome. Let me just share my screen without disabling my webcam. So today I'm going to talk about multi-architecture Kubernetes clusters. As Annie was kind enough to introduce me, my name is Reza. I'm a developer advocate at Tigera, the company behind the open source project Calico. A little bit about me: I like staring at binary files until I can spot decimal numbers in them. I'm always eager to learn new stuff and open to suggestions, so let's connect and exchange ideas. By the end of this talk you'll be able to convert your x86 containers to run in an ARM environment and set up a Kubernetes cluster with multiple CPU architectures that can run those converted workloads. This will help you spend less money on cloud providers while gaining better performance. This talk is divided into six sections. First, I'm going to talk about Project Calico and give you an overview of what it is and what we do.
Then I'm going to explain what I mean by a multi-architecture cluster. There will be some promising benchmarks of Redis and NGINX that I ran beforehand. Then I'll demo how to set up a multi-architecture cluster using Amazon EKS. After that, we will check out how easy it is to prepare your applications for an ARM environment, and finally, if everything goes right, I'll demo the migration process. If you have any questions, please type them in the chat; I'll try my best to answer them at the end of each section or at the end of the presentation. If I don't know the answer to your question, which in my experience will happen a lot, please remind me in the Project Calico Slack and I'll connect you with people who know way more than me and can help you. So, an overview of Calico. What is Project Calico? Project Calico is the community behind a pure layer 3 approach to virtual networking for highly scalable data centers; by layer 3 I mean IP and routing. Calico is an open source networking and network security solution for containers, virtual machines, and native host-based workloads. It is important to note that Calico is not just a CNI. In fact, Calico supports a broad range of platforms, including OpenShift, Mirantis, OpenStack, and bare-metal services. Project Calico is an active community around cloud networking and security, so feel free to join us using these social networking handles, drive the conversation where you see a need for change, or seek help on your Calico journey from developers who are actively working on the project. All right, let's get started. What is a multi-architecture cluster? Before we can talk about that, let's talk about the benefits and why we should care. In every project there is a variety of workloads, and these workloads can run more efficiently on different processors, which results in cost savings and a performance boost.
As an example, if you're using an in-memory database like memcached or Redis, you will achieve better performance and pay less money using an ARM architecture rather than x86. Now that we know the benefits, and hopefully saving money is intriguing for you, let's talk about what I mean by a multi-architecture cluster. When the participating nodes in a cluster have different CPU architectures, we have a multi-architecture cluster. Usually when we create a Kubernetes cluster, we use an Intel or AMD CPU based on the x86, or AMD64, architecture. We can verify this in the first picture: both nodes are based on the AMD64 architecture. In a multi-architecture cluster, we use nodes that have different processors, allowing us to divide the workloads based on their processing needs. If you look at the second picture, where I'm drawing a circle, an ARM64 node is participating in this cluster. By the way, AMD64 refers to the 64-bit nature of the processor; it's not a brand. This is a historical thing: in the race to 64-bit, AMD was the first to achieve it, and we still refer to the architecture as AMD64. Now, what is ARM? ARM is a family of processors built on the RISC architecture. RISC refers to processors that use a small set of highly optimized instructions to do a task very quickly. Fun fact: ARM is a chip company that doesn't mass-produce chips. Their business model is to license other companies, allowing them to design their own custom-built processors using ARM's patented technology. What is the difference between ARM and x86? The main difference can be traced back to the way these CPUs execute instructions. x86 tries to achieve more calculation by running one instruction that fires a chain of other instructions, a more dynamic approach to computing. This is because when x86 was being developed, memory was quite slow and expensive.
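As a side note on dividing workloads by processing needs: Kubernetes labels every node with its CPU architecture, so the well-known `kubernetes.io/arch` label can pin a workload to the right kind of node. A minimal sketch (all names here are placeholders; `redis:6` on Docker Hub is published as a multi-arch image, so the same manifest works once the nodeSelector matches):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis                       # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels: {app: redis}
  template:
    metadata:
      labels: {app: redis}
    spec:
      nodeSelector:
        kubernetes.io/arch: arm64   # schedule only onto ARM64 nodes
      containers:
        - name: redis
          image: redis:6            # must be a multi-arch (or arm64) image
```

Dropping the `nodeSelector` lets the scheduler place the pod on either architecture, which is exactly when the image itself has to be multi-arch.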
ARM, on the other hand, uses a simple instruction to do one thing and one thing only, making it more efficient in some scenarios and less power hungry. Now, I'm not going to pretend that I'm an expert in these two architectures in any way, but if you're interested to know more, there's a link at the end of this presentation that will assist you in your computer architecture journey. So, who uses ARM? Wherever power efficiency is needed, ARM shines. Smartphones are a great example of this: most smartphones and tablets use customized CPUs based on ARM designs. Many laptop manufacturers are also migrating to the ARM architecture because of its power efficiency; Apple's newest line of processors, the M1 family, is arguably the best-known example at this time. Another area where ARM processors are used is supercomputers. Fugaku, the world's fastest supercomputer at the moment, is powered by ARM processors. While all these areas are interesting on their own, I'm only going to talk about ARM-based processors in the cloud. So, what about the cloud? Amazon launched their custom-designed processor, called Graviton, back in 2018, allowing users to choose the ARM64 architecture in the cloud using the A1 general-purpose EC2 instances. In 2019, Amazon introduced Graviton2 and upgraded their last-gen CPUs, providing a variety of instances, called the M6g family, with a better price-to-performance ratio. So what does this have to do with cost? This is an estimate generated by the AWS estimation tool. As you can see, an m6g ARM instance, at the top of the picture, with the same amount of resources is 20% cheaper than an m5 x86 instance, at the bottom. That is the money incentive I was talking about, but let's check out some benchmarks to verify the performance claims. All right, so these benchmarks were created using the m6g ARM instance and m5 x86 instance introduced in the previous slide.
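To make the pricing claim concrete before the benchmarks, here is a quick back-of-the-envelope check. The hourly rates are assumed example numbers (roughly the 2021-era us-east-1 on-demand prices; check the AWS pricing calculator for current figures):

```shell
# Assumed example prices in $/hour (verify against the AWS pricing calculator)
m5_large=0.096    # x86 (Intel) m5.large
m6g_large=0.077   # ARM (Graviton2) m6g.large

# Percentage saved per hour by choosing the Graviton2 instance
awk -v x86="$m5_large" -v arm="$m6g_large" \
  'BEGIN { printf "m6g.large is %.0f%% cheaper than m5.large\n", (x86-arm)/x86*100 }'
```

With these example rates the script prints a saving of about 20%, in line with the estimate on the slide.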
It is worth mentioning that except for the CPU and the price, everything else is identical in these instances. Among the operations Redis can do, GET and SET are two examples of the ARM64 performance boost that we can refer to. As you can see in the left picture, in the same amount of time, Redis on an ARM instance can run more than 113,000 SET operations while the x86 instance can only do around 6,600. The same is true for GET: as shown in the right picture, ARM64 can run more operations than x86 in the same amount of time. In fact, the Redis performance gain is not limited to GET and SET. All Redis operations gain a considerable performance boost when we use an ARM64 instance in comparison to an x86 instance, as shown in this slide. All right, at this point I feel like I owe you a brief description of what an in-memory database is. Traditionally, databases use storage to store data. Upon request, they transfer that portion of information into RAM, and after a query is done they might leave it in RAM as a sort of cache to speed up the next lookup. In-memory databases, on the other hand, use RAM to store the data. When a query is issued, it is retrieved directly from RAM, resulting in an eye-catching performance boost. In-memory databases can use storage as well, but it is mostly used to create snapshots as a form of backup, since information in RAM is not persistent and can be purged by a power outage. All right, so the next project. As described on the NGINX website, NGINX is open source software for web serving, reverse proxying, media streaming, you name it. NGINX is quite popular these days. As a matter of fact, Netcraft, a website dedicated to analyzing many aspects of the internet, including the market share of web servers, shows NGINX as the market-share leader across all sites among web server vendors, as you can see in their chart for 2021.
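For reference, benchmarks of this kind are commonly driven by `redis-benchmark` (which ships with Redis) and an HTTP load generator such as `wrk`. The invocations below are illustrative shapes, not necessarily the exact ones used for these charts; the leading `echo` acts as a dry-run guard, so remove it and point the hostnames at your own instances to actually run them:

```shell
# Dry-run sketches of typical Redis and NGINX load tests.
# Remove the leading `echo` and substitute real hostnames to run them.
echo redis-benchmark -h my-redis-host -t set,get -n 100000 -q
echo wrk -t4 -c128 -d30s http://my-nginx-host/
```

Running the same command against both instance types, with everything else identical, is what makes a comparison like the slides above meaningful.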
This is my benchmark of NGINX using, again, the x86 and ARM64 nodes. Again, except for the CPU architecture and the price, everything else is identical in these instances. As shown in the picture, x86 falls short with around 6 million requests, while ARM64 is able to reach 8 million in the same timeframe. If there are no questions, we can get to the demo. Yeah, none so far, but leave all of your questions in the chat so we can get to them at the end of the presentation. And if no one else asks anything, I will have plenty of questions, so no worries. All right. So, for the demo, I had to prepare an x86 EKS cluster in advance, because it takes around 30 minutes to create one. If you're new to EKS, don't worry, I've got you covered: there's a link at the end of this presentation to a step-by-step guide that can help you through the whole journey, even creating the benchmarks and running everything from scratch. So at the moment I have only one cluster, which, as I said, I created earlier. If we check the node architectures, there is currently one AMD64 instance in my cluster. Now, since I'm using eksctl, I have to run some other eksctl commands to add an ARM64 node group to my cluster. This is a basic eksctl create nodegroup command: we're saying create a node group for this cluster, as you can see, with this name; I want one node in my node group, and the node type is m6g.large, which I talked about earlier; the large instance gives us 8 gigabytes of memory. Now, if you run this command with eksctl versions 0.65 through 0.72, you will get an error, which I'm going to show you. The error is about the manifests that eksctl is using. This is because most of the manifests are not multi-platform friendly in those releases of eksctl.
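Collected in one place, the eksctl steps look roughly like this. The cluster and node group names are placeholders, the manifest updates are ordered first so the sequence works on the affected eksctl releases, and each command is wrapped in `echo` as a dry-run guard; remove the echoes to run against a real cluster:

```shell
# Dry-run sketch of adding an arm64 node group to an existing EKS cluster.
# Remove the `echo` guards (and substitute your own names) to actually apply.
add_arm_nodegroup() {
  cluster="$1"
  # On eksctl 0.65-0.72, refresh the system manifests first so they
  # tolerate arm64 nodes; --approve applies the change.
  for sub in update-coredns update-aws-node update-kube-proxy; do
    echo eksctl utils "$sub" --cluster "$cluster" --approve
  done
  # One m6g.large (Graviton2) node in a new managed node group.
  echo eksctl create nodegroup --cluster "$cluster" \
       --name arm-nodes --nodes 1 --node-type m6g.large
}

add_arm_nodegroup multi-arch-demo
```

The dry run prints the four commands in the order they should be applied; the node group creation is the long step, since eksctl hands it off to CloudFormation.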
In order to update these manifests, you just need to run a simple command telling eksctl to update the CoreDNS, aws-node, and kube-proxy manifests. Again, if you'd like to know the exact commands that I'm running, don't worry: there is a link at the end that shows everything. All right. When you are done updating these three components, you can run the node group command to create the ARM64 node group. By the way, what this command does is add an arm64 entry to each manifest, allowing it to match the ARM64 architecture of the node that we are going to create. In eksctl, I think it's 0.65 or 0.66, when you do the kube-proxy update you will still get an error, which you can fix by getting rid of the kube-proxy DaemonSet altogether and asking eksctl to force-create the add-on. Hopefully, when you've done all of this, things will work and you can create your node group without any problems. All right, so let's get back to the presentation. Now, while eksctl is busy with AWS CloudFormation, let's talk about how to prepare applications for an ARM environment. Adopting ARM might seem complicated at first because of the huge toll that might be involved in preparing an application for an ARM environment. However, with more and more programming languages leaning toward multi-platform and cross-compilation support, this gap is getting narrower every day. Unless you have a project coded in assembly language, or heavily optimized C code for a specific architecture, there's a huge chance that converting your containers will take just one command. Speaking of converting, Linux containers are a great example. Linux can be configured with an optional kernel feature called binfmt_misc (binfmt miscellaneous) that can match the beginning of a binary file and identify which interpreter is suitable to execute it.
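You can poke at binfmt_misc on any Linux box: registered interpreters show up as files under /proc. On a machine prepared for cross-building, qemu-* entries such as qemu-aarch64 appear there, for example after registering QEMU via the `tonistiigi/binfmt` helper image (`docker run --privileged --rm tonistiigi/binfmt --install arm64`):

```shell
# List the interpreters registered with binfmt_misc, if the feature is enabled.
# When mounted, `register` and `status` are present; qemu-* entries mean
# emulation for foreign architectures is set up.
if [ -d /proc/sys/fs/binfmt_misc ]; then
  ls /proc/sys/fs/binfmt_misc
else
  echo "binfmt_misc is not mounted on this host"
fi
```

Each qemu-* file records the binary signature (the magic bytes at the start of the file) that routes a foreign executable to the matching emulator.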
This can help you compile your container for a different architecture without investing in hardware resources. It is needed because inside Linux, when you create a container, it shares the host kernel, and the host kernel is built for the host's CPU architecture, usually, though not every time. So using binfmt_misc, we can emulate the architecture that is suitable for our application. The multiarch project, whose logo I'm borrowing, uses the same concept to emulate various CPUs inside Linux containers. Now, let's check out how we can convert the Google microservices demo to run in an ARM environment. For any of you who are not familiar, the Google microservices demo, or Online Boutique, is a fictional web-based shop where you can search for items you love, put them in your basket, and check out without spending any real money. More importantly, it is a project with 11 microservices written in a variety of programming languages, practically begging to prove me wrong about the one-command conversion claim I made earlier. So let's head back to the demo. Hopefully CloudFormation is going strong. Right, so this is the microservices folder. If you'd like to download the microservices demo, it is available on GitHub; you can clone it or download it, and there are many, many documents and tutorials about how it works and what its components are. As I said, the important thing for me in this presentation is the number of languages included in this project, which makes it more or less a real-world scenario. So inside the microservices demo there is a folder called src, and these are the source files for building the containers that are the components of the demo application. So, microservices, src, and let's pick the first one, adservice. Right, so I'm using Docker, and we can see there is a Dockerfile, indicating that this container can be built with the docker build command.
So normally we should be able to build this using docker build and a dot, and we can add a tag to it as well. Oops, all right. What we can do to create this container is just run docker build normally. To build it for ARM, we can use buildx, Docker's multi-threaded build toolkit with more caching flexibility. And we can add --platform to indicate which architectures we want this container to run on; I'm going to use linux/arm64. By the way, when you run buildx, it doesn't export the image to your local image store at the end. You either need to add --load, or --push to push it into your Docker repository, given that you are logged in to Docker. So with that, we can run this command, and it gives us an error because something is wrong: it should be linux/arm64. Yeah. Right, and there's the adservice in all its glory. Now I can build this for both amd64 and arm64 and push it into my Docker repository on the internet, where hopefully there will be no errors. While this is building, I can pull up my repository; should be able to. Yep, there it is, pushed a few seconds ago. And if we check the latest tag, there is an arm64 and an amd64 image for my container. Now let's go back, and, right, this is finally done. We can go to the nodes, and I have an amd64 and an arm64 node in my cluster. Fun fact: for some reason, the EKS Bottlerocket version is mismatched between amd64 and arm64; there is one minor version difference between these two releases. So if you're using a cutting-edge feature, make sure the kernel has it. And with that, we should be able to deploy the services, which will populate the pods. This will take a second, so we'll come back to check on it later. Meanwhile, we can get back to the presentation. Hopefully there are no questions so far. So, I talked about how to run a Kubernetes cluster with multiple CPU architectures and how to migrate your containers to work on ARM.
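To recap the build step in copy-pasteable form, here is a tiny wrapper around the buildx invocation. The script name and image tag are placeholders, and it assumes a buildx builder with arm64 support (via binfmt_misc/QEMU) plus a prior `docker login`:

```shell
# Write a small helper that builds and pushes a multi-arch image.
cat > build-multiarch.sh <<'EOF'
#!/bin/sh
# Usage: ./build-multiarch.sh <repo>/<image>:<tag>   (run it in the service folder)
# buildx cross-compiles for each listed platform via QEMU/binfmt_misc.
# --push is used because a multi-platform result can't be --load'ed into the
# local image store under a single tag; it goes straight to the registry.
exec docker buildx build --platform linux/amd64,linux/arm64 -t "$1" --push .
EOF
chmod +x build-multiarch.sh
```

Running something like `./build-multiarch.sh yourrepo/adservice:v1` inside the service's folder should then publish both images under one tag, as in the demo.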
I strongly believe that you can save a lot of money and resources if you start converting some of your workloads to run on ARM. If you'd like to create the demo cluster and take it for a spin, check out my GitHub repository; everything is there, and you can see the address at the top of the slide. And don't be shy about contacting me if something goes wrong. I'm reachable at these social places, and on the Calico Users Slack if you'd like to shout at me. These are the resources that I used for this presentation. Again, as I promised, there is the entrance to the ARM rabbit hole at the end of this slide; as you can see, it's developer.arm.com. If you're a big fan of free stuff like me, make sure to grab your copy of the Kubernetes security and observability ebook from our website. It describes the key concepts, best strategies, and technology choices available to support your environment. In fact, there is a chapter dedicated to workload deployment controls that goes into more detail about container images and CI/CD, which I used as inspiration for how to create ARM containers and for some of the choices I made in the ARM64 migration. Perfect, thank you so much for the great presentation. If there are any questions from the audience, now is the perfect time to ask them, so fire away. And if I understood correctly, if anyone has a question after this, you know, if they realize later that they should have asked this or that, I think you said they can go to the Calico Slack to ask more questions. Yes, that's correct. The demo is still trying to load; those two pods are still trying. Everything always takes longer when you're in a presentation. That is true. And things always run into issues, like that stray tag earlier; I don't even know how it got in there, but all right, there it was. Yeah, it's the demo effect, it happens every single time. Yeah, that is true. And thank you to JT for your comments from earlier.
So, as I said, amazing presentation, thank you so much. Questions, everyone, anyone? We'll give it a minute and see if any roll in. I want to ask: this was a great look into the current things happening with Project Calico, so what does the future hold? Is there any kind of roadmap, what's done and what's coming next for the project? So Project Calico is at the moment working on eBPF, and eBPF is, as you know, usually used on AMD64. With this ARM64 movement happening in the cloud, we're hoping to expand the horizon for people who'd like to spend less money and still use features like eBPF in their clusters. Perfect. And thanks, Jason, again for commenting; no questions from them for now, but as mentioned before, hop over to the Slack if you have any later on. Another question from me: you showed a few content pieces there that everyone can learn more from, but if you could recommend only one, what would be the perfect next step after this presentation? What should people start with? I think the book that I pitched could be a great resource to learn about everything I talked about and much more. If you're not big on reading books, I would say the Linux kernel website is a great way to consume knowledge. It's something that I really like: whenever I have some free time, I just go to the Linux kernel website and browse and find new information. Yeah, those are the best tips. Myself, if I'm in need of learning something new, I usually just go to the CNCF YouTube and watch a presentation or something. These kinds of tips are always the best, for sure. So I think this is last call, ish, for questions. Do you have any other wrap-up words or comments for now? Oh, I almost forgot the most important part.
Thank you for giving me the opportunity to bore you with the things that are interesting to me. Of course, it was not boring at all; the interesting parts are the best. Yeah, and there's Wiz saying amazing, thank you very much. No questions from them either, but it's great that everyone had a good time in the session. And yeah, let's start wrapping it up; if anyone has questions later, just hop onto the Slack side of things to ask them. So let's wrap it up. Thank you everyone for joining the latest episode of Cloud Native Live. It was amazing to have Reza here talking about multi-architecture Kubernetes clusters. Thank you so much for the great comments and great feedback on the session as well as today's show; we really appreciate it. And tune in next week as well, when we bring you the latest cloud native code, every Wednesday. Next week we will have Jason Morgan talking about an introduction to policy with Linkerd 2.11. Thanks for joining us today, and see you next week.