Hello. Thanks for taking the time to watch the demo. In the following, I'm going to run some out-of-the-box benchmarks against several ingress controllers. First, a little about me. My name is Daniel Corbett, and I'm the Director of Product at HAProxy Technologies. I'm a security nerd and a cheese enthusiast.

Benchmarking is a science in and of itself, and many users do not have the time required to perform head-to-head tests across the wide range of ingress controllers that are available. Understanding how your ingress controller and proxy perform is important, as poor performance can cause you to use excessive resources, which ultimately translates to lost money, especially in a cloud-native world. It can also lead to a suboptimal experience for your end clients. However, when it comes to tuning the various pieces of the infrastructure, many users get lost in the abyss of information online, or don't even think it's something they need to do. This means that many users end up running out-of-the-box configurations. This demo is going to focus explicitly on those out-of-the-box configurations to find out which ingress controller performs best in a fresh setup. I've included a link to the full architecture setup guide and the code to reproduce these benchmarks in the HAProxy Technologies GitHub, located at the URL at the top of this slide.

For this demo, I've configured a six-node Kubernetes cluster running on AWS c5.xlarge instances. One of the nodes has been dedicated specifically to the ingress controllers; the other five are where we will run our traffic injectors. I have five pods running a standard echo application, which the ingress controllers will monitor for changes and route traffic to. In these tests, I'm testing Envoy with Contour as the ingress controller, HAProxy, two different Nginx ingress controllers, and Traefik. For each of these, I will be running the latest versions at the time of this benchmark, which are shown in the table.
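As a rough sketch of the backend side of this setup, the echo application could be deployed with a manifest along these lines. This is an assumption for illustration only: the image name, labels, and the "role=ingress" node label are hypothetical, and the real manifests live in the HAProxy Technologies GitHub repository linked above.

```yaml
# Hypothetical echo backend: 5 replicas that every ingress controller routes to.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
spec:
  replicas: 5
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      # Keep the app pods off the node reserved for the ingress controllers
      # (assumes that node carries a "role=ingress" label).
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: role
                operator: NotIn
                values: ["ingress"]
      containers:
      - name: echo
        image: jmalloc/echo-server   # assumed echo image
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: echo
spec:
  selector:
    app: echo
  ports:
  - port: 80
    targetPort: 8080
```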
You may be wondering why I have two different Nginx ingress controllers listed. This is because they are two completely different open-source projects. One is a Kubernetes community-driven project, and the other comes directly from Nginx Inc. The biggest difference is that the Kubernetes community-driven project has a very complex configuration and harnesses Lua for many of its dynamic capabilities, while the Nginx Inc. one is a bit more slimmed down and does not use Lua. From here on out, I'll refer to the Kubernetes-driven project as Nginx and the other as Nginx Inc.

For the actual tests, I will use the load-testing tool hey. I will run two different tests. In the first test, I will maintain 250 concurrent workers from one traffic-injecting pod. The second test will run 50 concurrent workers from each of five traffic-injecting pods. Each test will run for a total of 360 seconds. During this time, I will scale the pods up to 7 and back down to 5, with 30-second intervals in between scale operations. I will do this three times in a row, and then I will make several changes to the ingress controllers themselves, such as adding and removing CORS headers and adding and removing path-rewrite rules, sleeping for 30 seconds between each change. At the end of the benchmarking, we will graph the average requests per second; the 75th, 95th, and 99th latency percentiles; and user-level CPU usage. We'll also graph the number of HTTP error codes we receive.

Let's get the benchmark started. I'll be running a shell script that handles running the benchmark as well as collecting and graphing the data. During the tests, I'll also keep a Grafana dashboard open showing the activity on each of the servers in the graphed data. If we look at the first graph, we can see that HAProxy came out on top, averaging approximately 26,000 requests per second, with Envoy in second place at just under 16,000. We can see that Nginx came in last with around 10,000 requests per second.
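The test loop described above could be sketched roughly like this. It is a dry-run sketch, not the real driver script (which lives in the HAProxy Technologies GitHub repo); the deployment name, manifest filenames, and target URL are assumptions. By default it only prints the commands it would run.

```shell
#!/bin/sh
# Dry-run sketch of the benchmark driver. DRY=1 only prints commands;
# set DRY=0 to actually execute them against a real cluster.
DRY=${DRY:-1}
run() { if [ "$DRY" = "1" ]; then echo "+ $*"; else "$@"; fi; }

TARGET="http://echo.example.local/"  # assumed ingress hostname
DURATION=360                         # each test runs for 360 seconds

# Test 1: 250 concurrent workers from one injector pod.
# (Test 2 instead runs 50 workers from each of five pods.)
run hey -z "${DURATION}s" -c 250 "$TARGET" &

# While traffic flows, scale the echo pods 5 -> 7 -> 5 three times,
# sleeping 30 seconds between scale operations.
for i in 1 2 3; do
  run kubectl scale deployment echo --replicas=7
  run sleep 30
  run kubectl scale deployment echo --replicas=5
  run sleep 30
done

# Then mutate the ingress: add and remove CORS headers and a path-rewrite
# rule, sleeping 30 seconds after each change. Manifest names are made up.
for manifest in ingress-cors.yaml ingress-plain.yaml ingress-rewrite.yaml ingress-plain.yaml; do
  run kubectl apply -f "$manifest"
  run sleep 30
done

wait  # let hey finish and print its latency summary
```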
If we look at the latency percentiles, we can see that HAProxy had the lowest 75th, 95th, and 99th percentiles, with Envoy and Traefik neck and neck for second place. Nginx Inc. came in at a close third, and Nginx came in last with a massive spike in the 99th-percentile metrics. Looking over at user-space CPU usage, we can see that HAProxy maxed out at roughly 40% CPU. Nginx Inc. came in a close second at 47%, and Nginx came in third with 63%. Both Envoy and Traefik came in at around 70%. We also have a graph showing us how many HTTP errors were produced by each proxy. It appears that neither HAProxy nor Nginx Inc. produced any HTTP errors, but Traefik produced a little over 300 HTTP 502 errors, Nginx produced 28 HTTP 502 errors, and Envoy produced 23 HTTP 503 errors.

Now, let's go ahead and start our second test. This time, we'll run 50 concurrent workers from five different traffic-injecting pods and let it run for another 360 seconds. The second test has completed, so let's take a look at the results. This time we can see the average request rate for HAProxy is about 37,000 requests per second, which is about 10,000 higher than in the last test. Envoy is again second, at roughly 17,000 requests per second, and Nginx Inc. came in third with around 15,000 requests per second. Looking at the latency percentile graphs, we see that HAProxy is again the lowest across the board for the 75th, 95th, and 99th percentiles. Envoy comes in second, and Nginx Inc. and Traefik are neck and neck for third. However, you'll note there are some really drastic spikes in the 95th and 99th percentiles for Nginx. In the user-space CPU usage graphs, we see that Nginx Inc. and HAProxy are neck and neck at just under 50% CPU. Envoy came in at around 72%, and Traefik came in at 77%. What's important to note is the significant difference in HAProxy's request rate compared to the others, and yet it was able to maintain consistently low CPU usage. Ultimately, resources cost money.
So if you have an ingress controller and proxy that can handle more than double the throughput of the closest alternative, and do so at 30 to 40% CPU, you're definitely going to save money. Looking down at the error rate, we can see that neither HAProxy nor Nginx Inc. produced any HTTP errors. Traefik produced approximately 630 HTTP 502 errors, Nginx produced 27 HTTP 502 errors and 49 HTTP 504 errors, and Envoy produced 14 HTTP 503 errors. That concludes the demo. HAProxy has a reputation for being the fastest and most widely used software load balancer in the world for a reason. Give it a try today, and you'll see that its out-of-the-box performance will supercharge your Kubernetes environment. Thanks for watching.