Hello, thanks for taking the time to watch the demo. In the following, I'm going to run some out-of-the-box benchmarks against several Ingress controllers. First, a little about me: my name is Daniel Corbett, and I am the Director of Product at HAProxy Technologies.

Benchmarking is a science in and of itself, but many users do not have the time required to perform head-to-head tests against the wide range of Ingress controllers that are available. Understanding how your Ingress controller and proxy perform is important, as poor performance can cause you to use excessive resources, which ultimately translates to lost money, especially in a cloud-native world. It can also lead to a suboptimal experience for your end clients. However, when it comes to tuning the various pieces of the infrastructure, many users get lost in the abyss of information online, or don't even realize it's something they need to do. This means that many users end up running out-of-the-box configurations. This demo is going to focus explicitly on those out-of-the-box configurations to find out which Ingress controller performs best in a fresh setup. I've included a link to the full architecture setup guide and the code to reproduce these benchmarks on the HAProxy Technologies GitHub, located at the URL at the top of this slide.

For this demo, I've configured a six-node Kubernetes cluster running on c5.xlarge instances. One of the nodes has been dedicated specifically to the Ingress controllers; the other five are where we will run our traffic injectors. I have five pods running a standard echo application, which the Ingress controllers will monitor for changes and route traffic to. In these tests, I'm testing Envoy via the Contour Ingress controller, the HAProxy Ingress controller, two different NGINX Ingress controllers, and Traefik. For each of these, I will be running the latest version at the time of this benchmark, as shown in the table.
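The backend described above — five pods of an echo application exposed through an Ingress — can be sketched as a single manifest. This is an illustrative sketch, not the demo's actual repo: the object names (echo, echo-ingress) and the echoserver image are assumptions. The script just prints the manifest as a dry run; piping it to `kubectl apply -f -` would create the objects.

```shell
#!/bin/sh
# Dry-run sketch: emit a Deployment (5 echo pods), a Service, and an Ingress.
# Names and image are hypothetical; the real setup is in the linked repo.
MANIFEST=$(cat <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
spec:
  replicas: 5            # five echo pods, as in the demo
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: k8s.gcr.io/echoserver:1.4
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: echo
spec:
  selector:
    app: echo
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: echo
            port:
              number: 80
EOF
)
printf '%s\n' "$MANIFEST"
```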
You may be wondering why I have two different NGINX Ingress controllers listed. That's because there are two completely different open-source projects: one is a Kubernetes community-driven project, and the other comes directly from NGINX Inc. The biggest difference is that the Kubernetes community-driven one has a very complex configuration and harnesses Lua for many of its dynamic capabilities, while the NGINX Inc. one is a bit more slimmed down and does not use Lua. From here on out, I'll refer to the Kubernetes community-driven project as NGINX, and the other as NGINX Inc.

For the actual tests, I will use the tool hey. During the test, I will maintain 50 concurrent workers from five traffic-injecting pods. Each Ingress controller will be benchmarked for a total of six minutes. Don't worry, I'll do some post-processing magic to skip through the boring parts. During this time, I will also scale the pods up to seven and back down to five, with 30-second intervals in between scale-outs. This will happen approximately three times, and then I will make several changes to the Ingress controllers themselves, such as adding and removing CORS headers and adding and removing path-rewrite rules, issuing a 30-second sleep in between each change. At the end of the benchmarking, we will graph the average requests per second, the 75th, 95th, and 99th latency percentiles, and user-level CPU usage. We'll also graph the number of HTTP error codes that we receive.

Let's get the benchmark started. I'll be running a shell script that handles running the benchmark as well as collecting and graphing the data. Let's go ahead and start the benchmark script. The test is completed; let's examine the graphed data. If we look at the first chart, we can see that HAProxy came out on top, averaging approximately 46,000 requests per second, with Envoy in second place at around 18,000. We can see that Traefik came in last at around 12,000 requests per second.
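The load-generation and scaling steps described above could be driven by a loop along these lines. This is a hedged sketch, not the demo's actual script (which lives in the linked repo): the deployment name `echo`, the `URL`, and the `DRY_RUN` switch are assumptions for illustration. In dry-run mode the commands are only recorded and printed, so the flow can be inspected without a cluster; in a real run, `hey` would be launched in the background while the scaling loop proceeds.

```shell
#!/bin/sh
# Sketch of one benchmark cycle: hey with 50 concurrent workers for six
# minutes, while the echo deployment is scaled 5 -> 7 -> 5 three times with
# 30-second pauses. DRY_RUN=1 (the default here) only records the commands.
DRY_RUN=${DRY_RUN:-1}
URL=${URL:-http://ingress.example.com/}
PLAN=""

run() {
  if [ "$DRY_RUN" = 1 ]; then
    PLAN="${PLAN}$*
"
    printf '+ %s\n' "$*"
  else
    "$@"
  fi
}

# Load generator (in the real script this runs in the background with &,
# once per traffic-injecting pod).
run hey -z 6m -c 50 "$URL"

# Scale up to seven replicas and back down to five, three times,
# sleeping 30 seconds between scale-outs.
for i in 1 2 3; do
  run kubectl scale deployment echo --replicas=7
  run sleep 30
  run kubectl scale deployment echo --replicas=5
  run sleep 30
done
```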
If we look at the latency percentiles, we can see that HAProxy had the lowest 75th, 95th, and 99th percentiles. In fact, its 99th percentile was half that of Envoy, which came in second. Traefik and NGINX Inc. were neck and neck for a close third, and we can see that NGINX came in last, with a massive spike in the 99th-percentile metric. Looking over at user-space CPU usage, we can see that HAProxy and NGINX Inc. were neck and neck, at 48% and 46% CPU usage respectively. Both Envoy and Traefik came in around the mid-70% range.

Now, let's look at how many HTTP errors were produced by each proxy. It appears that neither HAProxy nor NGINX Inc. produced any HTTP errors, but Traefik produced a little over 800 502 errors, NGINX produced 50 502s and eight 504s, and Envoy produced 21 503 errors.

That concludes our demo. HAProxy has a reputation as the world's fastest and most widely used software load balancer for a reason. Give it a try today, and you'll see that its out-of-the-box configuration will supercharge your Kubernetes environment.
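As an aside, the percentile figures graphed in this demo can be reproduced from raw per-request latencies with a short nearest-rank computation. This is a minimal sketch under stated assumptions: the file path and the ten sample values are made up, and the demo's own post-processing script may aggregate differently.

```shell
#!/bin/sh
# Compute 75th/95th/99th latency percentiles (nearest-rank method) from a
# file with one latency value per line, in ms. Sample data is illustrative.
cat > /tmp/latencies.txt <<'EOF'
12
8
15
9
40
11
10
95
13
14
EOF

RESULT=$(sort -n /tmp/latencies.txt | awk '
  { v[NR] = $1 }
  END {
    printf "p75=%s p95=%s p99=%s\n", v[rank(75, NR)], v[rank(95, NR)], v[rank(99, NR)]
  }
  # nearest-rank percentile: value at index ceil(p/100 * n)
  function rank(p, n,  r) {
    r = int((p / 100) * n + 0.999999)
    if (r < 1) r = 1
    if (r > n) r = n
    return r
  }')
echo "$RESULT"   # prints: p75=15 p95=95 p99=95
```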