Hello, thanks for joining. In this talk, I'm going to give you five reasons to rethink your default Ingress controller. First, a little about me: my name is Daniel Corbett, and I'm the Director of Product at HAProxy Technologies.

The reasons to reconsider your use of the default Ingress controller fall into five categories: performance, reloads, health checks, observability, and overload protection.

Let's start with performance. In a cloud-native world, performance matters more than it ever has: poor performance means excessive resource usage, which drives up cloud spend, and it can also translate into a suboptimal user experience. The HAProxy Kubernetes Ingress Controller was built to supercharge a Kubernetes environment. Out-of-the-box benchmarks show that it significantly outperforms the default Ingress controller, which relies heavily on Lua scripts. Not only does it serve a little over 30,000 more requests per second, it does so with approximately 20% less CPU and lower latency. Ultimately, this means that out of the box you can instantly handle more requests per second on a smaller instance type and deliver your application to your clients faster.

Here are the graphs from that benchmark. You can watch the full video in the demo theater or at the HAProxy booth, which is located in the Platinum Hall. You can also find the code to reproduce the benchmarks yourself at the GitHub link shown here.

Cloud-native environments are constantly changing: new Ingress resources are added, routing rules are added or removed, and Secrets are frequently updated. The default Ingress controller does not have a runtime API, so it requires a reload for each of these changes. During the benchmarks I just mentioned, we found that making changes that force a reload while the benchmark is running results in server-side errors being sent to the client.
The HAProxy Kubernetes Ingress Controller does have a Runtime API, which allows changes to be applied on the fly without reloading. For those changes that do require a reload, it supports hitless reloads, meaning no traffic is dropped. This gives you peace of mind that your end clients are not affected by changes within your Kubernetes environment.

Active health checking is a key component of finding application issues early and avoiding routing traffic to an unhealthy pod. The default Ingress controller does not support active health checks and relies on Kubernetes' liveness and readiness mechanisms for monitoring pod health. The HAProxy Kubernetes Ingress Controller does support active health checking, and it can be configured to use custom methods such as HEAD, GET, or OPTIONS and to send requests to a custom path on a per-Ingress basis. It also gives you the flexibility to define the interval at which health checks are performed, so that unhealthy pods are removed from load balancing sooner rather than later.

We all know that logs and metrics are key to debugging and spotting anomalies in your applications. Unfortunately, the default Ingress controller provides a minimal amount of observability data: the information available in the logs is sparse, the metrics are limited, and it is difficult even to determine which application servers it is currently routing traffic to, which requires installing a kubectl plugin just to get that info. The HAProxy Kubernetes Ingress Controller provides robust metrics and logs, and supports cloud-native logging by writing logs to stdout, but it can also route them to a syslog server. The logs contain detailed information, such as end-to-end timing data for each request and a session-state termination code, allowing you to quickly determine whether a connection completed successfully or not.
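The active health-check options I just described are configured through service annotations. Here's a minimal sketch, assuming the `haproxy.org` annotation names from the controller's documentation; the exact names and value formats may vary by controller version, and the Service itself is hypothetical:

```yaml
# Hypothetical Service; the haproxy.org/* annotations are assumptions
# based on the haproxytech kubernetes-ingress documentation.
apiVersion: v1
kind: Service
metadata:
  name: my-app                            # hypothetical app name
  annotations:
    haproxy.org/check: "true"             # enable active health checks
    haproxy.org/check-http: "/healthz"    # probe a custom HTTP path
    haproxy.org/check-interval: "5s"      # probe every five seconds
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```

With settings like these, pods that fail their checks are taken out of load balancing until they recover, independently of Kubernetes' own liveness and readiness probes.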
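The logging destination I mentioned is controlled through the controller's ConfigMap. A sketch, assuming the `syslog-server` key and value format from the controller's documentation (verify against your version):

```yaml
# Hypothetical ConfigMap for the controller; the syslog-server key and
# its value syntax are assumptions based on the haproxytech docs.
apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-kubernetes-ingress   # name depends on your install
  namespace: haproxy-controller
data:
  # Write access logs to stdout for cloud-native log collection;
  # an address of a syslog server could be used here instead.
  syslog-server: "address:stdout, format: raw, facility:daemon"
```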
Finally, it has a detailed Stats page, giving you an in-depth view of the application pods that are configured as well as the client sessions being routed. The Stats page comes enabled out of the box. Here's a screenshot.

And the last reason you should rethink your use of the default Ingress controller is overload protection. Spikes in traffic can overload pods and potentially lead to a snowball effect. The HAProxy Kubernetes Ingress Controller allows you to specify a maximum-connection setting on a per-pod basis. When the maximum connection limit is reached, new connections are placed into a queue and served as soon as a connection slot becomes available.

Thanks for joining this talk, and I hope I've given you some good reasons to rethink your use of the default Ingress controller. If you'd like to try out the HAProxy Kubernetes Ingress Controller, you can get started quickly with its Helm chart. If you have any questions, stop by the HAProxy booth located in the Platinum Hall and we'll be happy to answer them.
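As a footnote on the overload-protection point above: the per-pod connection limit is also set with an annotation. A minimal sketch, assuming the `haproxy.org/pod-maxconn` annotation name from the controller's documentation (the Service and the limit value are hypothetical):

```yaml
# Hypothetical Service; haproxy.org/pod-maxconn is assumed from the
# haproxytech kubernetes-ingress documentation.
apiVersion: v1
kind: Service
metadata:
  name: my-app                      # hypothetical app name
  annotations:
    haproxy.org/pod-maxconn: "30"   # at most 30 concurrent connections per pod
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```

Connections beyond the limit wait in a queue rather than overwhelming the pod, which is what prevents the snowball effect mentioned above.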
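And for the Helm chart mentioned at the end, getting started looks roughly like this; a sketch assuming the haproxytech chart repository, with an arbitrary release name:

```shell
# Add the HAProxy Technologies Helm repository and install the
# Kubernetes Ingress Controller chart; "kubernetes-ingress" is the
# release name and can be anything you like.
helm repo add haproxytech https://haproxytech.github.io/helm-charts
helm repo update
helm install kubernetes-ingress haproxytech/kubernetes-ingress
```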