Well, hello, and welcome everybody to another OpenShift Commons briefing this week. We're really lucky to have with us Chris Stetson from NGINX. And he's going to be talking about implementing NGINX microservice architectures with OpenShift. And I'm going to let him introduce himself. The format for this today is that we'll let him do his presentation and demo and explain how it all works. And then we'll have Q&A at the end of it. So without too much further ado, Chris, go right ahead and take it away. Okay, great. Thanks Diane. So welcome to the briefing on implementing NGINX microservice architectures with OpenShift. I'm Chris Stetson. And if my slides would advance, we might be able to see a picture of me. There we go. I'm Chris Stetson. I'm the chief architect here at NGINX. I am in charge of microservices and building out our microservices products and functionality. I will tell you that today is the day after our holiday party. So if you hear my voice cracking, you'll know why. It was very loud. There was lots of dancing. And I ended up having to shout in order to be heard in the many conversations I had. So that's the one caveat for this presentation that I want to give before we get too far into it. So just to give a little background about me. I've been a developer and architect of web applications for the past 20 years or so. I've been building large scale websites that many of you are probably familiar with. I built the first version of Sirius Satellite Radio. I built Visa.com for many years. I built large parts of Intel.com, Microsoft.com, as well as websites like Lexis.com. I've been building large monolithic applications, service-oriented architectures, and most recently I've been building out microservice-based systems. So what we're going to be talking about today is very much around what microservices mean and how Nginx can help you build out a microservice application. I'm going to talk a little bit about our history with Red Hat and OpenShift. I think it is a relevant topic because we've watched the OpenShift evolution and where it's come, and we're very excited about it. Particularly, the latest version of it is really solid and has a lot of the features that we've been looking for in a platform. So that was very exciting. Hello. Is somebody talking there? All right. I'm going to keep going. A little bit of history. I'm also going to talk about the major shift in architecture that microservices brings to the table. How you need to think about applications differently in a microservice context than you do in a monolithic context. And what kind of issues it introduces. It definitely brings a lot of benefits, but there are things that you have to tackle in terms of building out your applications. Namely, how do you deal with service discovery? How do you deal with resource management in the context of microservices? That means load balancing. And how do you build a fast and secure network architecture to allow your application to communicate with itself? Then I'm going to go into the architectures themselves. And then finally, we'll touch on some of the issues that you get with some of the architectures. So there'll be a whole discussion around all of that. And then at the end, obviously, we'll be answering your questions. So hopefully that all makes sense. Let's dive in. All right. A bit of history. Red Hat has been delivering on the microservices platform for a while.
We worked with a very early version of OpenShift when it was using proprietary cartridges. And we could see the value that, you know, that format was bringing and the kind of value that microservices delivered. And, you know, we even worked on an Nginx cartridge for the early version of OpenShift. Last year, we ported our reference architecture that we've been building here at Nginx onto OpenShift 3. And we were actually very impressed with the system and how it delivered on a lot of the management features and gave a real context to how to put together a microservice application. And we liked the fact that it was really built around Kubernetes. But more importantly, we were very impressed with the vision that it articulated and the way that even if, you know, a number of the features were kind of held back by legacy issues, we could see that it was, you know, the very beginning of the journey and really that the vision was all there. And with OpenShift 3.3, we feel like it really delivers on the vision. It's a very clean implementation of the core componentry. It's got a very robust security model, which is really nice. Particularly for enterprise customers, that's a critical feature, and being able to manage that very specifically is good. It really fills the gaps where Docker and Kubernetes still have some loose areas. And it fully exposes the Kubernetes API in ways that we were able to take advantage of in order to implement our three architectures, specifically the proxy model, the router mesh, and the fabric model. So I will go into all of those in a little bit. But let's first talk about microservices and what that means in terms of architecture. So I call this the big shift. You know, the diagram that you see in front of you is a context diagram of the classic monolithic application. In this case, it's an Uber-like app. You have all of the functional components of your application, the passenger management, the billing, the notifications, the payments, all of that running in a single VM on a single large host, communicating with all the components within that host using pointers, object references, or some other mechanism. And they all work together. Occasionally, they will reach out to other services like Twilio for notification purposes or Stripe for payment gateway. But for the most part, the entire application runs within that single host, within that single VM, and manages all of the data and interconnectivity within that system. If you compare and contrast that to a microservice version of the application, you see that all of the components have shifted out from being on that single host to running in containers, all talking to each other via RESTful APIs, with that communication happening over HTTP connections between the different services. There's a lot of benefits to this. I think it's important to reiterate the real benefits that you derive from microservices. Specifically, the boundary isolation that each of the components get: it's very clear where one bit of code stops and another bit of code starts. You also have the ability to very easily do deployments of core components of the application without having to redeploy the entire thing. So you could rev the passenger management component or the payments component without impacting the other components that are running your application. It also gives you the ability to do asymmetric scaling.
So for example, if you had a surge of passengers, you could scale up your passenger management microservice very easily without having to impact the other parts of your application that aren't being utilized. Obviously, in a monolithic application, if you had a surge of passengers, you would have to scale up the entire application, which is a much bigger and harder thing to do. But it does introduce some challenges. We'll be talking about those in a little bit. Now, I do have a deep dark secret. And that is that I used to work at Microsoft and built and used .NET applications and built some very large applications for them. Specifically, I built Microsoft's video publishing application called Showcase. It was a RESTful .NET monolith. We started out as a single monolith and decided that as it became more popular, we would shift it into a SOA-based architecture. So splitting out different components of the application and pushing them across the network and allowing them to communicate that way. And for the most part, that was pretty easy. One of the nice things about the .NET framework is that Visual Studio actually allows you to almost flip a switch and change your DLL calls to being RESTful API calls. And so we made that change. We were going through the process of refactoring the code. That took a couple of weeks, but it was not really that painful. And we were pretty surprised and happy with how things were going. And it was moving along really smoothly until we put our system onto our staging site where we had actual client data running on the... or production data running on the staging server. And suddenly, our most popular pages, pages that were hosting videos like the Microsoft Word tips and tricks videos, those pages were suddenly taking over a minute to render. In the past, they had taken four to five seconds to render. Now they were taking over a minute. And we were dumbfounded and really concerned about this and realized we could not push to production with that kind of performance. As we dug into it, what we discovered is that the community server that we were using, Telligent Community was the name of the system, was doing something that was causing the system to run really slowly. It said that it was RESTful and that it used RESTful API protocols. But literally what they'd done is simply use the switch in Visual Studio and not really optimize the system at all. And what we discovered was that for our most popular pages, where we had literally thousands of comments and thousands of users who were talking about and discussing the video that we were delivering for Microsoft, those pages were having significant problems because of all the RESTful loops that they were doing. What we discovered was that in the comments, the pages were being rendered with user IDs. And those user IDs would have to be populated by a loop that would go through, take the user ID, call back to the user manager, which was on another server, populate the ID, and then iterate through the entire page. And where we had thousands of comments on a page, which we did for our most popular pages, the system would go through a tight loop, making calls across the network to populate the data. And that was what was causing our one to two-minute rendering time to occur. We did a lot of work to mitigate that problem. We grouped the requests, we cached the data, we did what we could to optimize the network. And we were dealing with IIS, so there was only so much we could do.
Honestly, if we'd had Nginx, we probably could have sped things up quite a bit more, but it was at a time when IIS was the only game in town for .NET applications. In the end, we were able to get it to an acceptable speed and delivered it. But for me, it was one of those moments, those searing moments when I became very, very aware of the difference in performance that you get from having components that talk to each other in memory versus talking to each other across the network. And it really forced me to think about how you architect an application so that it works properly and efficiently over a network connection as opposed to an in-memory connection. So what does all that mean for microservices? Well, with microservices, you're essentially taking this SOA architecture that we built there and putting it into hyperdrive. All of the objects that are within your application are going to be talking to each other over the network, and they're going to be using HTTP for that data exchange. And obviously, from Nginx's perspective, that's a good thing. That gives us a lot of ability to help you manage that communication process. And you can utilize all the features and functionality within Nginx to take advantage of that. And Nginx has been part of the microservices movement from the beginning. We are the number one application downloaded off of Docker Hub right now. The only two items that are downloaded more than Nginx are CentOS and Ubuntu. The largest microservices application delivery systems on the planet, Airbnb, Netflix, Uber, all use Nginx throughout their infrastructure to help them manage their HTTP traffic. And we have been working very diligently internally on microservices as well. We built a very robust reference architecture that we call the Nginx Photosite. It's essentially a photo sharing application that uses Docker containers for all of the core components of the application. We built it using all the different languages that you could use because we wanted to not ground ourselves in any particular language or system. We wanted to show that our solutions worked with whatever type of language that you were building with. So we have Python, we have Ruby, we have Node.js, we use Java, PHP, all the different languages that are popular with our customers out there. We built the system on top of that. We also use a 12-factor app design for the application. So our containers are stateless and ephemeral. They use attached resources as an approach to manage data. And it allows us to scale and manage our containers in whatever way makes sense within the context of the application system. And while we have been working with microservices for a while, we're also good at traffic management. And this architectural change has really introduced the advantages that I talked about before in terms of scalability and in terms of deployment. But it also introduces some challenges. And when you compare it, especially to the application framework of a monolithic application, you can recognize some issues, some things that we call the networking problems. And specifically, it's around service discovery. It's around resource management, or in this case, load balancing, and then how to tackle that performance and security problem that, you know, on a personal experience was very searing for me. Let's talk about service discovery to start with.
And I think it's always good to compare and contrast, you know, a microservice architecture to a monolithic one because that's one that every developer is familiar with. So when you are working in a monolithic application and you have one object that wants to talk to another, the VM takes care of all of that communication protocol for you. You know, when you create the new object, you can just call the method and the VM will handle the pointer reference or the object reference communication between the two objects, and you don't have to worry about it. In microservices, it is not nearly as clean. You have to have a much more aware system to make that service discovery process happen. Typically, there's a service registry of one sort or another. You know, in the case of Kubernetes, it's typically etcd. And it is a database, essentially, that contains all the information about your services that are running and available, what the IP addresses of those services are, and what the port numbers are if they're running without an overlay network. The second issue that microservices introduces, again, in comparison to a monolithic application, is how do you utilize your resources effectively? You know, if you have three instances of a shopping cart service, you want to be able to distribute your requests to those different instances of the shopping cart using the resources of the application most effectively. So you want to be able to distribute them between the three. You want to distribute them to the one that's responding the fastest. You want to distribute them to the one that is closest to the object that's calling it. And you want all of that to happen effectively and transparently for you. But you also want the developer to be able to configure the load balancing mechanism to match the profile of what their system needs. So for example, if you have a stateful service that you need to connect to, you want to have a stateful load balancing scheme that you can take advantage of. So all of these things are very important in being able to utilize your resources effectively in a microservice application. And then the third issue is security and performance. And as I mentioned before, the issue of performance is one that is always present in my mind in terms of designing a microservice application. Being able to very effectively and quickly utilize your resources and your services so that you can respond quickly to a request is really critical. And I think we've been able to do that effectively and easily. But the flip side of that is that you are exposing all of your data across the network. Microservices typically use HTTP and JSON packets as the payload for data being transferred between different systems. And if you're able to tap into the network of your microservice application, you could listen in and hear all of the data of your application being transported and be able to read it fairly easily. For some types of applications, that is an unacceptable risk for your system. And so the solution, of course, is to add SSL encryption for the communication between the different services that you have in place. The problem is that SSL really exacerbates the performance issue that you've been trying to mitigate and working very hard to overcome just in the architecture of the system.
As you can see on the diagram, we have sort of a prototype of what a service call looks like between two microservices using SSL as the protocol for communication. And essentially what happens is the Java service would be creating an HTTP client that would go to your service registry using DNS and request an IP address of one of the user manager instances that it wants to talk to. It would get back that IP address. It would start the SSL handshake process, which is a nine-step process to fully complete the key exchange. It would then make the request to the user manager, get the response back, consume that data, close the connection, and garbage collect that HTTP client that it created. And for every request to the user manager or any other service that it was doing, it would go through that same process in order to get that data. And that's a fairly CPU-intensive process, and it adds many hundreds of milliseconds to the request process. And especially as you start having a deep call chain, that becomes a significant problem. So we think that we have solutions that address all three of these networking problems. Specifically, we have a solution that is very focused on answering how to do really robust service discovery. Our architectures address the load balancing issue and how to utilize your resources effectively. And we have a solution for really improving the performance of the encryption process so that you get a 77% increase in performance when using our architecture versus a straight SSL solution. So let's get into the architectures. We've come up with three different models, and these architectures are not mutually exclusive. In fact, there are good reasons for mixing and matching them. The three models that we're going to be talking about are the proxy model, the router mesh, and the fabric model. The fabric model is the most complex of the three and kind of turns load balancing on its head, so we're going to spend probably the majority of our time addressing that. But they are all very robust, and different use cases require different models; depending on what you need to do, we think that at least one of them will satisfy your needs. All right, so the first one is the proxy model. And this model very much reflects the way that most people use Nginx within their application. And a lot of people use Nginx in this capacity with monolithic applications as well. Essentially, it's the idea of putting Nginx in front of your application to deal with inbound internet traffic. The Nginx instance in this case could do things like SSL termination, traffic shaping, and security. It could provide a caching layer to improve the performance of your application. Many of our customers use Nginx open source. We also have our Plus product, which provides you with things like robust load balancing and a better ability to do dynamic service discovery, which is very valuable, particularly for microservice applications where you are scaling the individual services up and down and having a changing pool of application instances as the system needs to respond to different levels of traffic and different types of requests that are coming in at any given time. And OpenShift is really designed around this because it uses Kubernetes. There's the Ingress Controller model, which we have a solution around, and I'll be talking about that a little bit more in a couple of minutes. So, again, the proxy model is really focused on dealing with internet traffic.
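[Editor's note: to make the proxy model a bit more concrete, here is a minimal, hypothetical sketch of what an edge NGINX Plus configuration along these lines might look like. The service name, DNS address, and certificate paths are purely illustrative assumptions, not values from the reference architecture.]

```nginx
# Edge / proxy-model sketch (illustrative names only)
resolver kube-dns.kube-system.svc.cluster.local valid=5s;  # assumed cluster DNS address

upstream pages {
    zone pages 64k;
    # NGINX Plus re-resolves the DNS name and updates the pool as pods come and go
    server pages.myproject.svc.cluster.local service=http resolve;
}

proxy_cache_path /var/cache/nginx keys_zone=edge_cache:10m;

server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/certs/edge.crt;
    ssl_certificate_key /etc/nginx/certs/edge.key;

    location / {
        proxy_cache edge_cache;   # caching layer to absorb traffic spikes
        proxy_pass http://pages;  # SSL terminated at the edge, plain HTTP to the service
    }
}
```

The `service=` and `resolve` parameters on the upstream server line are the NGINX Plus dynamic service discovery feature mentioned above; an open source proxy-model configuration would look much the same, minus that dynamic re-resolution of the pool.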
You can think of it kind of as a shock absorber for your application. And with our Nginx Plus commercial product, we have the ability to do that dynamic connectivity back to your ever-changing pool of microservice applications. We have been working with OpenShift 3.3 and have been able to actually implement all three of these models with OpenShift. And I want to take a moment to talk about how we did that. So, with the reference architecture, we have a proxy model system that provides a kind of abstracted Ingress Controller functionality. For our reference architecture application, we have included authentication in it, so it has an OAuth agent that does authentication for all of the traffic coming in and attaches an authentication token to the request so that as it's passed down the stack, the user is identified through the system. Unfortunately, Kubernetes Ingress does not support authentication right now, so our proxy model doesn't fit into the Kubernetes Ingress Controller format. However, Nginx does have an Ingress Controller system that we have open sourced and made available. You can download it off of our GitHub account. I believe it's under the Nginx organization, in the Ingress Controller repo. And we have both an open source version as well as the Nginx Plus version, which provides some extra features that you can take advantage of. Some things to know about the OpenShift Ingress Controller implementation: it does require you to play around with the permissions. As I mentioned before, one of the things that we really like about OpenShift is it has a more robust security model than the standard Kubernetes system, but that also poses some challenges in terms of what applications and what parts of your application get access to the API. And because of that, we're going to be publishing a blog post around implementing the Ingress Controller within OpenShift. So know that we will be giving you some information about how to implement that for your systems on OpenShift. All right. So one of the things about the proxy model is that it is very focused around that edge routing scenario, the use case of dealing with internet traffic coming into your microservices application. And it doesn't really concern itself with how your microservices talk to each other. I'm starting to get my hoarse voice again. So the router mesh model is really focused around trying to provide a more robust system for managing your internal traffic. We do recommend that you have some sort of edge routing management system, so a proxy model-like system, to deal with internet traffic. But then within your application, we have built out what we call the router mesh, and it works in a capacity where each of the services calls the router mesh to distribute requests between the different services that you have available. So in this diagram here, if the Pages microservice needed to talk to Service 2, it would make a request to the router mesh, which would be able to call the different instances of Service 2 and do things like properly load balance it, do things like cache some of the data, and even provide features and functionality like the circuit breaker pattern, which I'll talk about in a moment. The router mesh is a system that hooks into the Kubernetes API and monitors the event stream of service changes that Kubernetes emits.
So it is regularly updating the services that are available and the instances that are available for each of the services. And we do that through an agent running alongside Nginx within the container of the router mesh system, listening for those events. And then we also utilize our resolver feature in the Nginx Plus version to dynamically make requests to each of the services that we're load balancing against to update the pool of instances on a dynamic basis. This is a very powerful system because it really centralizes your request management and gives you an ability to really track the performance of the applications within your system, a centralized place for dealing with all of the metrics that are coming out about traffic in your application, and a good place to implement something like the circuit breaker pattern. For those of you who are not familiar with the circuit breaker pattern, it is a pattern that is designed to really provide resilience within your application. It utilizes active health checks to check the health of your microservices to make sure that they are available and ready to respond. And one of the advantages of using active health checks is that it allows your microservices to do an introspective analysis of whether or not they are in a healthy state rather than waiting for the service to actually fall over and die, which is what most passive health check monitors do. They don't have an ability to analyze the individual health elements of the application. If you have a service instance that is unhealthy, the router mesh can do things like route requests around it to other healthy instances. It can also use retry logic to retry the connection as it becomes available. And in the worst case scenario, we can provide cached data. Even if the entire service is down, we can maintain service continuity by using old, stale cache data that is available, particularly for read-type services. That is a very valuable feature. For example, in our reference architecture, we have a content service that provides data to fill some of the pages on our application, and that service can die altogether, and we can continue serving up the pages because we have that content cached at the router mesh level. So as I said, it really gives you robust service discovery, and I'll talk about that mechanism shortly. It allows you to utilize all of the advanced load balancing features within Nginx. Rather than just your simple round-robin system, it can take advantage of more robust things like least connections or least time load balancing. And it will allow you to implement the circuit breaker pattern. In terms of the OpenShift implementation, it has a Kubernetes event listener, and it again ties into the Kubernetes API to get the service information and the instance information from Kubernetes. For each of the services that you want to load balance, you will need to add an LB_SERVICE environment variable so that we know which services you want to utilize. Each service needs to be implemented as a Kubernetes service. So in the type definition, you need to say Service. And like the Ingress controller, it needs to have privileged access to the API. So you need to play around with the permissions model in order to give the router mesh that capability in order to work within your OpenShift system.
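[Editor's note: as a rough illustration of how those router mesh behaviors map onto configuration, here is a minimal sketch for a single load-balanced service, combining dynamic DNS resolution, least time load balancing, active health checks, retries, and stale-cache fallback. The service name, namespace, and URIs are assumptions made for the example, not the reference architecture's actual values.]

```nginx
# Router mesh sketch for one service (illustrative names only)
resolver kube-dns.kube-system.svc.cluster.local valid=5s;

upstream content_service {
    zone content_service 64k;
    least_time header;   # favor the instance that is responding fastest
    server content-service.myproject.svc.cluster.local service=http resolve;
}

proxy_cache_path /var/cache/nginx/content keys_zone=content_cache:10m;

server {
    listen 80;

    location /content/ {
        proxy_cache content_cache;
        # Circuit-breaker-style behavior: retry another instance on failure,
        # and keep serving stale cached content if the whole service is down.
        proxy_next_upstream error timeout http_500 http_502 http_503;
        proxy_cache_use_stale error timeout http_500 http_502 http_503;
        proxy_pass http://content_service;

        # Active health check (NGINX Plus): mark instances unhealthy before they fall over
        health_check uri=/health interval=5s fails=2 passes=2;
    }
}
```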
All right. So the final model is what we call the fabric model. And like the other two models that I talked about, it really benefits from having a proxy model-like system in front of the application to handle that incoming HTTP traffic. Where it differs from the other models is that instead of having a centralized load-balancing system, what we've done is pushed load-balancing down to the container level, so that each of the containers has an instance of Nginx Plus, in this case, managing all of the HTTP traffic that is both coming into and going out of the container. The big benefits that you get from this are, you know, a robust service discovery model, really powerful load-balancing features, but most importantly, you get high performance and encryption automatically within your system, so that you can have a very high-performance, stateful, encrypted network within your OpenShift application. So I always find it's useful to go back to this diagram to talk about the process again. So, you know, let's go through that process of where the investment manager instance up at the top needs to talk to one of the user manager instances down below. You know, in this case, the Java service would create an HTTP client. The client would then do a DNS request to the service registry and ask for an IP of one of the user managers. The registry would respond with the IP address. The HTTP client would go through the nine-step SSL key exchange process to establish the SSL connection. It would make the request. It would get the response. It would close down the connection. It would garbage-collect the HTTP client, and it would go through that process for every single request that you have for your microservice application. In the fabric model, you can see that having NGINX Plus in each of the systems changes around the way that the communication between the systems works. And I'm going to go into detail on how all of this happens in just a second. So here you have that same Java service. And instead of talking to the user manager or even the service registry, it's talking only to NGINX Plus here. When it creates an HTTP client, it talks to localhost and a route that would be user-manager within the NGINX Plus instance. And NGINX Plus would manage that connection to all of these systems. Instead of having that service discovery process happen on every request, NGINX Plus has a resolver feature that runs asynchronously within the application and is regularly checking the service registry for all instances of the user manager and adding and subtracting those from the load balancing pool on a regular basis. So it doesn't need to make a request to the service registry every time the Java service wants to make a request to a user manager; it only needs to do it every three seconds or so. So you actually reduce the load that you're putting on the service registry to get that DNS information. Also, because it has all the information about the instances, it can make a much more intelligent decision about how to load balance the request. And one of my favorite load balancing schemes for microservices is the least time load balancing scheme. In least time, the NGINX Plus instance evaluates which instance in the load balancing pool is responding the fastest and skews requests toward that instance, all the time maintaining a moving average of which instance is responding the fastest.
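[Editor's note: to give a feel for what the local NGINX Plus instance in the fabric model might look like, here is a minimal, hypothetical configuration sketch; it also shows the keepalive and SSL session reuse settings discussed next. The DNS address, service names, ports, and paths are illustrative assumptions rather than the reference architecture's actual configuration.]

```nginx
# Fabric model sketch: the NGINX Plus instance running inside each service container
resolver kube-dns.kube-system.svc.cluster.local valid=3s;  # re-resolve every few seconds, asynchronously

upstream user_manager {
    zone user_manager 64k;
    least_time header;   # skew requests toward the fastest-responding instance
    # SRV-based lookup against a named port; the pool is updated dynamically
    server user-manager.myproject.svc.cluster.local service=https resolve;
    keepalive 32;        # hold connections open between NGINX Plus instances
}

server {
    listen 127.0.0.1:80;   # the application code talks only to localhost

    location /user-manager/ {
        proxy_http_version 1.1;
        proxy_set_header Connection "";   # required so upstream keepalive connections are reused
        proxy_ssl_session_reuse on;       # avoid repeating the full SSL key exchange on every request
        proxy_pass https://user_manager;  # encrypted hop to the remote NGINX Plus instance
    }
}
```

So the application code simply calls a localhost route such as /user-manager/, and the local NGINX Plus instance takes care of the discovery, the load balancing, and the persistent encrypted connections described next.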
This has a benefit of also sort of biasing the request chains to instances that are local to the system. So if you have large systems that are the hosts for your OpenShift application, many times the instances of your microservices will be on the same host, and the least time method will bias towards those instances because it's always evaluating the response rate of the system. Finally, NGINX Plus, because it's talking to another instance of NGINX Plus, which is always running on each of the containers, can create a persistent connection between the NGINX Plus instances. Using keepalives, it can maintain that connection and reuse it over and over for all of the requests between the Java service and the PHP service, so that you don't have to recreate the SSL key exchange process over and over. In our tests, we found that there was a 77% increase in connection performance because of that. Obviously, you can also build in the circuit breaker functionality with the instances of NGINX Plus using active health checks. We have that retry ability and caching logic. We also have a much more robust ability to deal with service failure. If you have alternative service options or you really understand the failure profile of the services, you can build in things like rate limiting. You can build in things like backup service options as to what you want to do in case that service is unavailable. There's a lot of power and flexibility in terms of how you implement the circuit breaker pattern within the fabric model as well as within the router mesh model. The fabric model provides robust service discovery as I described, very advanced load balancing features, you can build in the circuit breaker pattern, and most importantly, you get high performance SSL, a stateful SSL network within the application. In terms of how we implemented this within OpenShift, each application needs to run as a Kubernetes service. We found that naming the ports within the YAML file was very beneficial because, if, for example, you want to run most of your services over HTTPS but have some sort of health check access over plain HTTP, or vice versa, you can name the ports and utilize that in the service discovery process to get back both the port number and the IP address. The implementation proved to be very, very clean for us in terms of implementing the fabric model, and we were able to get some very good performance out of it.
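[Editor's note: the port-naming point above might look roughly like the following service definition; the names, ports, and the headless clusterIP setting are illustrative assumptions that fit the DNS SRV-based discovery described earlier, not the reference architecture's actual manifests.]

```yaml
# Hypothetical Kubernetes service for a fabric-model microservice (illustrative values)
apiVersion: v1
kind: Service
metadata:
  name: user-manager
spec:
  clusterIP: None        # headless, so DNS returns the individual pod endpoints
  selector:
    app: user-manager
  ports:
  - name: https          # named ports show up in DNS SRV records,
    port: 443            # so the resolver gets back both the IP and the port
    targetPort: 443
  - name: http           # e.g. a separate plain-HTTP port for health checks
    port: 8080
    targetPort: 8080
```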
I would be remiss if I didn't say that there are some issues with implementing the fabric model. The first, of course, is that Docker recommends that you use one service per container. The idea here is that you should not have multiple things running in a container. You don't want it to be a VM. You want to keep your Docker images simple. More importantly, it means that application failure within the container means container failure as well. But Docker recognizes that there are a lot of instances where this doesn't apply and would be very restrictive in terms of what you need to do in order to implement your application. So it is only a recommendation. We have worked really hard to try and keep this as simplified as possible and in fact have come up with a solution where process failure of either your application code or Nginx causes the container to fail as well. So you get that close association between container failure and application failure within the fabric model as we built it. Finally, and I think this is the other issue, using the fabric model you do add another layer to the stack. But it does provide a lot of power to the development team. And we think that for companies or organizations that need to have encryption within their system, it really provides you with high performance, and you don't have to sacrifice any of that performance in order to really make your application secure. And we've built out a bunch of tooling to make this process simpler and not force you to go through all the complexities of implementing reverse proxy SSL settings within the Nginx configuration. We have a configuration generation tool where you essentially just define your service endpoints in a YAML file and we will do all the rest of the work for you. So that's my presentation. Thank you. Does anybody have any questions? Well, you did an awesome job and you didn't lose your voice, so I'm pretty impressed with that. We're almost at the end of the hour too. There haven't been any questions in the chat, which means you've done an awesome job or you've stunned and amazed everybody. So, one, you mentioned a couple of times that you're writing a blog. Is there a link or anything to some reference documentation on your OpenShift implementation that you have today to share? We have the old blog post that we did for the original implementation, and that was on OpenShift 3. That is up. Given everything that we've seen in 3.3, I think we're going to be revisiting that and updating the blog post, because 3.3 really delivers on the vision and allowed us to implement all of the router mesh and the fabric model as well as the Ingress Controller. So expect to see more blog posts from us shortly. We've really been enjoying working with it. All right. Well, let's see if there's anyone else. There's one question coming in from Aresh. I understand that Nginx Plus is the commercial offering from Nginx, which supports running Nginx within containers. Is the Nginx Plus implementation closed source? What are the benefits over HAProxy together with Consul or OpenShift's internal capabilities? That's a good question. And yes, particularly for the fabric model, Nginx Plus is required in order to make the system work effectively. And it is a commercial, closed source implementation of Nginx. The biggest feature that we have in Nginx Plus, the feature that really makes the fabric model work, excuse me, is the resolver feature. And that's the ability for Nginx Plus to do that service discovery against the DNS and change the load balancing pool of the instances that we're connecting to dynamically, so that it is regularly responding to changes within your environment. As opposed to HAProxy, the things that we bring are that our resolver is much more robust than the one in HAProxy. Honestly, it looks like they've deprecated its functionality. They never implemented the SRV record capability, which is something that we use extensively and is the reason for the port naming recommendation that I provided. Also, HAProxy does not allow you to implement the circuit breaker pattern within the application. Those are the downsides of using HAProxy and some of the benefits that you get from Nginx Plus.