Hello everybody and welcome again to another OpenShift Commons briefing. This time we're really pleased to have some of the folks from NGINX with us. Michael Pleszkopf will be talking about, as it says on the screen, load balancing applications on OpenShift. He's also going to be talking about their new NGINX Plus Ingress Controller, and how to use it and configure it with OpenShift. I'm really pleased to have Michael here with us today. So the format is: ask questions in the chat, and we'll have live Q&A at the end. This session is being recorded, so it will be up on blog.openshift.com in a couple of days; you can review it there or look on our YouTube playlist as well. So without any further ado, I'm going to let Michael introduce himself and take it away. Thanks, Michael.

Well, thanks again. Thanks for inviting us. It's a pleasure to be here. Today we will talk about load balancing applications on OpenShift with NGINX and NGINX Plus. Here is a little information about me: my name is Michael Pleszkopf, I work as a platform integration engineer at NGINX, and I'm based in Cambridge, United Kingdom. And here's my email, michael@nginx.com, if anybody would like to connect with me.

Here is our agenda for today. We start with a discussion of NGINX and NGINX Plus and why they are relevant for load balancing on OpenShift. Then we will compare Route and Ingress resources, that is, the OpenShift-native load balancing resource and the Kubernetes-native load balancing resource. Then we will introduce the NGINX and NGINX Plus Ingress Controller, and show how to deploy and use the Ingress Controller in a live demo. Well, I hope that sounds good. So let's start.

What is NGINX? As many of you know, NGINX is an open source reverse proxy, load balancer, content cache, and web server. According to W3Techs figures from August 2017, NGINX was the number one web server among the busiest websites in the world.
It is also the number one image on Docker Hub. And why is NGINX relevant for load balancing on OpenShift? First of all, it is a very high-performance load balancer with low memory usage. It has been around for more than a decade, so it is stable and very well tested. Support for graceful reloads allows you to reconfigure NGINX very often, which is crucial for dynamic environments like Kubernetes or OpenShift. It has many advanced features: for example, SSL termination, advanced content-based routing, authentication and access control to protect your applications, as well as connection and request rate limiting, again, to protect your applications from denial-of-service attacks. It supports HTTP/2, as well as many other features that are very relevant for load balancing. Another important thing is flexibility. By that, I mean that you can deploy NGINX or NGINX Plus as an OpenShift application, and deploy and manage it as a native OpenShift application.

What is NGINX Plus? NGINX Plus is our commercial version of the product, built on top of NGINX. It comes with several advanced features for application delivery: active health checks, session persistence, and additional load balancing methods. NGINX Plus also comes with several APIs: an API for dynamic configuration, an API for getting various status information, and an API for its built-in key-value store. Those features allow you to easily manage and monitor NGINX Plus, as well as the applications that you load balance. There are also a number of security features. One of them is support for authentication with JSON Web Tokens (JWT); another is that NGINX Plus supports several web application firewalls. Also, NGINX Plus is commercially supported: you can get support from NGINX.

Now, how do you use NGINX or NGINX Plus on OpenShift? There are two main options.
The first option, which is on the right, is to configure NGINX Plus using its native configuration and utilize DNS service discovery to make NGINX Plus discover application endpoints. This is only available in NGINX Plus, with its support for DNS service discovery and SRV records. You can read more about this method if you follow the link on your screen. The other option is to use an Ingress Controller. With an Ingress Controller, you configure NGINX or NGINX Plus through the Kubernetes Ingress resource, which is the native Kubernetes resource for creating a load balancing configuration. In this case, the NGINX Ingress Controller, which is a special piece of software, takes care of generating the NGINX configuration based on the Ingress resources that you deploy. In this talk, we will learn about using the Ingress Controller.

So let's compare Ingress resources and Routes. Routes are OpenShift-native resources, and they appeared in OpenShift long before the Ingress resource appeared in Kubernetes. The Route offers several features, such as support for HTTP and HTTPS load balancing, as well as SSL-secured TCP load balancing. It supports path-based routing, as well as SSL between the router and the backend applications. Weighted load balancing across multiple services is also supported. The Ingress resource, on the other hand, offers a slightly more limited set of features compared to OpenShift Routes. However, users have many more options for which load balancer they can use with the Ingress resource. For OpenShift Routes, there are only two load balancers: either HAProxy, which comes with OpenShift by default, or the F5 load balancer. With the Ingress resource, there are Ingress Controller implementations for several different load balancers, including NGINX and many other popular load balancing solutions. It is worth noting that the Ingress resource in OpenShift is currently in tech preview.
Here we can see on the screen a Route and an Ingress resource for the same load balancing requirements. We have an application with the hostname www.example.com, and we have one path-based rule: we want requests with a URL that starts with /test to be load balanced to our service, named service-name. You can see both the Route resource and the Ingress resource for those requirements, and as you can see, they are very similar.

As I said, there are many options for load balancers that support the Ingress resource, and you can use NGINX with Ingress resources. Moreover, there are two Ingress Controller implementations for NGINX: one developed by NGINX, and one developed by the Kubernetes community. So you have multiple options. It is worth noting that only the NGINX-developed implementation supports NGINX Plus. The information in this session is relevant for both NGINX Ingress Controllers, either ours or the community one.

The NGINX Ingress Controller is deployed in a container. In the container, you have NGINX as well as the Ingress Controller software. The Ingress Controller software watches the Kubernetes API for any deployed Ingress resources and the current cluster state. When the Ingress resources or the cluster state change, the Ingress Controller software reconfigures NGINX.

In our demo, we will deploy the NGINX Plus Ingress Controller on OpenShift, deploy an example application, configure load balancing for this application using Ingress, and then play with the application by scaling it up and down. Okay, so let's open our terminal window. On my local machine, I have an OpenShift cluster running in a virtual machine: it is a one-node cluster created with Minishift. So the first thing we will do is deploy the NGINX Plus Ingress Controller.
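As a sketch of what the two resources from the comparison slide look like for those requirements (the exact resource names and the service port are illustrative assumptions, since the slide itself is not in the transcript):

```yaml
# OpenShift Route: one path-based rule for www.example.com/test
apiVersion: v1
kind: Route
metadata:
  name: example-route
spec:
  host: www.example.com
  path: /test
  to:
    kind: Service
    name: service-name
---
# The equivalent Kubernetes Ingress resource (extensions/v1beta1 was
# the Ingress API version in use at the time of this briefing)
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /test
        backend:
          serviceName: service-name
          servicePort: 80   # assumed service port
```

The two definitions express the same host and path rule; the main structural difference is that an Ingress can carry multiple hosts and rules in one resource.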
To do that, first we need to create a service account for our Ingress Controller container. With the create serviceaccount command, we create the service account and name it nginx-ingress. It is worth mentioning that I am currently logged in as an admin, and we will deploy the Ingress Controller in the default project. So let's create this service account for the Ingress Controller.

Now we will create the role for our Ingress Controller. Let's take a look at that role. This role allows the Ingress Controller software to communicate with the Kubernetes API: we explicitly define all the API resources that the software needs access to. Let's create this role. Great. Now that we have created the cluster role, we will bind it to our service account with the adm policy command: we add the cluster role nginx-ingress to our service account, which is system:serviceaccount:default:nginx-ingress.

Great. The last step regarding permissions is to add another policy: we add the privileged policy to the service account. Okay, I made a typo somewhere here. So, we added the privileged policy to our nginx-ingress service account. This is required for three reasons. First, NGINX needs to bind to the privileged ports 80 and 443 on the node where it will be deployed. Second, NGINX runs as the root user inside the container. And third, the NGINX Ingress Controller software writes configuration into the root file system of the container. For those reasons, we added the privileged policy to our nginx-ingress service account.

Now let's deploy the Ingress Controller. First, we deploy a secret with the default SSL certificate and key, which is used for the default server in the NGINX Plus Ingress Controller. Great.
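The permission steps just described can be sketched with oc commands like these (the service account and role names match the ones used in the demo; the YAML file names are assumptions, as the demo reads them from local files we cannot see):

```shell
# Create the service account for the Ingress Controller (default project)
oc create serviceaccount nginx-ingress

# Create the cluster role that lets the controller software talk to the
# Kubernetes API (Ingress resources, services, endpoints, secrets, ...)
oc create -f nginx-ingress-role.yaml

# Bind the cluster role to the service account
oc adm policy add-cluster-role-to-user nginx-ingress \
    system:serviceaccount:default:nginx-ingress

# Add the privileged security context constraint, so NGINX can bind
# ports 80/443 on the node, run as root, and write to its file system
oc adm policy add-scc-to-user privileged -z nginx-ingress

# Deploy the default SSL certificate and key for the default server
oc create -f default-server-secret.yaml
```

The `-z` shorthand refers to a service account in the current project, which here is default.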
And now we are ready to deploy the replication controller with the NGINX Plus Ingress Controller. Let's look at this definition file. We will deploy one replica of the Ingress Controller, as you can see, with the service account name that we just created. The image that you see, the NGINX Plus Ingress Controller image, is already available in this cluster. We also map the ports 80, 443, and 8080 to the same ports on the OpenShift node. We can run the get pods command and see that our NGINX Plus pod is running. So let's try to connect to it.

NGINX Plus comes with a dashboard, and this dashboard shows you real-time metrics, so you can quickly see what's going on in a particular NGINX Plus instance or container. Currently, NGINX Plus is not configured to load balance any application, so what we see here is a very limited number of metrics. But now that we have deployed the Ingress Controller, let's deploy the demo application and configure load balancing for it.

Let's go back to our presentation. We will deploy the cafe application. This cafe application consists of two services, the coffee service and the tea service, and each service runs in a separate replication controller. Those services are very simple web applications, as you will see. Mika, can you make that full screen? Yeah. Thanks. Great. And on the right, you see the Ingress resource for configuring the application. But let's deploy the application first. First, we deploy the replication controller for the coffee service and a service for it. While the coffee pods are being created, let's quickly take a look at those files, so that you can see there's nothing sophisticated here. With the coffee replication controller, we create two replicas of our coffee container from the NGINX demo image, and we expose the container port 80. Next is the coffee service.
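A minimal sketch of the Ingress Controller replication controller just described (the image reference is an assumption, since the demo uses an image already loaded into the cluster; ports and the service account name follow the transcript):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-plus-ingress-rc
spec:
  replicas: 1                         # one replica of the Ingress Controller
  selector:
    app: nginx-plus-ingress
  template:
    metadata:
      labels:
        app: nginx-plus-ingress
    spec:
      serviceAccountName: nginx-ingress   # the account created earlier
      containers:
      - name: nginx-plus-ingress
        image: nginx-plus-ingress         # assumed local image name
        ports:                            # mapped to the same node ports
        - containerPort: 80
          hostPort: 80
        - containerPort: 443
          hostPort: 443
        - containerPort: 8080             # NGINX Plus live dashboard
          hostPort: 8080
```

Because the ports use hostPort, the controller is reachable directly on the one-node Minishift VM without an extra Service in front of it.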
For the coffee service, we just create a service that selects all the coffee pods created by our coffee replication controller, and we define the service on port 80, the same port that the application uses.

So we see that we get an error. I will try to delete the replication controller. The error that we see is because I missed one step. Currently I'm logged in as an admin. What we will do instead is log in as a developer, using another account, developer, and deploy our application with the developer account, not the admin account. So let's do this. But first: currently on OpenShift, you need to explicitly allow users to work with Ingress resources. So we need to create a policy that allows a user to create Ingress resources, and then we need to bind this policy to an existing user. Let's do this first. We have a cluster role called ingress-admin that we will create, and this simple cluster role allows a user to create and manipulate Ingress resources. So let's create this cluster role. And now we will add this role to our existing user: the user is developer, and the namespace this user belongs to is my-project.

There is another step that we must perform: our cafe application, the coffee and tea web applications, run as the root user, and by default, running container processes as root is not allowed on OpenShift. So we must explicitly allow that for our particular project. What we're doing is, for the default service account in the project my-project, we allow containers created with this service account to run processes as root inside. Okay, now we can log in as our developer user.
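Those two permission steps can be sketched like this (the role, user, and project names follow the demo; using the anyuid security context constraint to allow root processes is an assumption about how the demo grants it):

```shell
# Cluster role that lets a user create and manage Ingress resources
oc create -f ingress-admin-role.yaml

# Bind that role to the developer user in the my-project namespace
oc adm policy add-role-to-user ingress-admin developer -n my-project

# Allow containers run under the default service account in my-project
# to run processes as root (the demo application images run as root)
oc adm policy add-scc-to-user anyuid \
    system:serviceaccount:my-project:default

# Switch to the developer account for the rest of the demo
oc login -u developer
```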
And we are now inside my-project, the my-project namespace. So now we create those replication controllers and services. First, we create the replication controller and the service for our coffee service. One container is running, and the other is running as well. Similarly, we have the replication controller for the tea service and the service for the tea service, so let's create those. Those files are almost identical to the coffee files; the only difference is the name that is used. Let's check that the components are running. Great.

So now we have deployed the application, and previously we also deployed the Ingress Controller. The last missing step is to configure load balancing. Let's go back to our slides. We have the application with two services, the coffee service and the tea service. What we would like to do is expose this application through the cafe.example.com DNS name. We want to secure our application so that all requests are secured with SSL. We also want to define two path-based rules, such that requests with a URL that starts with /tea are load balanced to the tea service, and requests with a URL that starts with /coffee are load balanced to the coffee service. Those are simple requirements, and they can be completely addressed by the Ingress resource. On the right, you can see the corresponding Ingress resource. In this resource, on line four, we give it the name cafe-ingress. From line six until line nine, we configure SSL termination: we specify that for the host cafe.example.com, we apply the SSL certificate and key from the secret cafe-secret. We will also deploy the SSL certificate and key in a separate resource, called a secret.
Then, from line 10 until line 21, we have the two path-based rules. First we define the hostname for the application, cafe.example.com, and then we have the two path-based rules that I was talking about. Let's deploy our cafe-secret, which, again, contains the SSL certificate and key. And let's finally deploy the Ingress resource, cafe-ingress. Great.

Let's go back to our dashboard. As you can see, more information is now available. If we go to the server zones tab, we see that we have deployed one application, cafe.example.com, and if we go to the upstreams tab, we see the containers of that application. Here we have the containers of the tea service, three containers, and here we have the containers of the coffee service, two containers. We can also try to access our application through the DNS name, cafe.example.com/tea. If we refresh this page, we see that the response comes from a different container every time. As I was saying, the application that we are running is very simple: as you can see, it returns some information about the container where it is running. So we hit the /tea URL, and similarly we can hit the /coffee URL as well and get responses from the coffee containers. If we go back to our dashboard, we can see that the requests generated some traffic, which is reflected on the dashboard.

Great. What we can do now is try to scale our application. We will scale the coffee replication controller from two replicas to five. Let's do that. What we should see is that those containers are created by Kubernetes, and once created, the endpoints corresponding to those containers are quickly added to NGINX
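The cafe-ingress resource described above can be sketched like this (the backing service names and port are assumptions, as are the exact line positions; the TLS and path rules follow the requirements stated in the demo):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
spec:
  tls:
  - hosts:
    - cafe.example.com
    secretName: cafe-secret          # SSL certificate and key
  rules:
  - host: cafe.example.com
    http:
      paths:
      - path: /tea
        backend:
          serviceName: tea-svc       # assumed service name
          servicePort: 80
      - path: /coffee
        backend:
          serviceName: coffee-svc    # assumed service name
          servicePort: 80
```

Deploying the secret and then this resource is all the load balancing configuration the demo needs; the Ingress Controller translates it into NGINX Plus configuration automatically.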
Plus by the Ingress Controller. Please note that everything I have shown you, except for the dashboard, is also available with open source NGINX; the dashboard allows us to visualize the NGINX Plus configuration and get real-time metrics. However, there is one other particular difference between NGINX and NGINX Plus that the Ingress Controller makes use of: the endpoints that we just added. We can actually scale back to one container, so let's try to do that. Now we have one coffee container. A change like this, changing the endpoints, can be done in NGINX Plus through its API, so it doesn't require changing the configuration at all or doing a reload. As you can see, the current uptime is still 3 minutes. But other than that, and a few other advanced features, the Ingress Controllers for NGINX and NGINX Plus work the same way.

So let's go back to our presentation and take a look once again at the Ingress resource that we used. The Ingress resource allows us to configure simple path-based routing and SSL termination, and to have multiple applications with different hostnames. However, that is pretty much it: the Ingress resource doesn't allow you to do anything else, basically, and as you know, the Route offers a few more features. However, there are a number of extensions to Ingress, and we can use those extensions with NGINX, as I will show you in a moment. NGINX is an advanced load balancing solution, and it provides many configuration options. Those options allow you to fine-tune NGINX behavior and also to use advanced NGINX features. So how do we use those features? How do we fine-tune NGINX behavior? There are two options for fine-tuning NGINX. On the left, you can see the ConfigMap resource. The ConfigMap is another Kubernetes or OpenShift resource that you can create.
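The scaling part of the demo can be sketched with two commands (the replication controller name is an assumption):

```shell
# Scale the coffee service from two replicas to five; the Ingress
# Controller adds the new endpoints to NGINX Plus through its API,
# without an NGINX reload
oc scale rc coffee-rc --replicas=5

# Scale back down to one replica; again no reload is required with
# NGINX Plus, which is why the dashboard uptime keeps counting
oc scale rc coffee-rc --replicas=1
```

With open source NGINX, the same endpoint change would instead be applied by regenerating the configuration and doing a graceful reload.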
In the ConfigMap on the left, we put key-value pairs where we set values for various NGINX configuration directives. For example, here we configure connection timeouts for when NGINX proxies incoming requests, as well as the maximum body size allowed for a request from a client. The NGINX Ingress Controller understands those key-value pairs, and when you deploy a ConfigMap resource with those pairs, the Ingress Controller configures NGINX accordingly. Here we only show three key-value pairs, but there are many other options for fine-tuning NGINX behavior.

The other option is to use annotations. Annotations, again, are key-value string pairs that you can attach to any Kubernetes or OpenShift resource, including an Ingress. On the right, we have an Ingress resource similar to the one we deployed in our cafe example application, and in it we redefine the values we configured in the ConfigMap to different values: again, the timeouts and the client max body size. While the ConfigMap allows you to configure such parameters globally, meaning they are applied to every Ingress resource that you deploy, with annotations it is possible to redefine those parameters and apply them only to a particular Ingress resource.

Okay, so the next thing is how to use advanced NGINX features. Again, some of them are available through annotations, and some of them are also available as ConfigMap keys. Here on the screen, you can see how we can configure JWT authentication, which is available in NGINX Plus: we have two annotations, and through those two annotations you can enable and configure that particular feature.
For other NGINX features, for example session persistence, the PROXY protocol, or configuring SSL between NGINX and the backend applications, we have special annotations as well. We also have a very powerful set of annotations called snippets. Snippet annotations allow you to insert native NGINX configuration into the generated configuration. For example, if you want to customize NGINX, or if you want to use a particular NGINX feature that is not available through the other annotations, you can insert the corresponding NGINX configuration snippet using the snippet annotations. For example, here we configure basic HTTP authentication, as well as client SSL certificate validation. If you're familiar with NGINX configuration, there is an http context, as well as server and location blocks: snippets are available for the location and server blocks, and there is a ConfigMap key to insert snippets into the http context. The final way to use advanced NGINX features is by simply customizing the templates. The Ingress Controller software generates the configuration from a template, and you can customize the template and change it in a way that makes sense for your requirements. So that is one of the several options you can use to configure advanced NGINX features.

To summarize: the Ingress resource is the Kubernetes way of configuring load balancing, particularly HTTP load balancing, and as we saw, it is very straightforward to use. There are several load balancers that you can use with Ingress, and the number of options is greater than for the Route resource. However, the Ingress resource is more limited than OpenShift Routes, and it lacks many important features. However, Ingress Controllers support various extensions, which can be annotations, ConfigMap keys, or simply customized templates used to generate the configuration.
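A sketch of what a snippet annotation looks like on an Ingress (the annotation key follows the NGINX Ingress Controller's nginx.org naming convention; the htpasswd path and realm name are illustrative assumptions):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    # Inserted verbatim into each generated location block:
    # enables HTTP basic authentication for the cafe application
    nginx.org/location-snippets: |
      auth_basic "cafe";
      auth_basic_user_file /etc/nginx/secrets/htpasswd;  # illustrative path
spec:
  rules:
  - host: cafe.example.com
    # path rules as in the earlier cafe example
```

Because snippets are raw NGINX configuration, they expose any directive the controller's templates would otherwise hide, at the cost of the controller not validating what you insert.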
Extensions are one of the powerful features of Ingress Controllers. As for NGINX or NGINX Plus as an Ingress Controller: if you use NGINX as an Ingress Controller, what you get is high performance and stability, and flexible deployment, because you deploy the Ingress Controller in a container and manage it as a native OpenShift application. As you saw, once you deploy an Ingress resource, the reconfiguration of NGINX happens very fast, and there are many extensions available that let you use the advanced features of NGINX. If you use NGINX Plus as the Ingress Controller, you also get the advanced features available in NGINX Plus, such as real-time metrics, session persistence, support for JWT, dynamic configuration, and a few others. And when you use the Ingress Controller with NGINX Plus, it is officially supported by the NGINX support team.

With that, I conclude my presentation. Here are a couple of resources that you will find useful. All the YAML files, as well as the Ingress Controller itself, you can download from GitHub by following this link. There is also an Ingress Controller container image available on Docker Hub that you can use. If you would like to try NGINX Plus, it is easy to get a free trial: just go to the website and fill out the form. And I mentioned another option for using NGINX on OpenShift, which is to use DNS service discovery for discovering application endpoints. If you follow this link, you will find a blog post explaining how to use DNS service discovery with NGINX Plus. Cool. And if you would like to contact me, there is my email as well.

Awesome. Thanks, Michael. There are a couple of questions that have come up in the chat. The last one: Paul is asking, how does the traffic flow from NGINX to the container? From NGINX to the container. So, this is the traffic between NGINX and the application containers, and the connections are established directly.
It goes from the NGINX container to the application container, directly through the overlay network. So, Paul, I'm going to unmute you so you can ask your second question directly, if you unmute yourself, because I think it's a little more detailed if you have a microphone. He's asking: are you using SDN or connecting directly via VXLAN? And if you want, Paul, you can unmute yourself and ask directly. So, are you using SDN or connecting via VXLAN? In this case, the Ingress Controller uses the same network that is used in the OpenShift cluster. Whatever option you choose for networking between the pods in the cluster, the NGINX Ingress Controller will be connected to that same network.

Yeah, okay. Let me just see, he's trying to unmute himself and it's not working, so hang on half a second. Let me see if I can get that working. I'm just going to unmute everybody. No, I don't want to unmute everybody, that could be very noisy. You basically have to unmute yourself, Paul, if you go in and click on it, but it says it's disabled. So that was his question. There were a couple of other questions that I think got answered in the chat, around this being a tech preview and when it would be available, and that's, I think, a roadmap question. I think 3.5 is still in tech preview; I think 3.7 is coming out and it may be available in that, but I'll double check. And then there was a great conversation about whether a UI is available for all of this, and whether we are making one available from within OpenShift. And that's always the great debate, because people want to see the demos with the UI, but they actually use YAML or Ansible or something. So I think they kind of settled that whole conversation in the chat themselves. And that was my question.
And if you can just throw some light on whether a GUI is on the roadmap, or if that is not planned at this point in time, that would help. Yes, Michael. Okay, so thank you for your question. I don't actually have the answer to that; we'll have to contact the OpenShift team. Yeah, that's what I was thinking. I don't think you guys are going to actually write it; I think it's going to be something that has to come out of the engineering UI and UX team at OpenShift, or the folks contributing to it. You're always welcome to contribute something like that to the Origin project as well, and it'll get upstreamed if you have the time and energy. We'd love that.

Let's see, I think that might be most all of the questions. I think it's going to be interesting to see how people branch off, whether they end up using Ingress or Routes with OpenShift, and over time it sounds like the extensions to Ingress may overtake Routes, but we'll see where that all goes. Other than that, I'm not seeing any other questions, unless I'm missing someone. So you've got it. Then someone asked another question: whether we can get the presentation along with the video, whether there's a way to post the presentation somewhere. Yep, I will get the presentation from Michael and add it to the blog post, along with the video and the recording, so it'll all come as one package on blog.openshift.com. Okay, great, thank you.

All right, Michael, thank you very much; very good job on the demo. We're looking forward to when it comes out of tech preview. We'll have you back on again, and we'll see if we can coerce someone into writing some semblance of a UI for this, from the OpenShift team or from the OpenShift Origin community. Well, thanks. Thanks for inviting us, and looking forward to working with you again. Awesome, thanks for coming. All right.
Hope we'll see you all in Austin at KubeCon to talk about this and more, or at the OpenShift Commons Gathering the day before KubeCon in Austin, on December 5th. And if you're interested in coming to the OpenShift Commons Gathering the day before, reach out to me and I'll make sure you get invited. All right, thanks. Take care, guys.