We're good to go. OK, thanks. Hey, everyone. Thanks for joining our session today. We're going to talk about how Nginx and OpenStack work together. I'm Faisal Memon, product marketing at Nginx. And along with me is my colleague Michael Pleshakov, who's going to be doing a demo later of our integration with OpenStack. How many people here actively use Nginx? Very cool. And how many either give it a thumbs up or a thumbs down? All right, I see all thumbs up, two thumbs up over there. Thank you very much. So first and foremost, Nginx is open source software. We have been open source since the first release in 2002. Since then, we have grown to over 140 million sites now being powered by our software, by Nginx. But more importantly, or more telling to me, is that of the top 10,000 sites as ranked by volume of traffic, 50% of those sites run on Nginx. Can I have the clicker? And just a nice logo wall here that shows that if you use the internet today, you likely interact with Nginx at some point. The world's top technology leaders and largest enterprises all rely on Nginx to deliver their applications. This is Igor. He's the guy who created Nginx and founded Nginx, Inc., the company that provides commercial support on top of Nginx. And the problem that he was trying to solve when he started Nginx was very simple: how do you get more usage out of a single, existing server? How can I use the server to handle more users? And he targeted a specific problem with the Apache web server. In this graph, a larger number is worse. That's the amount of memory being used as the amount of concurrency goes up, or as the number of concurrent users increases. We can see here that Apache starts consuming more and more memory as the number of users rises. We keep it pretty flat.
And so he created Nginx as a high-performance web server, something that you could use instead of Apache to deliver static web content with high performance and low resource usage. We've since gone on and created Nginx Plus, which is our commercially supported offering on top of the open source Nginx. So now you can take the open source Nginx product that you like, move it to the edge of your application, use it as your load balancer, use it as your content cache, and use it to do security and authentication for your application. You can run Nginx Plus, just like Nginx, anywhere. We have 10 million downloads of Nginx on Docker Hub, if you're into containers. We run on virtual machines in both private clouds and public clouds. We have an OpenStack Heat template, which Michael is going to be showing later, that demonstrates our integration with OpenStack. You can even run our software on a bare-metal Linux server. And the clicker is not working today. You can use us to replace hardware load balancers. If you have hardware load balancers within your data center, within your enterprise today, you can replace them with a software solution based on Nginx Plus and reduce your cost by up to 85%. This is a quote from one of our customers, Discovery Education. They were facing an expensive upgrade; they were maxing out the bandwidth caps on their load balancer. They looked at Nginx Plus and compared the numbers, and they found that with Nginx Plus they could get all the features and functionality they wanted, but at a quarter of the cost of what they were being quoted for their previous hardware load balancer. OK, thanks. Sorry, we're having some problems with the clicker, but we're back online. We're continuing to innovate at Nginx. Just in the past year, we have introduced UDP load balancing. We feel that UDP is an emerging protocol for the Internet of Things.
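To illustrate the load-balancer role described above, here is a minimal Nginx configuration sketch; the upstream name and IP addresses are illustrative, not from the talk:

```nginx
# Define a pool of back-end application servers to balance across.
upstream backends {
    server 10.0.0.11;    # illustrative back-end addresses
    server 10.0.0.12;
}

server {
    listen 80;

    location / {
        # Proxy incoming requests to the upstream pool;
        # round-robin balancing is the default.
        proxy_pass http://backends;
    }
}
```

This same open source configuration model is what Nginx Plus builds on when replacing a hardware load balancer.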
It also covers existing use cases such as DNS and RADIUS servers, infrastructure that most enterprises already have within their data centers. Exclusive to Nginx Plus are active health checks and a few other advanced features. We have introduced support for HTTP/2 to improve the performance of existing websites. With Nginx, you can start using HTTP/2 without making any changes to your back-end infrastructure. We translate HTTP/2 traffic on the front end back to the HTTP/1 traffic that your application servers already speak. Service discovery: within Nginx Plus R9, our latest release, we added some nice functionality to better integrate with service registries. If you're using continuous integration and continuous delivery to push changes out to production environments, those changes can sometimes get pushed with little to no warning. In an environment like that, the IP addresses and port numbers of your services are dynamically and constantly changing, and you'll need a service registry to store the current IP address and port for each service. Nginx Plus can now integrate with those service registries to automatically get the current location of the services we're load balancing to. We introduced a preview of JavaScript support within Nginx, and we're going to build out this functionality to enable you to use JavaScript in pieces of your Nginx configuration. You can use this to do custom routing, security, and authentication within your application. We introduced a preview of OAuth 2 support. With this functionality, you can now offload OAuth 2 processing to Nginx. Nginx will do the OAuth 2 authentication for you and then feed the user information back to your application in the form of HTTP headers, which your application server already understands. And we have some exciting new features planned for the coming year, most important among them a web application firewall.
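On the HTTP/2 point above, a minimal sketch of terminating HTTP/2 at Nginx while proxying plain HTTP/1.x to the back ends might look like this; certificate paths and the upstream name are illustrative:

```nginx
upstream app_servers {
    server 10.0.0.11:8080;    # back ends speak plain HTTP/1.x
}

server {
    # Terminate TLS and HTTP/2 on the front end.
    listen 443 ssl http2;

    ssl_certificate     /etc/nginx/cert.pem;    # illustrative paths
    ssl_certificate_key /etc/nginx/cert.key;

    location / {
        # Requests are forwarded to the upstream as HTTP/1.x,
        # so the application needs no changes.
        proxy_pass http://app_servers;
    }
}
```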
We've been talking about this for a while, but this is the year that we will see the web application firewall come to both Nginx and Nginx Plus. We will charge a small amount on top of the price of Nginx Plus for access to the commercial rule set. And a final word before I pass it over to Michael: I just wanted to bring it back to Igor and his motivations for creating Nginx. He wanted people to use Nginx, and that's why he made it open source. OK, thank you. And Michael, over to you. Well, thanks, Faisal. The goal of the demo I'm going to show you is to show how easy it is to configure, deploy, and manage Nginx Plus in your OpenStack environment, and also to see various Nginx features in action. Before launching the demo, I'm going to explain what it does and how it works. Yep. So what we're going to do is HTTP load balancing of a simple web application. This application is deployed inside a virtual machine, and we're going to call such a machine a back end, or a back-end instance. We're going to have multiple back ends, because we're going to scale our application up and down. We also have Nginx Plus deployed, also in a virtual machine, and it's connected to the back ends via a private network. The Nginx Plus virtual machine also has a public floating IP address, so we can connect to it from outside of our OpenStack cloud. The number of back ends will change as we scale our application up and down. OK, so how do we deploy such a setup? We're going to use Heat for that. We created a Heat template, which I'm going to briefly show you. This is the template file. The important thing to note here is that we define a backend count parameter, and through this parameter we tell Heat how many back-end instances we want created. In this Heat file, we define multiple resources. The first one is the back-end instances resource, of type Heat ResourceGroup.
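A Heat (HOT) template fragment along the lines described above might look like the following; the parameter and resource names, image, and flavor are illustrative rather than the actual demo template, though OS::Heat::ResourceGroup and OS::Nova::Server are standard Heat resource types:

```yaml
parameters:
  backend_count:
    type: number
    default: 3

resources:
  # A group of identical back-end servers; the group size is
  # driven by the backend_count parameter.
  backend_instances:
    type: OS::Heat::ResourceGroup
    properties:
      count: { get_param: backend_count }
      resource_def:
        type: OS::Nova::Server
        properties:
          image: backend-image     # illustrative image name
          flavor: m1.small         # illustrative flavor
```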
And we pass our backend count parameter to the count property of this resource. The next resource is the Nginx Plus instance. The important thing to note is that we tell Heat to add the list of IP addresses of the back-end instances as this instance's metadata. We're going to use this metadata to reconfigure Nginx. The last two resources are a floating IP and a floating IP association for the Nginx Plus instance. Let's go back to the slides. OK, so how does Nginx get reconfigured? That is the question. The first step is that Heat, after it's done creating the back-end instances, inserts the IP addresses of those instances into the Nginx Plus instance metadata. The second step: on the Nginx instance, there's an agent running, and it's constantly monitoring the Nginx Plus instance for any updates to its metadata. Once it sees an update, it immediately adds or removes back ends from Nginx Plus via its on-the-fly configuration API. This agent is a simple Python application that we wrote. OK, so with that, I'm going to show you the demo. We recorded it, so let's start with the first video. The first step is to deploy our setup in our OpenStack cloud. We do this by creating the stack from the template file. We specify the template file name and the name for our stack. And the important thing is that we specify a backend count of 3, so we're going to create three back ends. OK, on to the next video. Once Heat is done creating the back ends, let's see what's on the Horizon dashboard. We see that all the instances have been created. So what we're going to do now is connect to our application via the public floating IP of the Nginx load balancer. And what we see is that every time we refresh the page, it comes from a different back end, because we're doing load balancing. Now we're going to go to the Nginx Plus live activity monitoring dashboard.
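The core of such an agent is reconciling the back ends listed in the instance metadata against the back ends currently configured in Nginx Plus. The following is a minimal sketch of that reconciliation logic only; the function and parameter names are my own, and the actual API calls to Nginx Plus are omitted:

```python
def plan_reconfiguration(configured, desired):
    """Given the back-end IPs currently configured in NGINX Plus
    and the IPs found in the instance metadata, return which
    servers the agent should add and which it should remove via
    the on-the-fly configuration API."""
    configured, desired = set(configured), set(desired)
    to_add = sorted(desired - configured)       # new back ends from metadata
    to_remove = sorted(configured - desired)    # back ends Heat removed
    return to_add, to_remove
```

Run in a loop against the metadata, this is what lets Nginx Plus track scale-up and scale-down events without a restart.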
The dashboard provides real-time statistics about what's happening in Nginx and the applications that you load balance with it. We're interested in the upstreams tab, which shows us all the back ends configured in Nginx, and we have our back end here with three instances. So what we're going to do now is scale the application up; we're going to add more back ends. Let's do this. We're going to issue a stack-update command, and we specify the number of back ends, 6 in this case. We're going to look at the Nginx Plus dashboard in a moment, and what we expect to see is the new back ends being added. One thing to note: we also set up health checks in Nginx, so every second Nginx sends an HTTP request to the back ends to check their health, and if they don't respond, Nginx marks them as unhealthy. In this demo, the back ends are added before the actual web application services have started on those instances, so Nginx will immediately mark those back ends as down. Let's just see that. You see the back ends are added, but Nginx immediately marks them as failed. Once the web application starts running on those instances, the health checks will stop failing, and Nginx will mark those back ends as healthy again and start distributing traffic to them. OK, let's see how, one by one, they become healthy. Yep, two to go. Perfect. OK, so we can also scale down our application with the same heat stack-update command. Let's just do that. We're using the stack-update command again, and if we look at the Nginx Plus dashboard, what we're going to see is that the health checks for the instances being removed by Heat are going to fail, because Heat terminates those instances before we learn that it terminated them. That's another way to see these health checks in action.
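The one-second active health checks described above could be configured along these lines; the addresses and zone name are illustrative, and health_check is an Nginx Plus directive:

```nginx
upstream backends {
    # A shared memory zone is required for active health checks.
    zone backends 64k;
    server 10.0.0.11;    # illustrative back-end address
}

server {
    listen 80;

    location / {
        proxy_pass http://backends;
        # Probe each back end every second; servers that fail
        # the check are marked unhealthy and taken out of rotation.
        health_check interval=1s;
    }
}
```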
And after some time, those back ends are finally removed from Nginx. Well, I hope you liked the demo, and I'm going to pass the mic to Faisal. Let me just add some final thoughts on the demo. What I showed you is how easy it is to deploy and manage Nginx Plus, especially with the dynamic reconfiguration option that we have. I also showed you health checks in action and the live activity monitoring dashboard. OK, back to you, Faisal. OK, cool. Thanks, Michael. We have about two or three minutes left for any questions you might have. And if you think of anything later, or if we run out of time, we are at booth B20; it's right back there, right behind the Intel booth. We have a question there. We were hoping to have that before the end of this year. Question back there. I'm sorry, I can't hear that last part. That's a good question: is Heat orchestration possible without Nginx Plus? Oh, yeah, definitely. Heat orchestration is part of the OpenStack services, so it's easy to provision your applications with Heat. It's a very useful tool, and you saw how easy it is to deploy an application and a load balancer with it. You do get some advanced functionality with Nginx Plus that makes the integration easier. OK, thank you guys for your time. Like I said, my name is Faisal. This is Michael. We'll be at the Nginx booth, B20, back there. Please come by and see us. Thank you. Thank you.