Good afternoon. My name is A.J. Hunyady. I'm a product manager for NGINX, and in this session I'll be speaking about microservices patterns with NGINX, and how they relate to the Istio service mesh architecture. This is a bit of a loaded session, and I do have a demo at the end to show you how NGINX works within an Istio infrastructure. So I need to pray to the demo gods. Have you guys seen any of the demos yet? Am I the first one? Oh my gosh, all right, so I have to pray really hard for this one. I'll try to split the session in two: roughly the first 15 minutes for the background material, where I'll cover who the company is and some of the reference architectures we've built for microservices infrastructures, and then the demo. So how many of you are NGINX users? Show of hands. All right, well, I hope not to bore everyone here, and it may be interesting. So, first slide: what NGINX the company is. For those of you who don't know, NGINX has been around for quite a while. It started before 2004, when our founder and CTO, Igor Sysoev, tried to solve the C10K problem: the ability to improve the number of concurrent sessions a web server can handle. At that point, web servers could handle maybe hundreds of concurrent connections. With NGINX, he was able to bring that number to tens of thousands. He open-sourced the first version in 2004. Fast-forward about seven years: clearly it was well received by the community, and he could no longer manage the ecosystem on his own. So he and a couple of friends got together and created a company, called NGINX Inc. Two years after they created the company, we built a commercial product, NGINX Plus, which helps us keep the open source going.
It's worth mentioning that around 80% of the functionality we're building, even today, goes into open source first. Hence, it's great to be part of this conference: whatever I'm presenting today is open source. Fast-forward to today: NGINX is a globally distributed company. We are backed by first-class VCs, NEA being one of them. We have offices around the globe: our headquarters is in San Francisco, with offices in the UK, Ireland, Moscow, and Singapore. We have over 1,200 commercial customers, and our technology is driving over 330 million sites. As a follow-on to that, it's really interesting to note that around 40% of all AWS instances are powered by NGINX. And if you look at the trends, we are still a growing company. About 62% of the top 10,000 websites are actually running NGINX. We also lead the charge on the top 100,000 websites, and we're gaining on the top 1 million websites; we anticipate that around 2018 we will be on over 50% of the top 1 million websites. So you may be asking what helped us gain such tremendous momentum in the marketplace, because it's one thing to have a small-footprint infrastructure, but some trend has to shift in the market. We believe that trend was really the adoption of microservices. If you take a look at what's happening in the market: NGINX has a very small footprint, it's fast, and it enables you to scale out horizontally quite rapidly without using a lot of resources on the system. The market recognized that: we are the top-downloaded container on Docker Hub, in terms of both number of pulls and number of stars. But with microservices, the infrastructure itself is changing. If you look at what's happening in the marketplace, first it was the packaging, portability, and the ability to share containers across ecosystems; Docker has done a tremendous job covering that. The second part is really the orchestration aspect.
That's where you take the containers, distribute them across the ecosystem, and then need to stitch them together into microservices-based applications. I would argue that Kubernetes has done an outstanding job at building a solid orchestrator that can spray containers across the ecosystem. But orchestration alone, as you know because you're part of this conference, is simply not sufficient. You have to build a tremendous amount of custom tooling in order to stitch the applications together, to run a CI/CD pipeline, to run multiple versions side by side, to monitor the ecosystem, and, last but not least, to secure it. And all that has to happen in a dynamic context. So this is where the service mesh comes in. It enables you to stitch those services together, to provide a high level of security, to monitor them effectively, and to run multiple applications side by side across this ecosystem. NGINX has been offering various microservices architectures to help vendors move from a simple deployment, maybe a legacy deployment where they have just one edge load balancer, to more sophisticated deployments where they actually care about the east-west traffic. So let me take you through the progression. Initially, we introduced the proxy model. Maybe I should ask, before I go forward: how many people are familiar with some of the reference architectures NGINX has been pitching for the last several years? That's pretty good, quite a show of hands. So, the proxy model is probably the simplest of the four shown here. It has you deploy an NGINX instance at the edge, and in this case you probably only deal with traffic that comes into your network and goes out of your network.
You allow the services to communicate with each other, so you don't really get in the way. In the proxy model, if you have an edge load balancer such as an F5 and you're not happy with how that's working out for you, because it's very hard to make changes to it, we see customers taking NGINX and placing it in front of the F5 load balancer, which enables you to route workloads based on application payload. The second is the router mesh. The router mesh is where you're taking a monolithic application and beginning to peel it off into microservices. In that case, you end up in a situation where you have different environments working together: maybe half of your applications are actually VM-based, and the other half you're moving into microservices. In that case you'll have an edge load balancer that handles the incoming traffic, and then you spread the traffic inside the microservices environment for management. For instance, if you have a Kubernetes environment running side by side with a VM environment, you can split the traffic at the edge and stitch in an ingress controller in the Kubernetes environment to do the dispatching of your applications. Now, the third iteration comes when you're interested not only in the north-south traffic at the edge and simple Kubernetes deployments, but in managing the traffic between the services themselves. In that situation, we recommend the fabric model, where you deploy an NGINX instance alongside every one of your microservices, and these NGINX instances communicate with each other and also handle service discovery. It enables you to stitch together services very much like Istio does, but without a control plane. Now, the fourth iteration is where Istio comes in.
It's the same fabric model, with the ability to stitch together services, but also with a control plane, and the rest of the presentation today is going to be around the service mesh. So in the service mesh, if you take a look at what happens under the hood, you have a control plane, as was described in various other sessions today. In the Istio control plane itself, you have three components: Pilot, Mixer, and Auth. It enables you to provision the service mesh data plane, to monitor it, and to secure it. Since that's pretty well described elsewhere, I'm going to spend most of the time looking at the data plane. On the data plane side, you actually have two implementation choices. One is the service itself plus the proxy in a single container, where you deploy a product such as Unit, which we announced back in September. Unit enables you to run your workload within the same container that handles the mesh traffic, so in a sense you're deploying just one element in your network; you don't have to deploy a sidecar. The second implementation pattern, which is the one I'm going to describe here, is when you run the service and the proxy side by side, in a sidecar environment. So what have we done at NGINX? As I said, we have four patterns that we've recommended to our customers, and the last one is the one that comes with a control plane you can manage yourself, instead of building a lot of custom tooling: you now have the option to deploy Istio with NGINX as the service proxy. So in the demo I'm going to show, we're going to be using NGINX as the sidecar proxy instead of the standard proxy that comes with an Istio environment.
And some people, mainly the non-NGINX users, have asked us: why use NGINX as a sidecar proxy? Well, there are several reasons. Number one, it's battle-tested: it's been around for 13 years, and there are a lot of use cases it has handled. It has powerful configuration directives, over 650 of them. The current Istio implementation exposes some of those patterns, but there is a lot more to be done, and we're going to show you how the configurations are stitched together within a service mesh. We also have a highly programmable interface: you can easily add Lua, and you can use nginScript, to program the proxy itself. And last but not least, there's strong community backing with many third-party modules; in the demo I'm going to show you one of those integrations, with Zipkin, from our partners at LightStep. In terms of the architecture, what does the implementation look like? This probably looks pretty familiar to you: it's really the Istio control plane running with an NGINX sidecar proxy. So instead of deploying Envoy, you're deploying NGINX. We want to make sure that what we do is compatible with the Istio adapters, so it's pretty much transparent, except for the installer that brings in the sidecar for you; the whole thing should look pretty much the same from the user's perspective. We want to be as transparent as possible, so the sidecar injection should happen just as it would in a stock Istio deployment. And then we have support for routing rules, various policies, mTLS, monitoring, and tracing. As for the proxy implementation: we have an agent that is deployed together with the NGINX container in the sidecar. The agent does the translation from Auth and from Pilot and generates the NGINX configuration files.
And you can deploy NGINX as an HTTP proxy as well as a transparent TCP proxy. Then we built a pluggable module, actually written in Rust, that communicates with Mixer, so we're able to send the telemetry information and you can ingest it the same way as you would have done using the Envoy proxy. There is one roadmap item which some of you have already requested and we've taken note of, which is gRPC. Right now it's in preview; it should be released to customers in Q1 2018. Another thing that I believe is very important: this is an open source project, and you can visit it today on GitHub under the project name nginMesh. It has various components; nginmesh is the core component, and it has an agent that runs alongside, so you can make modifications to it if you'd like to augment it with additional directives. Right now it does a bare-bones installation that enables you to run Istio as you would with Envoy. It's beta quality, and it's compatible with Istio 0.2.12. We encourage you to participate: we're looking for contributors, and we're looking to address use cases you may have that you'd like to see NGINX running with. So now it's time for a demo. Okay, did I shift? Work? No, that didn't. There we go. So, what I've done is pre-stage this environment with a couple of things. First, let me go to the top; I have a demo script to keep me in check. Let me show you what I'm going to show you today. We're going to install the nginMesh initializer in order to swap out the default proxy. We're going to deploy the Bookinfo demo app; if you've seen the Istio presentation, it's really the same demo app. We're trying to make sure that we're compatible, so you're going to see how things happen under the hood while running the same application. We'll examine the NGINX sidecar, and we'll route all traffic to v1.
We'll only run one use case now, to save time. And we'll also demonstrate monitoring with Grafana and Prometheus, as well as Zipkin. There are a few things I did before getting started: I downloaded Istio 0.2.12 and nginMesh from our nginMesh repository, and I also installed Kubernetes. So let's see; there shouldn't be anything here, but I want to make sure the command comes up. Okay, there are no resources found. Let's take a look at the namespaces. As you may observe, it's a default installation. It has the default namespace, where the application will be running, and it has istio-system; let's take a look at that. This is a fresh installation of Istio, except for the initializer, which I'm going to do separately. You have the Istio certificate-related pieces, egress, ingress, Mixer, Pilot. You also have Prometheus and Grafana for monitoring, as well as Zipkin, which currently runs independently; it has not yet been integrated into Mixer, so we're going to run it as such. So let's go ahead and follow the first step and install the nginMesh initializer. What this does is tell Istio that it should install NGINX. Let's take a quick look at this file. It tells Istio that it should install NGINX, which is nginMesh for us: it should pick it up from the Docker repository and install it as the sidecar proxy. You have two images: the init image and the debug image; for this demo we're going to be using the debug one. Okay, so that's the only change I have to make before installing the Bookinfo app; pretty much everything else should be transparent and compatible with what Istio does today. So let's go ahead and publish the Bookinfo app. What this does is go ahead and provision the pods. Take a look: they're currently running. That was pretty fast.
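To picture what that initializer swap amounts to, here is a rough sketch of an Istio 0.2-style injection ConfigMap pointed at the nginMesh images. This is an illustration, not the actual file from the repo: the keys, namespace, and image tags are assumptions based on how Istio 0.2's initializer-based injection was configured.

```yaml
# Illustrative sketch only: an Istio 0.2-style initializer ConfigMap that
# points sidecar injection at the nginMesh images instead of Envoy.
# Keys and image tags are assumptions, not copied from the nginMesh repo.
apiVersion: v1
kind: ConfigMap
metadata:
  name: istio-inject
  namespace: istio-system
data:
  config: |
    policy: enabled
    initImage: nginmesh/proxy_init:0.2.12    # sets up iptables redirection
    proxyImage: nginmesh/proxy_debug:0.2.12  # the NGINX sidecar (debug build)
```

With a ConfigMap like this in place, pods created afterwards get the NGINX sidecar injected automatically, which is why the rest of the demo deploys Bookinfo unmodified.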
What you may notice is that each of these pods is actually running two containers. In terms of the overall structure of the Bookinfo app, maybe I should run it to see what it looks like. Before I do that, I have to figure out what the IP address is. So this is the Bookinfo app, and if I refresh, I should see different types of reviews: this is version one, version two, and version three. That gives you a little more context on what's happening here: you have various pods, one details pod, one product page pod, one ratings pod, and three versions of the reviews pod. So let's take a look at the product page. What does it look like in the container? kubectl describe pod product... okay, let me copy-paste this a little faster. If you look at what's happening in that pod, you have two containers. You have the product page, where the business logic (the service logic) for the product page itself lives, and then you have the istio-proxy. This is the nginMesh proxy, which we're currently running. So let's go ahead and exec into the container, istio-proxy. Okay, we're inside the container. Now, listing the processes, you can see that the NGINX processes are currently running: you have a master process and you have the worker processes. If you take a look at the config file, the configuration has several things in it. One is a loadable module for the Mixer integration; this is the Rust module I described. And then you have OpenTracing for Zipkin, which was done by the LightStep guys. For those of you familiar with the http configuration block: you have opentracing on, and a bunch of OpenTracing configuration, and then you also see the Mixer server configuration. All these changes have been pushed down to us by Pilot; the agent has digested them and created this configuration file.
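To make the pieces just described concrete, here is a rough sketch of what such a generated sidecar nginx.conf could look like. Only the overall shape comes from the talk (a loadable Mixer module, OpenTracing for Zipkin, files written by the agent); the specific module paths and directive names below are assumptions for illustration.

```nginx
# Sketch of a generated sidecar config; module paths and directive names
# are illustrative, not copied from a live nginMesh pod.
load_module modules/ngx_http_istio_mixer_module.so;  # Rust module reporting to Mixer

http {
    # OpenTracing integration, sending spans to Zipkin
    opentracing on;
    opentracing_load_tracer /usr/local/lib/libzipkin_opentracing.so
                            /etc/istio/zipkin-config.json;

    # Mixer endpoint pushed down by Pilot and written out by the agent
    mixer_server istio-mixer.istio-system;

    include /etc/istio/ingress.conf;  # inbound (pod-local) listeners
    include /etc/istio/egress.conf;   # outbound (per-service) listeners
}
```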
One last thing, at a very high level: we also generate configuration files for the ingress and egress ports. Let's take a look at that. Okay. By convention, there are two types of files out here. Some of the files are for internal consumption by Istio, but the files that are interesting for us are the ones managing the Bookinfo app, which happens to run on port 9080. By convention, everything that has to do with the ingress ports is going to have an IP address (the IP address of the pod) and port 9080. So you look at http 10.36.2.31, and you see that it has an upstream which serves the traffic from the sidecar proxy to localhost port 9080. And it's listening on a port that is provisioned by Istio in the iptables rules to push traffic to localhost at port 20007. And then it has the Mixer configuration that has also been pushed by Pilot, which we've configured automatically. Okay, and then the other one, the egress part. If I look at the listen directives, you're going to see that we have multiple server blocks: one for details, one for the product page (that's the default server), one for ratings, and one for reviews. And on the reviews side, the configuration file has an upstream, which is auto-generated, and in that upstream you see that you have three reviews servers. This is the reason why, when I reload the reviews page, you see three different reviews pages, not one; those are the directives in the egress config controlling that. So if I do this again, you see that I get multiple reviews: version one, two, and three. So let me go ahead now and change the routing to push everything into a single reviews version, v1.
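A minimal sketch of the kind of auto-generated egress block being described, with three reviews backends. The talk gives the pod-IP-plus-9080 convention and the iptables-provisioned local port; the concrete addresses and the listen port below are made up for the example.

```nginx
# Illustrative auto-generated egress config for the reviews service;
# the addresses and the listen port are made up for this example.
upstream reviews.default.svc.cluster.local {
    server 10.36.2.32:9080;  # reviews-v1
    server 10.36.2.33:9080;  # reviews-v2
    server 10.36.2.34:9080;  # reviews-v3
}

server {
    listen 127.0.0.1:20008;  # illustrative local port provisioned via iptables
    server_name reviews.default.svc.cluster.local;

    location / {
        proxy_pass http://reviews.default.svc.cluster.local;
    }
}
```

When the route rule in the next step is applied, the agent rewrites this upstream down to a single server line, which is exactly the change shown in the demo.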
I can do that by running istioctl with the route rule from the Istio Bookinfo samples, routing all traffic to v1. Oops. Oh, of course, I need to use create. Here we go. Now, if you take a look at what happened in NGINX, reloading this file, you see that the upstream has been modified, and it now contains only one IP address and port number. So if you go back to the application itself, you can see that I can do multiple refreshes and only version one is being shown. Now let's go ahead and change this again and remove the route rules. Oops. And I reload the file again. What I'm trying to show here is that Istio passes the information through the normal channel, and NGINX picks it up and creates its configuration files completely transparently. So you don't have to make any other modifications in order to swap out the default proxy for the NGINX proxy. Now, as you saw earlier in the demo, I have provisioned Grafana and Zipkin, so I'd like to show you how they operate. To do that, I have my cheat sheet here, because I need to open the local port. So I'm going to open a local port for Grafana. Since it's been running in the background, I would expect to see some graphs already; this was deployed in the Istio configuration file. So I pick the Istio dashboard. Here it is, and you see graphs showing up. If I refresh the page a few times, just to show this is real and I didn't just make them up, you should see a spike in the graphs. Here we go, you see the graphs coming in. Okay, and for extra credit, let's see that Zipkin also works. Once again, open the port with kubectl, go to Zipkin, find traces, and here they are. Now you can trace the requests that are moving in from the ingress side all the way through the system.
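For reference, the rule applied above corresponds to the route-all-to-v1 rule shipped in the Istio 0.2 Bookinfo samples. The reviews entry looks roughly like the following; the sample path in the comment and the exact metadata may differ slightly by release.

```yaml
# Istio 0.2-era RouteRule sending all reviews traffic to version v1,
# created with something like:
#   istioctl create -f samples/bookinfo/kube/route-rule-all-v1.yaml
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: reviews-default
spec:
  destination:
    name: reviews
  precedence: 1
  route:
  - labels:
      version: v1
```

Deleting this rule (istioctl delete on the same file) restores the default round-robin across all three versions, which is the second half of the demo step.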
So you can find out whether you have any delays or anything else. That's pretty much it from the demo side. Wrapping up the presentation: once again, it's open source, it's beta quality, and we'd love to have your input. That was all I had for today, and I'm opening it up for questions. To restate the question: what are the things that nginMesh does that the default proxy does not provide? The goal for us was to keep everything backward compatible, meaning we want to make sure that whatever we do in NGINX doesn't break anything that Istio provides, so we try to stay on par. At this point it doesn't do things differently, because that was never the goal. Moving forward, we want to start exposing NGINX directives, such as single sign-on and caching. We have several dozen third-party libraries, and you've seen one of them with Zipkin. We want to expose a lot of others, but we need to do that in a way that's compatible with Istio, so we need to start mapping them at the top level and then building them down into the system, and that requires some code changes. So if you're interested in some of that, and you'd like to accomplish a use case with NGINX, we're more than happy to take feedback and build it out. At this stage, we do reload on config updates, but we are very tactical about it: if all we're changing is the upstream, we can do it through the API. And keep in mind the way NGINX operates: when it reloads the configuration it does a soft reload, so the workers continue processing traffic, and we can decide how quickly and how aggressively we want to age out those workers. Does that answer your question? It's a soft reload. Go ahead. Did we have to make any changes to Istio Pilot to enable this work? No, we did not have to make any changes to Istio Pilot.
We're actually mapping the commands that come to us through the agent, and then we create our configuration files that way. Can you please repeat the question? I don't think I understood it. JWT authentication? I mean, I have the architect who built it here, so maybe it's one of the roadmap items. Go ahead. Is it possible to build TLS channels in between, with mTLS? That one I know the answer to: it's a roadmap item for us, and it really depends on the gRPC code that is coming in Q1 of next year. We're running some trials on it, so we should be able to do that in Q1 of 2018. Yes, it will be available in open source; everything you've seen today is open source. We haven't used any NGINX Plus functionality, so the container is running open source NGINX. As for integrating a Passenger application: it should be transparent, because, as you've seen, if you have a Kubernetes YAML file that you're trying to push, just like Bookinfo, you push it in, and nginMesh will digest it and create the configuration files for you, so you can run it transparently. In a sense, it's going to stitch up the security channels for you as well, so if you have multiple services, they'll communicate with each other just as in any other Istio app. Yeah, please. Oh, okay, I got it. So, you mentioned that as part of the deployment I looked at the service to pick up the IP address? Yes; we are running this all in Google Cloud, so it's provided by the environment, and everything is done live. It's an environment that we set up, and it's visible; you could probably hard-code the IP yourself. So it's provided by Google. Okay. How difficult was it to write... I'm sorry? The Mixer client, for us? I would say it's taken us a few months, so it's not a simple exercise. You're free to take a look; it's open source, it's in nginMesh.
You can open it up and see how it works; here is the architect, and you can have a conversation with him offline. He's the one, along with a small team with Michael and a few other people, writing it. So thank you very much. It was a pleasure to be here.