Hi everyone, I am Akansh, and we'll be discussing Envoy Gateway: an in-depth guide to its benefits, use cases, and features. I forgot the title, sorry. Shivanshu was supposed to be here with me, but because of US visa issues he's not. Apart from that, welcome everyone. First, I just wanted to know how many of you have heard about Envoy Gateway or used Envoy Gateway? A lot. Okay, so the other half of the room, what do you use? Anyone? Okay, not a problem. So let's start with the introduction to Envoy Gateway. The agenda: an introduction to Envoy Gateway, an architecture overview, then a demo by Shivanshu — I'll play a recording of about eight to ten minutes — and then we can have a discussion on its features and how extensible Envoy Gateway is. So what is Envoy Gateway? It's slightly confusing, just a minute. Envoy Gateway is a gateway that manages Envoy proxies — it is a native gateway for Envoy. It is an xDS control plane that dynamically manages your whole fleet of Envoy proxies, wherever and however many you have in the cluster. It provides a batteries-included experience for Envoy proxy: whenever you install or work with Envoy proxies, Envoy Gateway is natively supported, and we can call it a wrapper on Envoy proxies. "Batteries included" is one of the most famous phrases for Envoy Gateway. It provides extensible support for a multitude of applications and gateway use cases. And why would we use Envoy Gateway over any other gateway? It satisfies most of the common gateway use cases. I would say it's not a full-fledged service mesh, but yes, it supports all the common use cases like traffic routing and management.
Apart from that, it is very performant and efficient. It's very extensible: we can have rules, we can have filters, we can attach other extensions as well. We have dynamic configuration for Envoy. It's very secure. And we have support for all the common protocols — UDP, TLS, HTTP, and gRPC. And it has seamless integration with service meshes and other orchestration platforms. So that is a basic introduction to Envoy Gateway. Now let's start with the architecture overview. We have an Envoy Gateway that listens to static and dynamic configuration, and it manages the whole fleet of Envoy proxies in your cluster. A traffic request comes in to the gateway; it redirects it to the proxy, and the proxy then manages the traffic and sends it to your user application, wherever it is. A high-level overview of this diagram: we have a provider at the bottom. The provider listens to all the static and dynamic configuration I mentioned earlier. Then it goes to the translators. The translator splits the configuration into the infra IR and the xDS IR. This is done so that, for example, if you have a Kubernetes resource it will have a Kubernetes listener, and if you have a file resource — probably in the future — it will have a file listener. And we have split it into two parts so that we have control over the public-facing APIs separately from all the internal APIs. Also, we all know that whenever we create a Gateway resource, a new Envoy proxy instance comes up and runs. So we can have numerous Envoy proxies with different configurations running, all managed by a single Envoy Gateway — N Gateway resources managed by a single Envoy Gateway. Now, a little bit about extensions: a lot of extensions to Envoy Gateway are natively supported.
Here we have an example of an OAuth2 filter extension connected to our Envoy Gateway. Coming to the demo — we will continue with the features after it. The demo covers HTTP routing, traffic management, and so on, and then we will continue with all the filters and rules. Yeah, can we increase the volume? Thank you, Akansh, and hi everyone. This is Shivanshu. Thanks for coming today, and let's get started with the demo. There are multiple exciting features in Envoy Gateway, and let's see a couple of them in action. Let's start with HTTP routing. We need to create a GatewayClass which uses the Envoy Gateway controller, and then we need to create a Gateway which references the GatewayClass and exposes a listener. Here we are exposing an HTTP listener on port 80, and then we can configure multiple HTTPRoutes which reference different backend services and attach to the given Gateway — here we are attaching them to the example Gateway. To see this example in action, I have created four clusters; everything is pre-set-up in clusters two, three, and four, and we'll set up everything from scratch in the first cluster. First of all, we need to install Envoy Gateway itself. For that I just need to run this helm install command, which is pretty straightforward. While it's installing Envoy Gateway, let me show you the routes that I'm going to configure. This is the GatewayClass, this is the Gateway with the listener, and then I'm configuring three HTTPRoutes with different services. This route uses hostname example.com and exposes the example service on port 8080.
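The GatewayClass, Gateway, and first route just described might look roughly like this. This is a sketch: the resource names are illustrative, and the exact apiVersion depends on your Gateway API and Envoy Gateway release.

```yaml
# GatewayClass tying Gateways to the Envoy Gateway controller
apiVersion: gateway.networking.k8s.io/v1beta1
kind: GatewayClass
metadata:
  name: eg
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
---
# Gateway exposing an HTTP listener on port 80
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: eg
spec:
  gatewayClassName: eg
  listeners:
    - name: http
      protocol: HTTP
      port: 80
---
# HTTPRoute sending example.com traffic to the example service on 8080
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: example-route
spec:
  parentRefs:
    - name: eg
  hostnames:
    - "example.com"
  rules:
    - backendRefs:
        - name: example-svc   # assumed backend Service name
          port: 8080
```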
Similarly, this route uses hostname foo.example.com with path prefix /login and exposes the foo service on port 8080. And then for the bar service: if the canary header is present, the request goes to the bar canary service, and if there's no header, the request goes to the bar service itself. So let's see if everything is installed. Envoy Gateway is installed; let's check if the CRDs are installed. The required CRDs — the routes, GatewayClass, EnvoyPatchPolicy, and so on — are all installed. Let's go back to deploy our sample applications — okay, I need to be in the right folder — deploy the applications, and deploy the routes themselves. Let's see if the routes are created. All three routes are created. Now, we are using a Minikube cluster, which means there's no external IP for the LoadBalancer service, so I need to start the Minikube tunnel to get a local IP address as the external IP for my LoadBalancer services. Now that the IP is assigned, I can start sending traffic. Let me first send a request with host example.com — and let's see, okay, I need to give the password, cool — and the request is served by the example backend, which is expected, because my rule says if the hostname is example.com, the request is served by the example service. If I send the request with hostname foo.example.com and path prefix /login, the request should come from the foo service, which is correct. But if I remove the path prefix, the request should not be served, because for the foo service there is no rule defined without the path prefix. So this is again working. And if I send a request with host header bar.example.com without any extra header, the request comes from the bar backend; and if I send the request with the canary header, the request should come from the canary deployment — and this is expected, because I've added the header match rule here.
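A sketch of the foo and bar routes just walked through — the Service names and the canary header name are assumptions based on the narration, and the apiVersion depends on your release:

```yaml
# foo.example.com only matches under the /login path prefix
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: foo-route
spec:
  parentRefs:
    - name: eg
  hostnames: ["foo.example.com"]
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /login
      backendRefs:
        - name: foo-svc
          port: 8080
---
# bar.example.com: a canary header selects the canary deployment,
# otherwise the default rule serves from the regular bar service
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: bar-route
spec:
  parentRefs:
    - name: eg
  hostnames: ["bar.example.com"]
  rules:
    - matches:
        - headers:
            - name: env        # hypothetical header name
              value: canary
      backendRefs:
        - name: bar-canary-svc
          port: 8080
    - backendRefs:
        - name: bar-svc
          port: 8080
```

Rules are evaluated so that the more specific header match wins when the header is present; requests without it fall through to the last rule.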
So in summary, it's just as easy as creating the HTTP routes and configuring them properly so that you reference the right backend services and define the right rules. Similarly, we can have request and response headers injected — take a look at this example. Let me switch to another cluster. The second cluster has everything pre-configured so that we can save some time: we have Envoy Gateway and all the routes configured. If I go back to the examples — all right, here I'm just creating a filter which adds a foo header. Now, before sending the request, we again need to tunnel the traffic for Minikube so that it has a public external IP. Okay, now let me again send the request, but in the second cluster, where everything is configured with HTTP routing and header injection enabled. So I see the foo header is there: in my request I am sending something else, but the foo header is coming from the injection. So it's again as easy as applying a filter in your HTTP routes. The third example we are taking today is JWT authentication. To configure JWT authentication, we need to create a SecurityPolicy which attaches to a specific HTTPRoute, and then we can have the JWKS URI configured. For that, let me go back to the terminal and switch to my third cluster, and again tunnel the traffic to get the public IP for my gateway. Let's check if we have the public IP as an external IP. Yeah. Okay, so for JWT authentication, the idea behind the routes I'm configuring is to create a SecurityPolicy which attaches to a given HTTPRoute, and there we can define the exact JWKS URI for validating the JWT token. I have two routes, one foo and one bar, and both send traffic to the backend service with /foo and /bar path prefixes. But the SecurityPolicy is configured only for the foo HTTPRoute, which means if I send traffic to /foo I should provide a token, otherwise the request will be rejected.
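The SecurityPolicy just described could look something like this sketch. The route name and JWKS URL are assumptions; the `gateway.envoyproxy.io/v1alpha1` API group is Envoy Gateway's own extension API, and field names (e.g. `targetRef` vs. `targetRefs`) vary between releases.

```yaml
# SecurityPolicy requiring a valid JWT, attached only to the foo route;
# the bar route has no policy and stays open
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: SecurityPolicy
metadata:
  name: jwt-example
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: foo-route            # assumed route name
  jwt:
    providers:
      - name: example
        remoteJWKS:
          uri: https://example.com/.well-known/jwks.json  # hypothetical JWKS endpoint
```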
And if I'm sending the traffic to /bar, that's okay — the token is not required. So let's first check that the token is there. Okay, the token is exported. If I send the request to /foo without the token, it should get unauthorized access. Yeah — 401 means it's unauthorized. But if I send traffic to /bar, since there is no SecurityPolicy, it should get a 200 OK. Okay. Now, if I again send the traffic, but with the token in the header, it should be 200 OK. So in summary, we can configure JWT authentication; we just need to define a SecurityPolicy which attaches to a given HTTPRoute. In theory it could be any route — it could be a GRPCRoute also — but gRPC doesn't typically use JWT this way, so for HTTP it makes sense. Now, the last thing for today is rate limiting. For configuring rate limiting, all we need to do is configure a BackendTrafficPolicy. The BackendTrafficPolicy again attaches to a given route, and we need to configure the rate limiting with the given request limits and headers. Here I'm configuring the rate limit with a limit of three requests per hour, and the header is x-user-id — meaning only if this particular header is present will the limiting apply. Now let's go back to the terminal and switch to the third — sorry, fourth — cluster, and start the tunneling. Here also everything is pre-set-up. I also have this Redis deployment to cache the rate-limiting state: global rate limiting is enabled in Envoy, and the values are cached in Redis. So the GatewayClass is configured, and all we need to do is: first, have the Redis instance ready; then configure Envoy Gateway to use that Redis instance; and then define the BackendTrafficPolicy. In this given BackendTrafficPolicy, as I said earlier, if the header is present I'm limiting requests to three per hour. So let's try to send the traffic first without the header, and it's all 200 OK.
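A sketch of the BackendTrafficPolicy described above, assuming a header named `x-user-id` (the header name was unclear in the recording) and an illustrative route name; as with SecurityPolicy, the exact schema depends on the Envoy Gateway release:

```yaml
# Global rate limit: 3 requests per hour, applied only when the
# selected header is present on the request; counters live in Redis
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: ratelimit-example
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: example-route        # assumed route name
  rateLimit:
    type: Global
    global:
      rules:
        - clientSelectors:
            - headers:
                - name: x-user-id
                  type: Distinct   # a separate counter per distinct header value
          limit:
            requests: 3
            unit: Hour
```

Global rate limiting also requires the Envoy rate-limit service to be enabled in the Envoy Gateway configuration and pointed at the Redis instance, which is what the pre-setup in the fourth cluster provides.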
And if we send the traffic with the header, I'm expecting only three requests to get accepted, because the BackendTrafficPolicy says only three requests per hour. So if I send four requests in just two seconds, only three should get accepted — let's see. Yeah, the fourth one is denied: the response shows the rate-limit headers and says the request was rate limited. So configuring rate limiting is very easy: you just need to create a BackendTrafficPolicy, attach it to the HTTPRoute, and configure your limits in the rate-limit section. So I think that's it for the demo; I'll hand it back to Akansh to take it from here. Thank you. Yeah, so we saw HTTP routing, header manipulation, JWT authentication, and rate limiting. Now we come to cross-origin resource sharing. For CORS, we attach a SecurityPolicy to our Envoy Gateway. Here we do some filtering over headers and methods: in this demo, if we have x-header-1 or x-header-2, then the traffic will be routed to the service; otherwise it will not be. So that is cross-origin resource sharing. Moving to gRPC routing: gRPC routing is recently supported, and it is very similar to HTTP routing. We can see here the service is being handled on port 9000, and it is not a lot different from HTTP routing. Coming to HTTP redirects, here we use some rules and filters. In this demo, on the basis of the status code — if the status code is 301 — the whole traffic will be redirected to port 443 of the hostname example.com. We can also have multi-cluster service routing. In multi-cluster service routing, we use the multi-cluster services API of Kubernetes, where we can export service A from cluster A and import it into cluster B. Here you can see we have made an HTTPRoute rule where we use the backend service of cluster A on port 3000, routing from our cluster B. Next we have HTTPRoute request mirroring.
Request mirroring is nothing complicated: a request comes in, and it can go to two backend services. So let's say we have one request coming in; it will be served on port 8080, and then we have made a new rule which mirrors it to a new service on port 3000 as well. Next we have HTTPRoute traffic splitting. We can do traffic splitting on the basis of rules or filters. In this example we have made a rule that the traffic will be split: 50% of traffic goes to port 3000 of backend one, and 50% of traffic goes to port 3000 of backend two. So we can do traffic splitting as well. This is the most exciting part for me: proxy observability. In the 0.6.0 release of Envoy Gateway we have proxy observability as well. We can have Envoy access logs, and we can look into metrics and traces with Grafana and Loki. So we can have a lot of observability into our Envoy proxies, and it was not available before — so it's very exciting for me right now. That's the end of the presentation. We have a couple of minutes, I think — do we have any questions? Yeah, so if I'm not giving weights to anything, the splitting will be done 50/50. If I set weights like 90/10 or something like that, then it will split on that basis. I have no idea — I am very sorry, we can get back to you on that; I need to check the docs. Yeah, so he gave the answer — I did not have an answer for that problem. Anyone else? Okay. Could you quickly repeat the question? Sorry? Okay, sure. So can you tell me your name? Yeah — so, Dave, you asked why we would use Envoy Gateway over any other gateway, like Istio, right? My answer would be: I prefer Envoy Gateway over Istio because I am not a very big organization where I have to have a whole service mesh configured.
I can have just a gateway with traffic routing or splitting, and that gets my work done. It is very lightweight, very extensible, and it supports all the major protocols I'm using — UDP, TLS, gRPC if I need it, and HTTP. Apart from that, it is batteries included — it is a wrapper over Envoy proxies, and we can manage all our Envoy proxies with one single Envoy Gateway. So I would choose it because it's lightweight; if I had to go with a whole service mesh, I would go with Istio. Yeah — for Redis. So Redis was being used for rate limiting, right? Okay, so you want to know the configuration — how we are maintaining that configuration? Mainly I also use Lua, but I think we have added support for Redis now. If you want, we can discuss this afterwards as well. Yeah, so I would like to reiterate my point about Envoy Gateway being a wrapper: Envoy Gateway is very lightweight, and we can consider it a wrapper over Envoy proxies, so the connection between Envoy Gateway and the Envoy proxies is very natural. With Contour or, as Dave mentioned, Istio, I would have to maintain another set of configurations for the Envoy proxies or sidecars. That is why I would use Envoy Gateway; but if the use case is more nuanced or more extended, then I would use something else. Yeah, exactly. But Envoy is natively supported, and that is why I would use Envoy Gateway. I can't speak to any individual project, but I know a lot of the folks who've built these kinds of things are working together, and there has been talk of a lot of these projects rebasing essentially onto Envoy Gateway as a sort of commodity layer, eventually. Obviously I can't speak for any company's or any other project's roadmap, but that's definitely been discussed, right? The similarities. Yes, please. As much as I know, yes.
Yeah, there's a resource type where you can essentially punch through the abstraction and provide Envoy configuration directly. Yeah, maybe stay tuned. Okay, anything else? Cool, thank you very much. Thank you.