Hello, my name is Herman Lukoff. I work for JP Morgan, and I'm super happy to be here. Before I forget: if there is one thing I want to get across today, it is really a thank you to the community. Your work is so useful for us, in so many and sometimes really unexpected ways. And that's really the main topic of my talk. So this talk is not so much about technical wizardry; it's really a perspective from an application team at the very end of the Envoy food chain. I want to talk about a few application patterns where Envoy is very, very helpful for us in doing our job. Since I work in the financial industry, we have a very healthy sense of paranoia when it comes to security. So security is front and center, and I want to talk a little bit about how Envoy helps us meet our company-internal security standards. But I also want to go beyond that and talk a little bit about other application patterns where we utilize Envoy, and where it has been, again, very, very useful for us. So for us, it's really not only about, I don't know, service mesh and the things that are in the current discussion. For us, Envoy is really helpful in some more down-to-earth scenarios as well. If we look at our company-wide security policy, one very generic pattern is segmenting our service landscape into zones. These zones came into place for different reasons. Sometimes there are historic reasons: applications developed 10 or 15 years ago on different platforms, so they are physically located in different clusters and different server farms. Some applications have different security requirements, regulatory requirements, compliance requirements, so they are separated out. And if you have an application that wants to consume these services, you have to go through a certain sequence of steps. So if you start downstream, we have requests coming in human-to-app, and we have app-to-app on the right side.
We have requests coming in with different levels of authentication, through different IDPs. So we have a huge landscape at JP Morgan with half a dozen IDPs: OIDC-based ones as well as older, legacy IDPs. And we have to deal with all these kinds of requests coming in with different kinds of tokens. If we start downstream, we get a request coming into zone one, and the first thing we need to do is validate this token. Of course we can use a proxy, Envoy, here. But we also need to actually exchange this token before we send it upstream. And going from level to level, we of course go through firewalls, and we have a very tightly regulated process for opening up firewalls. So really the story is: if you go from zone to zone, you have to do that revalidation every time, and if you start from zone zero, you have to do that token exchange. So the question is really how we can do that in an efficient way. Going a little bit more into the details: we see a request coming in with a JWT, assuming that request has already been authenticated. And then of course, once the request hits Envoy, it goes through the filter chain, and I just want to point out two different filters here. The first is the JWT validation filter, which is extremely helpful for us because we have all these different IDPs and different JWT configurations that we have to validate. So we can handle all of that without code, just with configuration. And then the second step is really that token exchange, and that's where we're using the external processing filter, which I think is a pretty recent addition to the filter family. This filter has been really, really helpful for us. It allows us to create a gRPC server that communicates with the Envoy core, and all our token-handling and token-exchange logic is part of that gRPC server. So we have different scenarios.
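As a rough sketch, the two filters mentioned above can be wired into the HTTP filter chain like this. The provider name, issuer, JWKS URI, and cluster names are hypothetical placeholders; the filter names and type URLs are the standard Envoy v3 ones:

```yaml
http_filters:
  # Step 1: validate incoming JWTs per IDP; configuration only, no code.
  - name: envoy.filters.http.jwt_authn
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.http.jwt_authn.v3.JwtAuthentication
      providers:
        corp_idp:                      # one provider entry per IDP
          issuer: "https://idp.example.com"
          remote_jwks:
            http_uri:
              uri: "https://idp.example.com/.well-known/jwks.json"
              cluster: idp_jwks
              timeout: 5s
      rules:
        - match: { prefix: "/" }
          requires: { provider_name: corp_idp }
  # Step 2: hand the request to an external gRPC server that performs
  # the token exchange before the request goes upstream.
  - name: envoy.filters.http.ext_proc
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.http.ext_proc.v3.ExternalProcessor
      grpc_service:
        envoy_grpc: { cluster_name: token_exchange }
  - name: envoy.filters.http.router
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
```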
We might want to create a new token from a different IDP, or we have custom self-signed JWTs; it really depends on the use case. But we can include all these different variations very, very conveniently in that gRPC server. What is really great is that Envoy supports UDS, Unix domain sockets. And the really nice thing about it is that it's so simple to set up. I just want to point out the Go line of code here: it's literally one line of code that you have to change to go from a TCP socket to UDS, and in the Envoy configuration it is very similar. And UDS is just a little bit more performant, a little bit more secure, a little bit more resource-efficient. Since all our requests go through Envoy, that's very, very helpful. Looking a little bit more into this gRPC server: if I look at our requirements at my company, I think there's actually a potential to streamline this further, maybe with some kind of Envoy filter not only for the validation, but actually for the token exchange. And we have actually started to think about a way to parameterize token exchange, making it configurable. For instance, defining a configuration for how claims from the incoming JWT flow into the outgoing JWT, or caching, etc. So this is something we are thinking about quite intensely right now: making it more configurable and more generalized, to make it usable for a broader audience. But there are also other use cases I briefly want to touch upon. They might look very simple, but they are really, really useful. The first one I want to mention is cloud migration. So we started out in a private-cloud Kubernetes environment, and we have been moving our services incrementally from private cloud to public cloud.
And it's almost trivial, but of course having all the traffic go through Envoy and being able to configure Envoy so conveniently to shift traffic from left to right has been tremendously helpful in managing this migration, which was a half-year effort. A little bit more interesting is a scenario where we actually unexpectedly saw a lot of value in Envoy. We got a request from our product team saying: hey, we want to create a demo system for our application. Which, by the way, is Story by JPMorgan, our one-stop shop for commercial real estate investors at JP Morgan. So if you own, I don't know, commercial real estate property, please stop by. The demo system is actually very similar to the real system, but it has certain restrictions: a user gets invited to it with a custom JWT; certain requests are redirected so that static data is served; certain POST calls, that is, modifying or writing calls, are denied. And these are all things you can handle very conveniently at the configuration level with Envoy. So we ended up being able to implement these features very, very quickly, with minimal changes to the original application code, which was great. And the third one I want to mention is more about our own development lifecycle. We have multiple teams working on our applications, more than 100 developers right now working on that site. We have different application teams, and we also have what we call a foundation team that builds services used across the different applications. We have been looking into ways to horizontally and also vertically separate these teams so that they don't step on each other's toes. And what we came up with is an approach where each team gets its own Envoy. For instance, application team one gets a team-one Envoy.
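As an illustration of the kind of configuration-only restrictions described above (route prefixes, file paths, and cluster names are made up for this sketch), the demo Envoy can deny writes and serve canned data directly from route configuration:

```yaml
routes:
  # Deny modifying/writing calls in the demo system.
  - match:
      prefix: "/api/"
      headers:
        - name: ":method"
          string_match: { exact: "POST" }
    direct_response:
      status: 403
      body: { inline_string: "writes are disabled in the demo" }
  # Serve static demo data instead of hitting the real backend.
  - match: { prefix: "/api/market-data" }
    direct_response:
      status: 200
      body: { filename: "/etc/envoy/demo/market-data.json" }
  # Everything else goes to the regular application cluster.
  - match: { prefix: "/" }
    route: { cluster: app_backend }
```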
And they can actually route their traffic to their own space, which in our case is a Kubernetes namespace with pre-deployed services. The team can then route these requests to different feature-branch deployments or to more mature versions in the foundation space. Similarly, team two has its own Envoy and can do the same thing on the right side. And finally we have the main application Envoy, which points to the versions of the services, in the different application teams as well as the foundation space, that have passed the full quality gates. That has been really, really useful for separating these development efforts and giving every team a kind of safe space where they can experiment without disrupting the work of other teams. So to conclude: Envoy has been really, really useful for us, sometimes in really unexpected ways. But there are also challenges, and let me actually talk about the challenges first. The one thing I really want to mention first and foremost is the complexity of configuration. I guess it's easily underestimated in a community that is so deeply steeped in Envoy, but for normal application developers it can be really challenging to do even simple things in an Envoy configuration. And since everything goes through Envoy, it's also very easy to break things very quickly and very fundamentally. So for us it has been a real challenge to find a way to regulate how we deal with our main Envoy configuration, which is by now more than 1,200 lines of YAML, and that by itself is already a challenge. So we thought about different approaches.
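The per-team setup described above can be sketched in route configuration; all cluster and path names here are hypothetical. Team one's Envoy might send feature-branch traffic to a deployment in its own namespace and everything else to the stable foundation services:

```yaml
route_config:
  virtual_hosts:
    - name: team_one
      domains: ["*"]
      routes:
        # Experimental endpoints go to a feature-branch deployment
        # in team one's own namespace.
        - match: { prefix: "/api/feature-x/" }
          route: { cluster: team1_feature_branch }
        # Everything else goes to the quality-gated foundation services.
        - match: { prefix: "/" }
          route: { cluster: foundation_stable }
```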
So right now we're primarily looking into building a simple CLI that allows us to execute day-to-day commands like adding a service, removing a service, or rerouting a service, really trying to avoid having a normal developer touch the main Envoy YAML file, which has become, I don't know, almost like a sacred cow in our development environment. So this is really the main problem for us, and I'm really curious to see whether any other person or team has similar issues, or proposals for getting around them. But the benefits are really very, very significant and very helpful for us. And for us, I think it's not really about "wow, Envoy is amazing and mind-bending." It's really all the small, well-thought-out things coming together, whether it's the filters, the routing rules, the configurability, the extensibility; it's really the overall package that makes it so valuable for us. And again, for us as a team in financial services, it all starts and ends with security, and for dealing with our company-internal security requirements, Envoy is just priceless. We're using a lot of the filters that are available there. For us, it's first and foremost what I would call the trinity of security-related filters: the JWT validation, the external authorization, which we're using very extensively, and the external processing, with our own way of injecting our tokens. And looking beyond my own team, which again is an application team, into our broader company community, I think there's a huge potential, which is very easily overlooked, to standardize reverse-proxy and security settings with proxies. Until recently, our official recommendation was to use Zuul or Spring Cloud implementations, and we have dozens, if not a three-digit number, of different custom implementations.
So I think using Envoy, with that level of extensibility where you can really put in all the app-specific or company-specific aspects, is a very appealing pattern for us. And what we would like to see moving forward is a lot of things, but I really want to limit it to one point. There's a lot of discussion about Envoy as an ingress controller, and all the things that we do with the filters, we could easily put directly into an Envoy ingress controller, if only the filters were actually exposed. I don't know the current state of the discussion, whether that's something being considered as a possibility. I think with Istio there's a possibility to actually include filters, which is a little bit complicated, but still possible. So that would be something very, very useful for us: we wouldn't need our own Envoy instance anymore and could hook directly into the Envoy ingress controller. So yeah, that's all I have. Again, thank you so much for all your hard work. It's great for us. Again, it's not really at the forefront. I mean, Matt is saying Envoy is getting boring. I don't get calls on Monday morning from my CTO saying, "Herman, how is it going with our reverse-proxy strategy?" It's really something much more a matter of sustainable engineering, and I think in that way it's been great for us. Thank you very much. Yes, we looked into WebAssembly. So the question is whether we have looked into WebAssembly, and yes, we have. For us, WebAssembly is a little bit problematic because the languages that we're using, mainly Java and Go, are not that well supported, particularly if you want to use libraries. So for us, the gRPC server approach was much more accessible and usable. You had a question? Yeah, so to be honest, we didn't really do detailed measurements.
So it's more like, I would say, common sense: if you can cut out the network layer in the inter-process communication, it will be faster, but I don't really have numbers. What I can say, though, is that we initially had some weird problems with the external processing filter before using UDS. We were running out of sockets, which was strange, because there is a setting where you can set the socket threshold. And all these problems went away completely the moment we switched to UDS. We've been running this filter in production for more than half a year without any hiccups, so we're super happy with that. Yeah. Oh, sorry. Yeah, so the question is whether we run our own control plane and how users push configurations to Envoy. That's exactly the kind of challenge that we have. Right now our Envoy is just a Kubernetes pod, and we're running a static ConfigMap configuration. Of course, changes to this configuration go through Git and PRs, but it is really a weak spot for us right now, I would say. And the more developers we onboard onto our foundational layer, the more this problem grows. So this is really something where we are looking for inspiration. Yes, exactly: 1,200 lines of YAML right now. And yeah, it's almost self-evident that this is not a good state. We actually expect to go from 100 to 200 developers in the next one or two years, so it will only grow. That's why we're thinking about a custom CLI that might abstract things away, or maybe a custom CRD that allows us to abstract things and bring application-specific semantics into that CRD. But yeah, it's still an open wound for us, or a sore spot, definitely. Yeah, so the question is how we push our configurations into our runtime system: are we doing hard restarts, or are we just pushing the configuration? In our case, we just push the configuration into Kubernetes and do a rolling update.
But we don't use any of the fancier mechanisms. Yeah. Because we really started simple; I think in our team it might also be a matter of skill sets and experience. So again, we're really an application team. We had some folks who had an initial interest in Envoy and got into it, but I would say 95% of our developers don't have any knowledge of it; Envoy doesn't have much visibility to them. So yeah, we might look into it, but right now our primary concern was to get something up and running quickly and implement our patterns. Over there. So the question is how Envoy fits into our security posture, and whether we do anything with a WAF or the like. For us, perimeter security is handled by our infrastructure. We have a company-internal layer that manages all of that for any kind of application within JP Morgan. So the answer is no, we don't use Envoy there, at least to my knowledge; those folks are very secretive, so I don't actually know exactly what they're using. For us, Envoy really comes into play when traffic hits our Kubernetes cluster. Again, thank you very much, and it's great to be here.