Alright, so I'm Jared Held, I'm from Fidelity, and I'm here to present an end-user use case for Envoy to you today. I'm here with Matt Turner from Tetrate, who's going to talk a little bit about some of the work they're doing as well with Envoy Gateway. First off, the standard disclaimer: this is not an endorsement of any vendors or anything along those lines. So yep, I'm Jared Held, a Principal Software Engineer; I've been with Fidelity for the last 10 years. I actually started off with APIs when I first joined Fidelity as an intern. Since then I've moved around a couple of business units, and I eventually rejoined the API platform team when we set out to modernize the platform from the different systems we'd been using previously.

And I'm a software engineer at Tetrate. I do a lot of community-facing open-source work, so I've been really interested in Envoy Gateway, Wasm, and wazero, those kinds of projects, and in the ways we've been helping Fidelity out, because a lot of it is based on that open-source work, or is going to be, and that's what I'm going to talk about later.

So we've obviously gone on a long journey with our API platform at Fidelity, from a variety of different vendors to doing things homegrown. We've always had a number of objectives when building out an API platform: looking for specific features, figuring out how to perform everything at scale, and wanting to do this in an open-source-friendly manner. Fidelity has always looked to innovate as it's gone through the years.
I mean, from when they first purchased computers, to when they launched the home page for people to trade stocks and mutual funds, all the way to our cloud journey and deploying applications today. At Fidelity we've had a couple of different vendors and a couple of different solutions over the years. Some were on-prem, some were cloud-based SaaS, but each solution had a slightly different issue, right? It could be that each business unit had its own gateway. It could be that it wasn't particularly cloud-friendly. It could be difficult to upgrade because it was split out across different business units. There was a variety of different issues. We tried to solve that with a SaaS gateway, and that solved some of them: we were able to deploy things automatically without having to go and manually configure things, and it was friendlier. But it didn't quite do everything we needed, and it didn't quite conform to Fidelity's requirements. So we started looking for a new solution, and we wanted to do that with open source. We had a few objectives. We wanted a relatively simple gateway; we didn't want anything heavyweight, and obviously Envoy is pretty lightweight. We wanted it to be highly resilient, so that if any component failed we wouldn't have downtime, and we'd be able to continue to serve our customers. We wanted it to have some features built in, like quota and security, that we wouldn't have to redevelop on our own. And we wanted best-in-class performance at minimal cost.
So that's where our whole FinOps optimization comes in: figuring out, if we use Envoy, are we going to be able to run faster and at a lower cost? And obviously, the answer was yes. We also wanted it to be flexible: with the filter types Envoy provides, we'd be able to build out the different features necessary to run our gateway. And we had some different languages we wanted to support as we developed those features. So we started looking at a variety of different solutions: vendor-based, open-source, cloud-based, on-prem, trying to figure out the proper solution. We went through an evaluation about two years ago, when we started this whole journey to figure out how we were going to build a new gateway, and ultimately we did arrive at Envoy. When we evaluated Envoy, we realized there's a lot going for it. There's the observability and tracing we've heard about today; a lot of solutions didn't have that level of depth. There's dynamic configuration via xDS, which we use in our own control plane, where you can configure Envoy dynamically and in real time. There's the FinOps optimization, as I mentioned, to reduce our costs across all these deployments. There's multi-language filter support, so whether a filter is written in C++ or Lua or Go, we'd be able to support it. And we wanted something close to the service mesh; Envoy is obviously very close to that, so once we built out this gateway, maybe we could do something with service mesh as well.
REST, gRPC, WebSockets, webhooks: we wanted support for all of those. Additionally, we wanted to modernize our ingress, from the edge through to our internals, and obviously we wanted to use this gateway on our internals as well. And as I mentioned, Envoy has top-in-class performance. When we chose Envoy, though, we still had a problem: okay, we've chosen a runtime for our gateway, but what are we going to do for the rest of it? We needed to build out a management plane and a control plane, and that's what we proceeded to do over the last two years. What you're seeing on screen here is something we call internally the Stratum API platform. It provides a number of different things. As you can see, there's the multiple-protocol support, and we've built a variety of different tools to support this, as well as our own management UI, where you can self-service deploy an API, based on an OAS spec, have it appear on the Envoy gateway itself, and configure your API to use certain filters. We've built a whole bunch of different tools and capabilities into the platform to best serve our customers' needs and our development teams' needs. The idea behind this is that, regardless of the underlying technologies, the API lifecycle stays the same across whatever product you end up using; we wanted that to be stable no matter what we did. One of the things we consolidated on early was to continue using OAS 3 contracts, ensuring that every API uses an OAS 3 contract on deployment, so we have the full definition when we go to deploy. It's fully self-service, right? We have both platform-managed APIs, and we're able to discover non-platform APIs as well.
Things that aren't strictly running through a gateway today. And we're using domain modeling as well, to make sure we can classify those APIs appropriately. We also have the API tooling I mentioned before: a test studio we built out, which allows us to test the APIs after they've been deployed; different SDKs; a CLI you can use to get information about which APIs are running, and things like that. And we've built a variety of other tools, like breaking-change validation, where if you've deployed an API, you want to make sure it's not going to break an existing deployment before you've actually deployed it, as well as things like compliance APIs, where we can ensure that what you're deploying doesn't violate a security standard, or a particular rule about which verbs are being used, or whatever that may be. As part of that, what you're seeing on screen here is a simplified diagram of what we've actually built out. There's the management plane, which allows us to publish those APIs. We have an API-publishing API, so no matter which CI/CD solution you're using, you can call that API to publish, and that flows into a database we have deployed. You can additionally deploy an API via the Stratum UI, so it's fully self-service either via CI/CD or through the UI. From there, we have the xDS control plane. The control plane consumes that information from an API which links to the Stratum database, so on bootstrap we can pull up the list of APIs that people have published, no matter whether our gateways are in AWS or Azure or on-prem or wherever that may be.
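The publish-then-bootstrap flow described here can be sketched in a few lines of Go. To be clear, this is a hypothetical, simplified illustration, not the real Stratum code: the actual platform persists to a database and serves Envoy via xDS snapshots, and every name below (`Registry`, `Publish`, `Bootstrap`, `APISpec`) is invented for the example.

```go
package main

import (
	"fmt"
	"sort"
	"sync"
)

// APISpec is a stand-in for a published OAS 3 contract plus routing metadata.
type APISpec struct {
	Name    string // e.g. "accounts-v1"
	Route   string // path prefix the gateway should match
	Version int    // bumped on every publish
}

// Registry mimics the management plane's store: the publishing API writes
// into it, and the control plane reads the full list on bootstrap.
type Registry struct {
	mu    sync.RWMutex
	specs map[string]APISpec
}

func NewRegistry() *Registry {
	return &Registry{specs: make(map[string]APISpec)}
}

// Publish is what a CI/CD pipeline or the UI would call.
func (r *Registry) Publish(name, route string) APISpec {
	r.mu.Lock()
	defer r.mu.Unlock()
	s := r.specs[name]
	s.Name, s.Route = name, route
	s.Version++ // re-publishing the same API yields a new config version
	r.specs[name] = s
	return s
}

// Bootstrap is what the control plane would do on startup: pull every
// published API so it can build the initial Envoy configuration.
func (r *Registry) Bootstrap() []APISpec {
	r.mu.RLock()
	defer r.mu.RUnlock()
	out := make([]APISpec, 0, len(r.specs))
	for _, s := range r.specs {
		out = append(out, s)
	}
	sort.Slice(out, func(i, j int) bool { return out[i].Name < out[j].Name })
	return out
}

func main() {
	reg := NewRegistry()
	reg.Publish("accounts-v1", "/accounts")
	reg.Publish("trades-v1", "/trades")
	reg.Publish("accounts-v1", "/accounts") // redeploy bumps the version
	for _, s := range reg.Bootstrap() {
		fmt.Printf("%s %s v%d\n", s.Name, s.Route, s.Version)
	}
}
```

In the real system the snapshot handed to Envoy would be built with go-control-plane rather than printed, but the shape of the flow is the same: write on publish, read everything on bootstrap, version on change.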
Additionally, one of the things we're going to be doing is integrating that xDS control plane with Kafka, so that when somebody publishes an API, it flows automatically, in real time, into the gateway. No latency, no waiting after you publish your API before you're able to actually call it. Once that configuration goes in, Envoy consumes it, along with all the other components we have, so you're able to call your APIs through the gateway. One of the other things we wanted to do is really modernize how we're doing things, for both the edge and internally. We first started our journey on the internal side, building out the gateways internally, making sure everything functioned correctly, ran quickly, supported all the different business use cases, adding the different authorization filters, and supporting different protocols like gRPC and REST. Then more recently, we've also been focusing on edge modernization. As you can see on the left-hand side, we were coming through a firewall, then a SaaS gateway, and then in the DMZ there's a variety of different solutions lined up; that's how things worked previously. What we're doing now is deploying Envoy in the DMZ, so you have that singular ingress point, that singular hole through the DMZ to call through. It increases security, and it makes sure our infrastructure is all consolidated into one particular pattern. As we built out that gateway, we had a decision to make as to how we were going to implement the different filters, the different features we were going to need on the gateway.
And we did an evaluation. We came across the different filters you see in front of you; this is by no means all of them. When we did our evaluation, we typically broke it down like this: okay, we could build C++ filters, but as you may know, it can be difficult to find a talented C++ developer, it can be hard to retain them, and we'd additionally have to build the Envoy binary ourselves. So we really wanted to avoid that unless we were doing something that required the fastest speed possible. We then moved on to looking at Lua and Wasm, and some of the external processing filters and everything along those lines. For a large majority of our filters we actually use Lua, because it's a very simple interface to program against, with a very simple hook into Envoy. We use it for some of our simpler filters, and additionally to make callouts to external authorization APIs. More recently, we've been moving into the ext_authz filter type for certain APIs, and the external processing filter. We haven't gotten into Wasm too much yet, but we are evaluating it as it's matured a bit, and we're very excited about the future of Wasm on Envoy as well. One thing we've done is to ensure we can meet the different use cases the business units have; they might have different filter requirements. We deploy the Envoy gateway in a decentralized model, and we have something we call the orchestration filter. When people deploy their APIs, they can specify a tag, essentially, that lets Envoy know which filter flow to actually execute. One API may require validating a JWT.
One may be an opaque token, one may be an API key, and one may need to sign a JWT later down the line or generate a different token. Who knows what it may be, right? There are a lot of different use cases out there, and we wanted to make sure we could support them; that's where this orchestration filter comes in. The orchestration filter is essentially a Lua filter that interprets that tag, then uses it in combination with composite filters to execute those filters on a conditional basis. That's effectively what it's doing. So it's really good when there are common patterns and use cases. But we did have a bit of a problem, something we're trying to solve: it's not the best fit when we have something that's very API-specific. In the case of, say, a vendor who needs to make a call into Fidelity, they might supply a different kind of token, or do a different type of signature, something along those lines. As a simple example, there were four different APIs onboarded recently. One of them used IP allowlisting only, one used only API keys, and then there were two different types of signature validation coming through as well: one where the signature was just in a header, and one where it was in a JWT, so you need to validate the JWT and then the signature inside of it. So you can have a wide variety of use cases, and in that particular situation you have to build a filter for each one and add it to the filter chain. We're trying to figure out a way to solve that. While we were doing so, we also saw a really big increase in performance. What you're seeing on screen here is a comparison against some of the other gateways we had been using internally.
We're really seeing about a three-to-four-times performance increase using Envoy, and we know we can get faster still. What you're seeing is that, running on a single pod instance, the old gateway was only able to run about 300 TPS, while with Envoy we were able to run 1,000. Part of that is due to the fact that we use Lua filters, and as we transition over to Wasm and other filter types, we do expect it to get faster as well. So one of the things we're looking to do, for that whole use case I mentioned where different APIs require authentication mechanisms very specific to them, is to make Envoy a bit more flexible. We're building something we're calling either the policy engine or the function engine, naming to be determined. Essentially, using a configuration-based approach, it lets you apply different functions in a sidecar: things like grabbing the JWT out of a header, grabbing the signature within the body of the JWT, and hashing the body to check that signature, in addition to actually validating the JWT using something like the JWT authentication filter. That way we have a more flexible approach to support the very different APIs and vendors out there. It's one of the things we're really looking forward to, and we're also hoping that those of our developers within Fidelity who don't know Envoy at all can use these policies without ever having to actually touch Envoy, or even know that it's running underneath. Cool, thanks.
So I'm just going to talk briefly about some of the work that Tetrate and Fidelity are going to do in this area together in the future, and about a lot of the stuff happening in the open-source arena, because a lot of this is going to be based on that, and I think it's pretty exciting. I'm juggling laptops here, so the production values might not be perfect. But yeah, I think we've done a lot of great work together so far, and in a lot of these areas we were ahead of the market. I mean, the external processing stuff was done before that was an official thing, right? It was kind of kludged in, but it worked. In a lot of those places the market has now caught up, and I think we can take advantage of that, and anybody else in the same position can take advantage of that stuff too. One of the first things I want to talk about is the Envoy Gateway project, which you probably heard talked about this morning. I think it's fair to say a lot of the stuff we built was quite difficult, or at least long-winded, to build, and of course now it's got to be maintained. So one of the things Fidelity could obviously do is move onto Envoy Gateway. The core of its control plane is the go-control-plane library, the same one Monzo's custom service uses, the same one Fidelity's stuff uses. The core of this is 100% tested code, and while Tetrate is proud to have a couple of the main contributors to Envoy Gateway, it's an open project with a big community backing it, so it's not going anywhere. By adopting that, I think it's an obvious win: Fidelity won't have to be their own vendor, right? And I think the other big area of innovation that's really going to help everybody is Wasm.
So you talked about all of the extensions you put into Envoy, making it into a full-featured API gateway, and all those functions everybody needs: JWT validation, OIDC, rate limiting, schema validation, the whole lot. All of that can be added to Envoy in various ways, as we saw, but obviously Wasm is probably the way to do it going forwards. I think by doing that together, we can build a whole ecosystem of these plugins that turn Envoy into a fully featured API gateway to rival the best commercial ones, which I'm sure a lot of people here have used because they needed the features, but maybe had some bad experiences with. And if we manage Envoy through the Envoy Gateway control plane, that becomes really easy to do as well, because Envoy Gateway implements the Gateway API, the new emerging standard way to configure all the ingress controllers you might ever have in Kubernetes. We're working on extensions to that: making the API modular and pluggable, and having it be able to model all of these more advanced API gateway features. So as people in the community write these plugins, each with an API gateway feature, maybe a certain type of signature validation, we can get a module, a component of the Gateway API, that models it and makes it configurable in a standard way. No more jumping into the old sort of annotations on the Ingress object. And as you said, Wasm just makes a lot of this a no-brainer. That flowchart you had was perfectly valid twelve months ago, but hopefully the answer going forward is just going to be: Wasm. Wasm is the answer. Right. So all that really leaves is the language wars of what we write and compile into Wasm.
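For anyone who hasn't seen the Gateway API yet, the baseline it standardizes looks something like the following minimal HTTPRoute (all names here are illustrative, not from a real deployment); the extension work described above is about layering richer API-gateway features onto this same declarative model.

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: accounts-route        # illustrative name
spec:
  parentRefs:
    - name: edge-gateway      # a Gateway managed by, e.g., Envoy Gateway
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /accounts
      backendRefs:
        - name: accounts-svc  # the backing Kubernetes Service
          port: 8080
```

Because this resource is portable across conforming implementations, configuration expressed this way isn't tied to any one ingress controller.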
And, you know, if your colleagues are too stupid to realize that Rust is the right answer all the time — if you don't like Rust, then you're not only wrong, but there's a whole bunch of other things you can use. TinyGo is something I think is not on a lot of people's radars: it's a subset of the Go language, with its own compiler, that compiles down to Wasm quite nicely. So if you've got an infrastructure team that's really comfortable with Go, that's a good thing to use. But jokes aside, I think the Wasm stuff addresses a bunch of real-world concerns. It is a cool tech, it does have a lot of hype, but this isn't just a science experiment. We've seen how well the performance has increased already by moving to Lua, and as you say, that's only going to get better when we move to Wasm, especially if you're writing in a cool low-level language with zero-cost abstractions and no GC. But even Rust aside, there's another project we've all been working on in the community: wazero, which is a Wasm VM written in Go. You can now host Wasm inside a Go program, so you could use that to write, say, a test harness for an Envoy plugin, or, maybe more relevantly, the external processing service you talked about. A lot of the stuff Fidelity does is actually going to have to stay in an external processing filter, because it needs to make a bunch of network calls you just can't do from Wasm plugins inside Envoy. It needs to talk to IdPs and cert providers and stuff, I think, doesn't it? So some of that is going to have to stay external. Obviously, Jared talked about building that system out to be this sort of DSL-based thing that people who aren't familiar with Envoy can use.
That, for example — and I'm riffing here on what your actual plan is — but if anybody else is in a similar position, you could write something like that in Go, and you could make it itself pluggable using wazero to host Wasm extensions inside it. So yeah, I don't think I have any more slides; that's basically all I was going to say. There's a bunch of stuff going on in the community around Envoy and Wasm, especially in Go. There's a bunch of design docs out there, a bunch of open-source repos. We'd love to see everybody get involved, because there must be a whole lot of people with very similar requirements, and I think we can all come together and build something awesome.

So, yeah, are there any questions at this time? [Audience question.] I wouldn't say that we know explicitly that it's the Lua filters. We chose Lua originally because it's simple and it runs relatively fast, but I can't say that it's explicitly that. We're still learning today, even as it is, exactly how to configure Envoy in the best ways. As everybody knows, Envoy can be a little complex, so I'm sure there's probably something we've missed that we could use to help improve that performance as well. But we do know that Wasm really would help improve on that performance. So I wouldn't explicitly say it's the Lua filters; we've been very happy with them so far. Oh, and in case anybody couldn't hear, the question was basically: do we know that it's explicitly the Lua filters keeping us at 1,000 TPS? So, we've used a couple of different filter types within Fidelity; I didn't really cover them all. But there's generally been a trend toward using Lua internally, simply because of how easy it is to integrate. It just has the Envoy on-request and on-response hooks.
It's very simple, plain as day how you implement it, right? Additionally, it can make the callouts that are necessary via its HTTP call support, which is one of the wider use cases we see: somebody making a call to an API they may need for authorization. And then there are just things that might be simpler to do within Envoy as well. When it comes to other, more complex use cases — like if we need to sign a JWT; we do some re-signing of JWTs with specific keys, essentially, to go to specific targets — before the external processing filter really came into its own, we built out our own C++ filter, which made a gRPC connection to a sidecar to do that kind of JWT signing. So yeah, it's a mixture of Lua and some homegrown C++, and we're also going to be using the external processing filter today as well, for things like the policy engine; we're going to be seeing a lot more use of the external processing filter. Thank you, gentlemen. Thank you.