So, thanks for joining us today. Kevin and I will be talking about unleashing declarative data access with GraphQL in your service mesh. Lynn gave us an excellent introduction, but just to clarify who is who: yes, I'm Sai, and I'm Kevin, and we won't waste any more of your time. So, a little bit about who we are. We're solo.io. You might have attended some of our talks earlier today. We're a small startup, or not-so-small startup anymore, in Cambridge that was started in 2017. Over the last five years, we've learned from helping many customers adopt service mesh, among them numerous Fortune 500 companies and two of the five top telecoms. Speaking of telecom companies, now that you know who we are, Kevin, do you mind introducing us to the audience? Sure, yes. So you, the audience today, are Wayne Telecom, a telecom company based in Gotham City. You're the largest provider in Gotham City and require highly available, low-latency communication with services from your web clients. So stick with us as we lead you on our journey to spend less of your billions on cloud infrastructure and wasted developer time, and revolutionize your communication stack. Trust me, it's going to be fun. So, where do we stand today? We're hemorrhaging money, plain and simple. We've got a mobile app, and our users are reporting slow loading times. Our developer teams are understaffed, and front end teams are experiencing friction working with back end teams. Let's take a look at a couple of potential solutions. So, just a concrete example here: this is our REST API, a sample page from our billing application. This page loads all the phone plans that the current user is paying for. In order to get adequate information for this plan summary page, we must make multiple REST API calls per page. Your back ends are microservices split off into various business groups.
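To make the plan summary page concrete, the call sequence might look something like this (hypothetical endpoints, invented here for illustration; the real API paths weren't shown in the transcript):

```
GET /users/me                 -> { userId, name, ... }
GET /users/{userId}/plans     -> [ planId, planId, ... ]
GET /plans/{planId}           -> { name, price, dataLimit, ... }  (one call per plan)
```

Each later call depends on IDs returned by an earlier one, which is exactly what produces the request waterfall discussed next.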
And so, making multiple calls to microservices and aggregating on the back end is required to get all the data our mobile app needs. Note that we've got the user ID and plan IDs, and some of those user IDs and plan IDs are used in other API calls. A lot of this won't be new, but just to highlight and make sure we're on the same page: one of the problems behind slow loading times is that we can have request waterfalls. The problem here is that we make a REST API call, and we need to wait for that response on the client before we can make the next request to the back end. Geographically, our origin server might be on the west coast, and our mobile app might be in the eastern hemisphere. So it takes 50 milliseconds to traverse the world; we take the response and do it again, and we've just doubled how long it takes to make our API calls. So, BFF. This is a paradigm I'm sure many of us are already familiar with, but one of the potential solutions here is to develop a back end for front end. We can solve some of our latency issues by putting an intermediary service in front of our back end microservices that's responsible for aggregating those results. So we send one single request, to a service specialized for each client: our mobile app might have one back end for front end, and our desktop app might have a different back end for front end. Both of those services will then go ahead and actually make requests to the origin servers in your network and send it all back. Thus, we only pay the cost of traversing the world once, 50 milliseconds, rather than twice. Just for a more complete picture here, we often have a CDN or load balancer. Since we're trying to optimize for speed here, a CDN is going to help us, but it isn't going to fix everything. We've obviously got to worry about HTTP PUTs and POSTs; if you have any mutation, then caching isn't going to solve the problem for us. So, this is where GraphQL comes in.
We can solve request waterfalls and large payloads, and improve developer efficiency, with GraphQL. And that, by the way, is our logo, or our little mascot Gloo. I try to make these little costumes for Gloo, and this is just one of them: fat Gloo. So, yes, thank you. If our front end leverages GraphQL, it only has to create one GraphQL query to get all the data it needs. Hopefully the text is big enough for you to read, but that's one single GraphQL query, one HTTP request that the front end is going to make to a GraphQL server. And this will give us back all the data that the front end needs in one response. So, we can transform our entire communication stack within the back end to use GraphQL and receive some major benefits. First, GraphQL returns exactly what we need and nothing more. This means that we can ask for exactly the data we want from the GraphQL back end and get back exactly what we asked for, nothing more, nothing less. Second, we can fetch data across different resources with a single query and have all that data aggregated by the GraphQL server into one response. This obviously gets rid of the problem of having to make multiple requests for one page of data. And third, GraphQL provides first-class support for a typed schema. We're familiar with specifications like OpenAPI and gRPC protobufs for describing what data is available from a back end service, but GraphQL has first-class support for a typed schema built into the language itself. So, in our case, we have that query that our mobile application is sending to our GraphQL server. On the left, we have our GraphQL query, and on the right, we have the GraphQL type schema, which aligns perfectly with the query we're sending. I'm going to go ahead and just browse through the query here, and you'll see that every part of the query aligns with a specific type from the GraphQL schema. So, what's the point of this?
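The slide itself isn't reproduced in this transcript, so as a rough reconstruction (field and type names invented for illustration), the single query and the typed schema it aligns with might look something like:

```graphql
# One HTTP request replaces the whole REST waterfall (hypothetical shape)
query PlanSummary {
  user(id: "wayne-123") {
    fullName
    plans {
      name
      price
      dataLimitGb
    }
  }
}

# The matching typed schema the server publishes
type Query {
  user(id: ID!): User
}

type User {
  fullName: String!
  plans: [Plan!]!
}

type Plan {
  name: String!
  price: Float!
  dataLimitGb: Int!
}
```

Every field in the query corresponds to a field on a type in the schema, which is what makes tooling-driven browsing and validation possible.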
Well, the GraphQL schema can now be accessed with amazing developer tools such as GraphQL Playground here. Developers can now browse the schema as easily as browsing the internet. You can ask for whatever data you want from the schema, and you can get it via a great UI like this. And when it comes to baking it into your application, you just have to copy and paste the query into your application. So, Kevin, now that we've seen the numerous benefits of GraphQL, where do we stand with Wayne Telecom? Yeah, so, brief GraphQL intro aside, let's zoom back to where we are with Wayne Telecom. We've implemented a GraphQL server. On the left here we have a sample set of queries, and on the right we have a sample resolver. Resolvers are usually defined in code, and here we've just kind of hard-coded it. But just as a reminder, we have app teams that are responsible for maintaining these back end services, and those service owners are creating the resolvers. A platform team is now going to own that GraphQL gateway. But our front end devs are happy and productive. Our front end is snappy, it loads quickly, and their developer experience is improved. We've removed the friction of working between back end and front end teams by basically doing a back end for front end via GraphQL. So on the front end, we're definitely happy. Continuing forward: this is a service mesh conference, ServiceMeshCon, and we haven't mentioned the words service mesh yet. So, we're in the telecom industry, and as we see commonly with our customers, we're heavily regulated. This means that we require things like zero-trust networking, so we just picked the de facto use case here: we need mTLS between all of our services. We also want things like rate limiting and caching; rate limiting is pictured on the right. So now, this is our current Wayne Telecom infrastructure.
We've got our single Kubernetes cluster with Istio installed. We have an Istio ingress gateway where our traffic enters. From there we forward that to our GraphQL server, which is our back end for front end and talks to all of our back end services as required. And you'll also note that, for example, the users and plans services might talk to each other directly, and that's also encrypted via mTLS via Istio. So this is our current picture of things. Now I'd like to take a second and zoom in on those first two pieces: the ingress gateway, an Envoy proxy for Istio, then our GraphQL server, and then our back end microservices. This is the layer where we implement certain policies like external auth, rate limiting, WAF, and caching; all of these are just Envoy filters on the filter chain. Again, we're securing our network at the edge, and we do that because we have to. We don't want this to live in the GraphQL server, but we do want the benefits of GraphQL, or any back end for front end implementation. Now, just reminding everyone, maybe it's already apparent to some of you here, but GraphQL is a single endpoint behind our proxy here. That means we need to be identity-aware, and there are a couple of outcomes here. The first one is that we need to be aware of things like authorization: since it's a single endpoint, we still need to figure out who is querying that endpoint. There are two different ways we could think about solving this kind of problem. On the left, we could just delegate the problem: take the request headers, for example an authorization header, and pass them through to our back end service, and let our back end service reject the request if we're not authorized. That works in many cases, but not all. And the other solution is to go ahead and code it into our server.
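The "mTLS between all of our services" requirement mentioned above is, in stock Istio, a single declarative policy; a minimal sketch (standard Istio API, applied mesh-wide here):

```yaml
# Require mTLS for all workloads in the mesh (mesh-wide because this
# resource lives in the Istio root namespace, istio-system by default)
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
```

Scoping the same resource to a single namespace or workload selector narrows the policy if a mesh-wide STRICT rollout is too aggressive.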
And so we also see use cases where our customers go ahead and instrument their GraphQL server with identity-aware logic like this, enforcing authentication. It moves beyond just platform concerns like authentication, though; you also have to worry about things like circuit breaking and failover. And some of those things, specifically circuit breaking and failover, we really have to do on the node that serves GraphQL, right? If we want to actively figure out which endpoints we can reach from where we live, that concern can't be delegated to, hypothetically, another proxy behind GraphQL, unless that server lives co-located on that node. So at some level, at some point, we have to re-instrument some platform concerns in our GraphQL server. Moving forward: we've already invested in our GraphQL infrastructure here to help our front end teams, and we've seen that the developer experience has improved. So why might we want to leverage it internally as well? Certainly, we don't need to worry about request waterfalls within our internal network, but we can still decrease bandwidth if we want to, or have an improved developer experience. And we've already invested in our GraphQL infrastructure, so it feels free to leverage. So you see here that the user service could just talk to the GraphQL gateway and make requests that get served there. Not that you have to, but it's an option. So, just a quick checkpoint of where we're at right now with Wayne Telecom. We've solved a bunch of problems, and created a couple as well. So far we've increased front end dev efficiency, we've reduced overfetching, we've naturally implemented a back end for front end, and we've got Istio to enforce zero-trust networking. And of course, we've gotten rid of crime in Gotham City. But also, have we? We've created a couple more problems.
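Rather than coding authentication checks into the GraphQL server as in the example just described, the same check can be pushed down to the proxy with standard Istio resources; a sketch (the issuer and JWKS URL are made up for illustration):

```yaml
# Validate JWTs at the gateway/sidecar, before the GraphQL server sees them
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: graphql-jwt
  namespace: default
spec:
  selector:
    matchLabels:
      app: graphql-gateway
  jwtRules:
  - issuer: "https://auth.wayne-telecom.example"        # hypothetical issuer
    jwksUri: "https://auth.wayne-telecom.example/jwks"  # hypothetical JWKS endpoint
---
# Reject requests that carry no valid token at all
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: graphql-require-jwt
  namespace: default
spec:
  selector:
    matchLabels:
      app: graphql-gateway
  action: ALLOW
  rules:
  - from:
    - source:
        requestPrincipals: ["*"]
```

This keeps authentication a platform concern at the proxy layer, which is exactly the separation the talk is arguing for.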
Our platform team has to maintain a new GraphQL gateway. Each app developer team that wants to support this GraphQL gateway has to implement and maintain their own set of resolvers, and they have to instrument some platform concerns into those resolvers to ensure that we have a reliable service. That's pretty much the summary. So yeah, we've gone ahead and solved a bunch of problems and made our front end developers happy, but there's a bunch of other problems. The Joker is still at large. And most importantly, the back end team is unhappy. So there must be a better way to solve this for our back end team as well as our front end team. This is our current architecture, and we see that the network proxy is a separate deployment, a separate pod, from our GraphQL server. And of course, this means that we have to manage two different deployments, and different teams must manage different pods. But what if we went ahead and merged those two? You might have seen this coming, given that this was in Idit's slide earlier, but now we're proposing a new model: the proxy is the GraphQL server. Essentially, we're merging the responsibilities of the platform and application networking teams into a single component. So here's a quick summary of the benefits. First, it completely eliminates the developmental and operational expense of maintaining a separate application deployment for your GraphQL server. There's no additional network hop between the proxy and the GraphQL server, because now the GraphQL server is just in memory in the proxy. And the GraphQL capabilities are based on declarative configuration, which you now use to manage Istio and Envoy just like the rest of your cloud native infrastructure. This means you're fully compatible with CI/CD workflows and GitOps. We can also go ahead and leverage the existing capabilities that Solo has provided within Envoy.
So there's no need to integrate with third-party libraries to create resolvers or GraphQL-specific policies. Now, we've mentioned these policies multiple times: WAF (web application firewall), caching, external auth, and rate limiting. That's all built into your proxy along with GraphQL. So those are a couple of the benefits. But you might have noticed that we secretly converted these existing services to GraphQL without doing anything. It's not magic, though; it seems like magic, but it's not. Each of these services exposes a specification: REST exposes OpenAPI, gRPC exposes protobufs via reflection, and SOAP exposes WSDL. So we can use each of these specifications and transform them into GraphQL schemas automatically. I'm going to go back a slide because I missed one point: there is an existing library which does this, completely open source, in JavaScript, called GraphQL Mesh. The folks over at The Guild have implemented that, and it's a great project. However, now that we're baking the GraphQL server into the proxy, why not also bake this discovery logic into the proxy as well? By putting a sidecar containing this discovery and translation logic next to each service, we're able to transform incoming GraphQL requests into requests that each application understands in its own protocol. Effectively, we've now converted all of our services into GraphQL services just by including them in the mesh. So we're going to zoom out a little bit and go back to the back end for front end example. The sidecar gateway model opens up a bunch of new possibilities, and one of them is a more efficient back end for front end architecture. We can see we have three back end for front ends here, but these are actually virtual back end for front ends: they live within the proxy on different routes. So they're just one deployment, still only one thing that you have to manage.
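As a rough illustration of the spec-to-schema translation just described (types invented here; the exact mapping rules are up to the implementation), a gRPC service definition could be projected into GraphQL like this:

```graphql
# Hypothetical gRPC service, discovered via server reflection:
#   service Users {
#     rpc GetUser(GetUserRequest) returns (User);
#   }
#   message User { string username = 1; string first_name = 2; }
#
# ...projected automatically into a GraphQL schema:
type Query {
  GetUser(username: String): User
}

type User {
  username: String
  firstName: String
}
```

The same idea applies to OpenAPI for REST and WSDL for SOAP: the existing machine-readable spec carries enough type information to generate the schema without touching application code.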
And each of these can make GraphQL requests to different services and do their own application logic. Yep. So not only is this more resource efficient, but it's also going to be easier to use for us as maintainers of our GraphQL platform or gateway here. Pictured here is some Istio declarative configuration; again, going back to ServiceMeshCon, Envoy is the service mesh proxy for Istio, and we get to leverage the fact that the GraphQL server is built into it to do things that we couldn't do before. For example, completely for free: on the left here you'll see a typical YAML. This YAML itself is not particularly interesting, but the interesting part is that it's the circuit breaking config that you might have already applied to certain destinations in your cluster. Because we've built the GraphQL server into Envoy, in C++, on the filter chain, and we reuse Envoy's cluster routing implementation, we get circuit-breaking awareness for free. If certain clusters are already unhealthy, Envoy is already aware and we're already routing around them. No user configuration or intelligence in your GraphQL server required. Further, we can extend this to things that are still easier than they used to be, but not completely free. On the right, we might have a sample authorization policy. You can apply authorization, say external auth, on certain destinations going towards the back end. Because again, this is built into Envoy, we could build an internal listener to ourselves, and then rerun that authorization logic without traversing the network stack, without going down and out, all in user space, and just forward it along. We can do that more quickly than we could before with a separate deployment. So now we're back to our original solution: we have our old programmatic configuration for resolvers.
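The circuit breaking YAML on that slide is presumably an ordinary Istio DestinationRule; a minimal sketch (host name and thresholds invented for illustration):

```yaml
# Standard Istio circuit-breaking config; the embedded GraphQL server
# inherits this automatically because it resolves fields through
# Envoy's own cluster and routing machinery
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: plans-circuit-breaker
spec:
  host: plans.default.svc.cluster.local   # hypothetical back end service
  trafficPolicy:
    connectionPool:
      http:
        http1MaxPendingRequests: 100
        maxRequestsPerConnection: 10
    outlierDetection:
      consecutive5xxErrors: 5
      interval: 30s
      baseEjectionTime: 60s
```

Nothing here is GraphQL-specific, which is the point: policy that already exists for the mesh applies to GraphQL resolution for free.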
Here again, we're instrumenting platform concerns into our resolver, in this case authentication, but it's not limited to authentication. I'm sure some of you have written GraphQL resolvers yourself. If you have, you've probably also used JavaScript libraries like DataLoader to handle batching and caching. Those kinds of things all end up in your resolvers and make them verbose and complex. Ideally you want them thin, but necessarily these concerns creep in. Compare that to what we'd like to propose for our GraphQL configuration: our resolver config here is entirely declarative. You can see on the bottom we have our schema definition; I hope it's big enough. You have your GraphQL query, then the information behind that query, and a custom directive; note the @resolve directive with the name of the plans query resolver. And then we have an executable schema, which defines how to actually resolve that field. So we're saying we have a query to get plans, and to resolve it, we make a REST request with that header, with that path, to that destination. It's a custom string interpolation language that we came up with, but you can see that, for example, the args reference is grabbing from the GraphQL arguments. If you're familiar with writing GraphQL resolvers, you could also grab from the parent object or other primitives you're used to seeing, so you can get dynamic information at runtime as part of your GraphQL resolution. But the important part is that it's declarative, it's very simple, and we haven't instrumented a ton of platform concerns into our resolver configuration. So now I'll go ahead and put it all together. On the top right, you'll see the YAML that we just looked at. This YAML could have been generated per service.
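A sketch of what such declarative resolver config can look like, modeled loosely on Gloo's GraphQL API (the CRD shape and field names here may differ from the actual product; the upstream name and path are invented):

```yaml
# Declarative GraphQL schema plus resolver: no resolver code,
# and no platform concerns baked into the resolution logic
apiVersion: graphql.gloo.solo.io/v1beta1
kind: GraphQLApi
metadata:
  name: plans-graphql
spec:
  executableSchema:
    schemaDefinition: |
      type Query {
        plans(userId: ID!): [Plan] @resolve(name: "Query|plans")
      }
      type Plan {
        name: String
        price: Float
      }
    executor:
      local:
        resolutions:
          Query|plans:
            restResolver:
              upstreamRef:
                name: plans-service       # hypothetical upstream
                namespace: gloo-system
              request:
                headers:
                  # string interpolation pulls the userId GraphQL argument
                  :path: /users/{$args.userId}/plans
```

The `{$args.userId}` reference is the string interpolation mentioned in the talk: dynamic values come from GraphQL arguments (or the parent object) at resolution time.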
So rather than being defined by a user, we can discover and create this from the back end service that it fronts, from the WSDL or the OpenAPI or the Swagger spec. We can generate it using logic very similar to GraphQL Mesh and provide it via the control plane, Istiod or what have you. On the bottom, we just have another one of those. But now that we have them per service, we can actually stitch them together and create a kind of supergraph. I'm not sure how many of you have heard of schema stitching in GraphQL, but on the left we reference different schemas using a label selector, and then we can do a type merge. If there are no conflicting types, everything just works on the first pass; otherwise, we provide some kind of YAML type-merging configuration. So you can create an API gateway that's a superset of both and allows you to make queries across both schemas. And further than that, like I mentioned, there are DataLoader-style batching and caching improvements. That's also just built in by default, because we can make intelligent assumptions from the YAML: we know which queries will return similar responses and can optimize those out without extra user interaction. So now we're going to zoom out even more. I know we're throwing a lot of information at you, but let's take a look at this from a high-level, 10,000-foot view of your clusters and your cloud architecture. We have multiple clusters, multiple environments, dev, staging, prod, and note that each microservice here issues GraphQL requests to its local sidecar, and those requests are resolved from the sidecar. Thus, our microservice thinks about the data that it wants rather than what service to request the data from.
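A sketch of the stitching configuration described above, again modeled loosely on Gloo's stitched-schema API (field names are illustrative, not authoritative):

```yaml
# Stitch per-service GraphQL APIs into one supergraph
apiVersion: graphql.gloo.solo.io/v1beta1
kind: GraphQLApi
metadata:
  name: stitched-api
spec:
  stitchedSchema:
    subschemas:
    - name: users-graphql        # discovered from the users service
      namespace: gloo-system
    - name: products-graphql     # discovered from the products service
      namespace: gloo-system
      typeMerge:
        # Tell the stitcher how to fetch a full User when the
        # products schema only knows a username
        User:
          selectionSet: '{ username }'
          queryName: GetUser
          args:
            username: username
```

When types don't conflict, the merge is automatic; the explicit `typeMerge` block is only needed to teach the stitcher how to cross from one schema's partial type into the other schema's full one.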
The service dependencies are now documented via declarative configuration, YAML, with all the change-history benefits if you use GitOps, rather than being coded into the services themselves. So we're no longer writing programmatic resolvers and having to enforce programmatic policies on developers; instead, we're enforcing it via declarative configuration. And we no longer have to scale up the GraphQL gateway to enable more mesh services to leverage GraphQL. If this is all controlled by GitOps, we can promote changes from dev to staging to prod environments easily. So, just to review the benefits we've seen from using GraphQL in the service mesh. The biggest selling point is obviously developer efficiency, particularly in the service-to-service request flow. By merging the proxy and GraphQL server together, as well as including GraphQL in the sidecar for your service mesh from the get-go, there are fewer deployments to manage and fewer points of failure in your GraphQL system. Putting GraphQL directly into the gateway or sidecar proxy also removes an additional network hop. We've gone over the declarative configuration and all the GitOps benefits, as well as the benefits of using declarative configuration in general. You get the type safety that comes with using GraphQL, as opposed to a lot of different specifications and protocols like REST. And resolvers are simple to define in the declarative configuration and simple to reason about: most platform concerns have been moved back to the proxy, and the resolver is purely focused on fetching and accessing data for schema field and type relationships. So, in the interest of time, the demo is going to go quickly, but this chart should look very familiar. Again, we've resolved all the concerns from the first half of the presentation.
We've certainly, if not solved, made a lot easier the concerns that we had created, by automating a lot of the required processes for the platform and back end teams: by discovering those schemas, and by allowing you to merge them together with a stitched schema, both mostly at the click of a button if you have OpenAPI, WSDL, or Swagger exposed. So the problems we created have also been solved. Awesome. So, it's demo time. Let's see how kind the demo gods are to us today. Yep. Yeah. OK, let's bring the terminal over to the left. Why is it not working? Does that work? Can you see the terminal on the screen there? No. One second. How do I mirror? Why don't you go to the left and minimize it? Let's bring this desktop over. Does that work? Why isn't that working? I'm just trying to drag over a window. OK. Now you should be able to drag it over. Let's see. Perfect. Yeah. OK, so I'm going to be typing like this, but we'll see how fast I can type. So in our cluster, we have three services, the products, reviews, and users services, which are gRPC services, and each of these exposes a reflection endpoint. So we can use grpcurl, which is a tool to essentially... is that better? A little bit. The left side is pretty small, isn't it? Oh, yeah. Perfect. Yeah. So these are gRPC services, and you're going to have to trust me on that, just in the interest of time; we have four minutes left. And we want to create GraphQL services from these gRPC services without even touching application code. So how are we going to do that? I'm actually going to open up k9s on the left here to show the cluster once again. Oh. OK, it's a little truncated, but we'll live with it.
So we have a discovery deployment, which is going to essentially do our GraphQL discovery for us. This is the magic I was talking about earlier, where we just hit a button and the GraphQL APIs should show up on the right. And hitting the button there was just scaling the deployment to one; we just enabled the controller there. And there we go. OK, well, there they are. So these GraphQL APIs were just generated when we created the discovery deployment, and these are fully fleshed-out GraphQL schemas that are generated from our protobuf types. So this is kind of what the schema looks like with resolver configuration; this is the declarative configuration we were talking about earlier. And we can query these schemas. Do you want to port-forward? Yes. I'm going to go ahead and port-forward the gateway proxy that's doing the GraphQL logic. Typing like this is a lot harder than I thought it would be. OK, well, there it is. So here's a query that we can issue to the user service, which gives us information about the user. We can see that we're creating a GraphQL query which gets translated into a gRPC call, and we get back a gRPC response, which once again we translate to a GraphQL response. And none of the application developers are doing this; this is all happening in the GraphQL gateway. And we can do the same thing with our product service, where we're issuing a GraphQL query and getting a GraphQL response. Once again, I want to emphasize that we're not changing any application code here. We are doing translation in the gateway that translates our GraphQL requests to gRPC and, once again, the response back to GraphQL. So now we want to stitch these two services together, our user service and our product service.
You'll notice that the product service exposes the seller field with a username, but we might want more information about the seller, like their first name, their last name, what country they live in. This schema doesn't expose that; if it did, you would see it in the autocomplete. So instead, we've now created a GraphQL stitched schema. I'll try and zoom in a little more there so you can see the endpoint is a little bit different; essentially, we're serving a stitched GraphQL schema on a different endpoint. And what we can do now is actually query across schemas by stitching them together. So we've now created a query that queries the products service as well as the user service. And something really interesting that we can do here: if you remember, the product service only gave us information about the username. Well, we can also query the product service for the first name, which comes from the user service, as well as the last name. Anything that the user service exposes can now be accessed from the product service. And this is the magic of schema stitching. So yeah, that's basically the demo. Let's get back to the slides. And yeah, that's basically the content of our talk. And we've got one minute, so if there are any quick questions, this would be the moment. Great job. Love the live demo. Let's give them a round of applause. One question. We only have time to take one question. If you can walk to the middle and just speak out your question, that would be awesome. Thank you. How would you do payload validation when you have different services to reach, when you have such a stitched schema? I'm assuming you're saying we were to have a service that's a gRPC service, for example, and it sends a payload that's not complete. As in checking the schema of whatever incoming request you're getting for your particular service. Would that fit in?
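The stitched query from the demo probably looked something like this (field names reconstructed from the description above, not copied from the actual demo):

```graphql
# Query the products service; the seller's first and last name are
# transparently resolved from the users service via type merging
query {
  products {
    name
    seller {
      username
      firstName   # stitched in from the users schema
      lastName    # stitched in from the users schema
    }
  }
}
```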
Yeah, so GraphQL already has this built in: you can't issue a request to a GraphQL server that doesn't exist within the schema; you can't ask for data that doesn't exist within the GraphQL schema. You send a GraphQL query or mutation, and it has to be spec-compliant. And when we're working at the GraphQL level, all of the types are GraphQL types: the request that you send is a GraphQL query, it has a specific shape that can be validated, and that just gets translated via resolvers into how you resolve each field. So that's really part of the GraphQL spec, if I understand the question correctly. And if, for example, your upstream service falls out of sync with the GraphQL schema that we generated, this can be handled automatically. As you saw, it was created on the fly: every time your service changes, we discover a more updated version of your GraphQL schema, so it always stays in sync. Thank you.