Okay. Thank you, Candace. I appreciate that introduction. Good day, everyone. In this session, we're going to talk about how you can unleash declarative configuration with GraphQL to solve some of your most challenging application networking problems. But wait a minute, you say, how do you even know what my problems are? Well, I know this because for the next 45 minutes, you, the audience, are going to be Wayne Telecom, a telecom services provider based in Gotham City. You're the largest provider in Gotham City, and you require highly available, low latency communication with your services from both mobile and web clients. Together we're going to embark on a journey that will enable you to spend less of your billions on cloud infrastructure and wasted developer time, and will revolutionize your communication stack in the process. Trust me, it's going to be fun.

First, just a word about who I am. My name is Jim Barton. I'm a field engineer with solo.io in the US. My career in the enterprise computing space spans three decades. I've been with Solo for the past two-plus years, and prior to Solo I was an architect with companies like Amazon, Red Hat, and Zappos.com. Who is solo.io? Solo is a company that was born in the cloud and specializes in helping enterprises manage the complexities of application networking in a cloud native context. We do this via community leadership in strategic open source projects like the Istio service mesh, the Envoy proxy, and of course GraphQL, and we offer an enterprise-grade platform based on those projects.

All right, so now let's turn our attention back to your problems at Wayne Telecom. You are hemorrhaging money. Your mobile user experience is absolutely dreadful. You're spending way too much money trying to band-aid problems. And your development and operations teams are both really unhappy. So let's drill deeper into why these problems exist and then explore some potential solutions.
So here's a sample page from your billing web application. This page loads all the phone plans that the current user is paying for. In order to get adequate information for this plan summary page, we have to make multiple REST API calls per item on the page. Your back ends live in microservices that are split across various business groups, so fetching data for this summary page currently requires making multiple calls to various microservices, aggregating the responses, and sending them back to the front end.

The problem with making these multiple REST API calls per page is illustrated here. Because you have a multinational business model, you often get traffic from across borders and even from the other side of the globe, so request round trips take significant time, leading to a slow user experience that's exacerbated by bad mobile signals and slow connections.

We can solve some of our latency issues by using a back end for front end (BFF) architectural pattern. We can create an additional back end service that exposes aggregate endpoints, so that the mobile app only has to make a single API call to the BFF service, which will then aggregate the responses from the individual back end services. In addition, we can apply some further optimizations, like content delivery network level caching and load balancing, to increase the performance of our system. That won't help us much with HTTP POSTs and PUTs, but for GET operations it could actually work reasonably well. So it's not a complete solution for the way apps are built in distributed, multi-tenant organizations like Wayne Telecom, and we need to explore some other optimizations so that we can free up our organization to, again, fight crime in Gotham City. All right, so this is where we're going to introduce GraphQL into the mix. We can solve problems like request waterfalls and large payloads, and even improve developer efficiency, by utilizing GraphQL.
So if our front end leverages GraphQL, then it only has to issue one query to a GraphQL server to get all the data it needs. What you see on this slide represents a single GraphQL query that returns all the data, and only the data, that the front end needs in a single HTTP response. This allows us to transform the back end to use GraphQL, and that's going to offer us some really significant benefits.

Number one, now all of our queries return exactly the data we need and nothing more. With GraphQL, we are no longer at the mercy of bloated service interfaces returning every piece of data that a potential client might need. Instead, we can specify precisely the data we want, and that's all we get back.

Number two, GraphQL allows us to use a single query to retrieve data that lives in multiple back end resources. No longer are we required to make multiple service calls and splice the data together in our client. With a single unified schema, we only need a single query, and we allow the GraphQL server to manage the request dispatch and the aggregation of response data to match our query.

Number three, and this is a really important one, GraphQL offers first-class support for a typed schema. You can think of the GraphQL schema in much the same way you think about an OpenAPI or Swagger specification, only this is for our GraphQL endpoint. You can see that clearly by following along with this example, with the GraphQL query that our application issues on the left and the corresponding schema definition on the right. If you watch the bolded and underlined text on both the query and the schema, you'll see exactly how each individual part of the query on the left precisely matches the schema on the right. This typed schema opens up a whole new world of helpful tools for Wayne Telecom developers. Now you can leverage tools like GraphQL Playground, as shown on this slide.
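To make the query-to-schema correspondence concrete in writing, here's a small hand-written sketch in the spirit of the slide. The type and field names are invented for illustration; they aren't the actual Wayne Telecom schema from the talk.

```graphql
# Illustrative query: ask only for the fields the plan summary page needs.
query PlanSummary {
  currentUser {
    name
    plans {
      name
      monthlyCost
      dataUsageGb
    }
  }
}

# Corresponding (illustrative) schema the query is validated against.
# Every field the query selects must appear here, with a matching type.
type Query {
  currentUser: User
}

type User {
  name: String!
  plans: [Plan!]!
}

type Plan {
  name: String!
  monthlyCost: Float!
  dataUsageGb: Float!
}
```

Because the server publishes this schema, tools like GraphQL Playground can validate and autocomplete the query on the left before a single request is sent.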
Front end developers can explore schemas, build up their queries interactively, and then easily incorporate those queries into their application code. Okay, so where do we stand now with Wayne Telecom? Let's review. We have deployed a GraphQL server. Our application teams have added GraphQL schemas and resolvers to their services, those schemas and resolvers have been added to the GraphQL gateway, and there's a platform team that will, of course, manage this new gateway. The good news is that our front end teams are now happier and more productive. The developer experience has improved tremendously. We've removed a lot of the friction that existed between back end and front end teams by using this back end for front end pattern to create an efficient GraphQL interface between the two sets of teams, and our mobile app is now much more efficient thanks to these changes.

So on the front end we delivered some massive improvements, but there are still challenges that remain on the back end. We operate in the telco industry, which means we're highly regulated. We need to implement things like zero trust architectures throughout our infrastructure, and we need to worry about capabilities like using mTLS to secure the communication among all of our services, not only between our new GraphQL gateway and the back end services, but also among the back end services themselves. That's a pretty heavy lift for an enterprise that operates at the scale of Wayne Telecom. So we adopt Istio as a service mesh. This is going to allow us to externalize general platform features like mTLS, rate limiting, and caching away from the applications themselves and absorb them into our service mesh infrastructure. After making the changes to adopt GraphQL and Istio as a service mesh, our revised Wayne Telecom infrastructure looks something like this diagram.
We're showing a single Kubernetes cluster with Istio installed. We have an Istio ingress gateway where our traffic enters the mesh. From there, requests are forwarded to our GraphQL server, which acts as our back end for front end. It manages the dispatching of requests to our back end services, aggregates the responses, and then returns those to our service clients. And of course, all of the internal communication within our mesh is now encrypted using mTLS. So that represents the current state of our enterprise architecture.

Next we're going to zoom in on two of the components in this architecture: the ingress gateway, that is, the Envoy proxy, and the GraphQL server. This diagram represents a pretty common configuration we see with customers who are deploying GraphQL today. Application teams pick up a GraphQL framework and write some code to resolve multiple upstream data sources into a GraphQL API. They integrate with a number of libraries in that process, and they produce an application deployment that's then managed by a platform team. The platform team owns the operational responsibility for the health and availability of this deployment, but the story doesn't end there. The GraphQL server then exports an API, and that API needs to be protected, so the platform team fronts it with a proxy. The proxy in this architecture is managed as cloud native infrastructure: it's configured declaratively, compatible with modern Kubernetes best practices, and based on the leading proxy technology in the market, Envoy. The GraphQL server, however, is a separate deployment that requires code changes to modify and evolve its behavior. It also represents an extra network hop in our data path and additional operational overhead to maintain.
There's nothing fundamentally wrong with this approach, but we feel there's a more efficient way to support GraphQL in an application architecture, one that simplifies the lives of app dev and platform engineering teams alike. Another question that arises from this architecture is where best to handle platform concerns like authentication and authorization. Ideally, we'd like to separate these concerns from the application itself and handle them at the gateway proxy level, just as we would if this were, say, an OpenAPI interface we were talking about. But with the GraphQL server separated from the gateway proxy, we're forced to pass some of these concerns through and handle them outside the proxy layer. We see GraphQL users handle this in a couple of different ways. The left side of this slide reflects an approach where we simply take the auth-related context from the request and delegate the authentication and authorization (authn/authz) decisions to the back end services. The right side illustrates another approach, where we take that same context and instrument identity-aware auth code directly into the GraphQL server. Neither of these approaches is ideal. What we'd like is the ability to offload these concerns from imperative application code and handle them in a declarative, policy-driven fashion.

So let's take a step back and review where we are in the Wayne Telecom journey. We've solved a number of significant problems already: we've increased our front end dev efficiency, we've reduced data over-fetching and bandwidth requirements, and we've implemented a back end for front end architectural pattern, all using GraphQL. Plus, we've adopted service mesh technology using Istio to lay the foundation of our zero trust networking architecture. Have we rid Gotham City of crime yet? It might be a little early to declare victory in that battle, but we'll keep working on it. But as we all know, in engineering, decisions are rarely 100% positive; all decisions represent trade-offs.
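As a concrete sketch of the declarative, policy-driven style described above, here's roughly what gateway-level authorization looks like as an Istio policy. This is a minimal sketch under stated assumptions: the resource names, namespace, and path are invented for illustration, not taken from the talk.

```yaml
# Minimal Istio AuthorizationPolicy sketch: only requests carrying an
# authenticated principal (e.g. a validated JWT) may reach the GraphQL
# endpoint exposed by the ingress gateway. Names and paths are illustrative.
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: graphql-require-auth
  namespace: istio-system
spec:
  selector:
    matchLabels:
      istio: ingressgateway      # apply at the mesh edge, not in app code
  action: ALLOW
  rules:
  - to:
    - operation:
        paths: ["/graphql"]      # hypothetical GraphQL route
    from:
    - source:
        requestPrincipals: ["*"] # any authenticated request principal
```

The point of the sketch is the shape, not the specifics: the rule lives in version-controlled configuration applied at the proxy layer, rather than in identity-aware code inside the GraphQL server or the back end services.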
So let's consider some of the downsides of the changes we've adopted so far. Number one, we've added a major new moving part to our server side infrastructure, a dedicated GraphQL server, and our platform team must be responsible for owning and maintaining it. We've also added new responsibilities for our application teams, in that they need to learn GraphQL and maintain the associated schemas and resolvers for their services. And as we explained just a moment ago, they also need to reimplement some platform-related features like authentication and authorization to account for the fact that they are now using GraphQL as a core component of their application strategy. So as we consider these trade-offs, we want to keep the good things we've achieved with GraphQL and Istio, but we want to do that without adding the burden of new server types to maintain, and with less of a burden on the application teams whose services need to participate in the mesh. And so we conclude: Robin, there must be a better way.

All right, so toward that objective of finding a better way, let's take a closer look at both the Envoy proxy that serves as our Istio ingress gateway and our new GraphQL server. This GraphQL server is its own separate deployment that requires code changes to modify and evolve its behavior. It also represents an extra network hop and an extra moving part to maintain in our server portfolio. Again, there's nothing fundamentally wrong with this approach, but we feel there's a more elegant and efficient way to support GraphQL in our application architecture, an approach that will simplify the lives of both our application and our platform engineering teams. So let's begin our exploration for a better way by asking ourselves a few questions. Number one, what if GraphQL APIs didn't require dedicated servers? Number two, what if you could reuse existing API gateways to serve up GraphQL?
In other words, GraphQL would be just another supported type of API, much like REST, OpenAPI, or SOAP. What if you could use best DevOps practices, like declarative configuration, in building out these GraphQL APIs? And finally, what if you could leverage existing API contracts to build GraphQL configuration? In other words, what if you could construct those GraphQL APIs without the underlying services even being aware that GraphQL is being used at the gateway level?

Where these questions lead us is to a simplified architecture that consolidates GraphQL and application networking responsibilities into a single component. What we're proposing is to update our Envoy proxy fleet so that it can function as a GraphQL server. Just a quick summary of the benefits. This is going to eliminate the deployment and operational expense of maintaining a separate GraphQL server application deployment. And by removing the additional network hop to a separate GraphQL server, we'll improve performance and even resilience, because we avoid an extra failure mode on the data path. Finally, these GraphQL capabilities are based on declarative configuration, not imperative code. In other words, it's just like the rest of your cloud native infrastructure, and so it's fully compatible with CI/CD and GitOps workflows.

All right, so how does all this work? Well, by leveraging the capabilities that solo.io provides via a GraphQL Envoy filter, there's no longer a need to integrate with third party libraries to create resolvers that run inside GraphQL-specific servers. With some simple configuration changes, we can leverage an existing gateway proxy to add policy-driven authentication and authorization. Plus, we get services like rate limiting, response caching, and WAF rules at the edges of your application network.
All of these capabilities are now driven by declarative policies, not imperative code, and that's far superior to solutions where you basically have to implement these concerns yourself. All right, so let me present you with a little riddle. You may have noticed earlier that I kind of waved my hands and magically transformed our existing services into GraphQL services. So does this mean that each app team now must go off and implement GraphQL in their services to make them GraphQL-aware? We appeared on a panel discussion with a Netflix engineer who said that the process of enabling pre-existing back end services for GraphQL was one of the most difficult passages of Netflix's GraphQL journey. We'd like to avoid that pain as much as possible.

So what if we could transform these existing back end services into GraphQL without touching the application code? In fact, there's a popular open source library that already does this very thing in JavaScript; it's called GraphQL Mesh, and it's a fine project. It leverages existing service specs to generate code that facilitates the conversion of pre-existing services into GraphQL-aware services. However, we can avoid even the adoption of tools like that, since that discovery capability has already been integrated directly into our existing proxy fleet. The secret sauce is to leverage the existing interface specifications that are already in place on the back end services: OpenAPI/Swagger for REST services, protobufs via reflection for gRPC services, and so on. By putting a sidecar containing this discovery and translation logic next to each service, we are able to translate incoming GraphQL requests into requests that the application understands in its own native protocol. Effectively, we've now converted all of the services that we want to include in the graph into GraphQL services, just by including them in our Istio service mesh.
So now let's zoom back out and consider our new deployment architecture with GraphQL, Istio, and the Gloo Platform. This new model opens up new architectural possibilities, which not only enable more efficient traffic handling but also better separation of concerns within our service deployments. For example, we can see from this diagram that we have three back end for front end deployments here, for billing, sales, and HR. But these can now be strictly virtual deployments; from a deployment standpoint they all live within our gateway Envoy proxy, just on different request routing paths. So the infrastructure is not only more efficient at runtime, but also much cleaner from a design standpoint.

Not only is this new approach more resource efficient, it's also easier to administer. You may recall that with our original design, we were forced to write imperative code to handle concerns like authentication and authorization. Now, with GraphQL embedded into our proxy, the kinds of declarative Istio policies shown on this slide, for things like auth, failover, circuit breaking, and rate limiting, again work exactly as we expect. There's no separate GraphQL server instance to gum up the works anymore. So we can happily move from imperative code like this, used to implement platform concerns, to declarative configuration like this GraphQLApi custom resource, which specifies how our APIs should operate. This will allow us to replace programmatic GraphQL servers and resolvers with declarative configuration propagated to Envoy by the control plane. We can also use GraphQL custom directives, such as the @resolve directive, to link resolver configuration with particular fields in the schema. And what's more, this configuration can be discovered by the control plane from your existing service interfaces, essentially writing the GraphQL server for you.
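To give a feel for what such a custom resource can look like, here's a rough sketch of a GraphQLApi resource with an @resolve directive wiring a schema field to a REST upstream. I'm reconstructing the shape from memory of Gloo's CRDs, so treat the apiVersion, field names, and resolver structure as illustrative rather than authoritative; consult the Gloo documentation for the exact syntax of your version.

```yaml
# Illustrative GraphQLApi custom resource (field names approximate).
# The @resolve directive links the petById field to a named REST resolver,
# which the Envoy filter executes directly; no separate GraphQL server.
apiVersion: graphql.gloo.solo.io/v1beta1
kind: GraphQLApi
metadata:
  name: petstore-api
  namespace: gloo-system
spec:
  executableSchema:
    schemaDefinition: |
      type Query {
        petById(id: Int!): Pet @resolve(name: "pet_resolver")
      }
      type Pet {
        id: Int!
        name: String!
        status: String!
      }
    executor:
      local:
        resolutions:
          pet_resolver:
            restResolver:
              upstreamRef:
                name: petstore        # hypothetical upstream name
                namespace: gloo-system
              request:
                headers:
                  :path: /api/pets/{$args.id}
```

Because the whole API is expressed as a Kubernetes resource like this, it can sit in Git alongside routing rules and policies and flow through the same CI/CD pipeline.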
We can discover and create GraphQL API schemas on a per-service basis, leveraging the interface contracts that those services already publish. But using that discovery capability is not required; we can also build and maintain our own schema if we prefer that approach. We can also stitch individual schemas together, whether discovered or built by hand, into a unified super graph. That allows clients to think just about the data they need from the graph; they don't have to worry anymore about which services need to be called in order to retrieve that data. The embedded GraphQL proxy filter will handle all of those details for them.

All right, so if we review where we are: in addition to resolving the initial concerns that we covered in the first part of this talk, we've now also addressed some of these new issues. We no longer have our platform team maintaining a separate GraphQL gateway. Our app dev teams don't have to learn GraphQL to the extent that they would have with our initial approach, and they don't even have to maintain their own schemas and resolvers if that makes sense for them. And they no longer have to reimplement platform concerns using imperative code. All right, so that's good, and that's enough talk about adopting a declarative approach to GraphQL. Let's switch over and take a look at how it operates in action.

All right, so we'll switch over here. I'm doing this demonstration today on the Instruqt platform, which is really nice for doing demonstrations. You can see I have a terminal here on the left, and some explanatory, self-guided text on the right.
This is a tool we commonly use at Solo to deliver technical content, and even longer-form workshops where you get a chance to get hands-on with the relevant technology. I'm just going to walk through a piece of it here. Already installed into our sandbox environment, we have a Kubernetes cluster and the Gloo Platform components. We've also installed an application, the pet store application, a sample app that's out there on the internet; we'll take a look at it in just a second. What we're going to do is use the Gloo GraphQL technology to discover the interfaces that have already been published via OpenAPI from this pet store app. We're going to take that and build some routing rules that route requests against this discovered API, both requests that hit the original REST interface and requests that hit the new GraphQL interface; those can live perfectly well side by side. And we'll show how all of this works in a development context. All right, so let's get started.

First of all, everything we do with Solo products in the Gloo Platform is based on Kubernetes custom resources as the API. We think this is a critical design center, because it makes it easier to manage the resources you're going to be building, whether they're routing rules, WAF policies, Open Policy Agent authorization policies, or a specification that represents a GraphQL API. We specify all of that in Kubernetes custom resources so that it's easier to manage in your favorite CI/CD or GitOps platform of choice, whether that's Argo or Flux or any of the other popular choices out there.
So there's a CRD called GraphQLApi, and I just want to show you, starting out here, that there's nothing up my sleeve; we haven't pre-created anything. All we've done is install a Kubernetes cluster and the Gloo Platform, and deploy a pet store application into the environment. Just those three things. If we take a look, you can see there are no GraphQLApi abstractions active in our environment. So what we're going to do first is explore the discovery capability that I mentioned in the presentation. You can do this on a larger-scale basis, but in this case we're simply going to enable discovery just for the pet store application. Hang on, let me fix that here. So we have now labeled our pet store application to indicate that we want this discovery process to go forward, and what's going to happen at that point is that the gateway component will inspect the OpenAPI schema associated with this app and use it to build a corresponding GraphQL schema. So here we go.

If you're interested in the application we're operating on, this is the Swagger Petstore, so you've likely seen interfaces like this before. This is just our execution sandbox that we can use to go in, find elements, invoke these REST requests, and so forth. Perhaps most interestingly for this discussion, you can also see here a link to the OpenAPI spec that actually defines what the API, and consequently this interface, looks like. We definitely won't review all of it in detail, because it's a pretty complex specification, but suffice it to say it has definitions for all of the request and response types that we're going to be using.
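For reference, the "enable discovery" step described above amounts to putting a label on the pet store's Kubernetes Service, roughly as sketched below. The label key shown here is hypothetical, included only to illustrate the mechanism; check the Gloo documentation for the exact key your version expects.

```yaml
# Illustrative: opting a single Service in to GraphQL schema discovery
# by labeling it. The gloo.solo.io/... label key is hypothetical.
apiVersion: v1
kind: Service
metadata:
  name: petstore
  namespace: default
  labels:
    app: petstore
    gloo.solo.io/graphql-discovery: "enabled"   # hypothetical label key
spec:
  selector:
    app: petstore
  ports:
  - port: 8080
    targetPort: 8080
```

The nice property of a label-driven opt-in is that it's declarative too: discovery can be turned on per service, per namespace, or fleet-wide without touching application code.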
And it also has descriptions of the various operations that are part of the Swagger Petstore. Okay, so that's the OpenAPI spec we're talking to. Now, if we go back to our interface, you can see we've labeled this service for discovery. If we go back and take a look at GraphQLApis, you can see there's now one that has actually been discovered and populated into our environment as a Kubernetes custom resource. If we take a look at it, again, it's too complex to really go into in detail, but if we examine this discovered API component, you can see some common things. If you're familiar with GraphQL, you'll recognize things like the schema definition, which includes a Pet, obviously an abstraction in our pet store, and what a Pet looks like. You can see there's a User definition, and of course we can have relationships among these various components. And you can also see, for example, some of the operations that were derived from this OpenAPI interface spec. For example, here's one we looked at before, findPetsByStatus, and you can see how it's going to operate: it returns a Pet, or potentially a list of Pets, and it delegates to the findPetsByStatus endpoint that lives on our application API.

Okay. So at this point we have the application installed, we have basically flipped the switch to turn on discovery in that environment, and we have been able to use the Gloo Platform to interrogate that spec and generate a schema that maps to it. At this point, we can actually use this to route traffic and to make GraphQL requests against an application service that has absolutely no knowledge of GraphQL, and that's a pretty powerful thing.
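The discovered schema has roughly the shape sketched below. This is a simplified, hand-written approximation based on the public Swagger Petstore spec; the exact field types and the resolver name in the @resolve directive are illustrative, not copied from the demo.

```graphql
# Simplified sketch of a schema discovered from the Swagger Petstore
# OpenAPI spec. Types mirror the spec's request/response definitions.
type Pet {
  id: Int
  name: String
  status: String
  tags: [Tag]
}

type Tag {
  id: Int
  name: String
}

type Query {
  # Derived from GET /pet/findByStatus in the OpenAPI spec; the
  # directive links the field to a generated REST resolver.
  findPetsByStatus(status: String): [Pet]
    @resolve(name: "Query|findPetsByStatus")   # illustrative resolver name
}
```

Each query field ends up bound to the corresponding REST endpoint, which is how a service with no GraphQL code can still answer GraphQL queries.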
To do that, we're going to build something called a virtual service. A virtual service is simply a set of routing rules. I want to focus your attention on these two routing rules right here. The first one is our GraphQL routing rule, and you can see it depends on the GraphQLApi that we discovered a few moments ago; it looks for requests that match this particular request prefix. That's going to handle the GraphQL side of our routing from the gateway proxy. And we also created another route, just for fun, that handles the base pet store REST routing. As you go with this approach, you're not necessarily throwing out the old REST interface; you can continue to leverage it. You simply have this new layer of GraphQL available on top of the original REST interface. So we want to show you that as well: we can route to both the underlying REST interface and this newly discovered GraphQL capability.

Okay, so let's actually start off with the REST interface. We'll issue a curl command here, and that curl command is simply going to look for pet number 10. And why isn't that working? I don't think I actually applied my virtual service, which is why it's failing. Now we've created a virtual service that actually leverages these components, so this should work a lot better. All right, here we go. Let's issue this query once again, and now you can see it's working. We're looking for pet number 10, and we get back a JSON blob from the REST service, again, no GraphQL in the mix here, that says, okay, this is a rabbit, pet number 10's name is "rabbit 1" (we're very creative with our naming conventions), here's a collection of tags associated with it, and its status is "available". All right, so that's all good.
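A sketch of the two routing rules described above might look like the following. The API group, kind, and field names are approximate reconstructions from memory of Gloo's routing resources, so treat them as illustrative and verify against the documentation for your version.

```yaml
# Illustrative virtual service: one route serves the discovered GraphQL
# API, a second passes /api traffic through to the original REST service.
# apiVersion, kind, and field names are approximate.
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: petstore-routes
  namespace: gloo-system
spec:
  virtualHost:
    domains: ["*"]
    routes:
    - matchers:
      - prefix: /graphql          # GraphQL route, resolved by the proxy
      graphqlApiRef:
        name: petstore-api        # the discovered GraphQLApi resource
        namespace: gloo-system
    - matchers:
      - prefix: /api              # plain REST passthrough, unchanged
      routeAction:
        single:
          upstream:
            name: petstore        # hypothetical upstream name
            namespace: gloo-system
```

The key point is that both routes live in the same gateway: the REST interface keeps working exactly as before, while the GraphQL layer is added alongside it.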
And conversely, if we hit an endpoint that does not have a pet associated with it, we'll get back a 404 message, as you would expect; again, this is just standard HTTP semantics. All right, now let's test whether we were able to infer, and can now use, a GraphQL interface from this underlying spec. To do that, because we're running in the Instruqt environment, we need a URL for our gateway that's compatible with Instruqt, and we'll go through the Instruqt gateway. So let's copy that. Now let's switch over to a little GraphQL tool, a little query playground, where we can paste this URL in and specify a very simple GraphQL query. Right, so let's do that. Now if we execute this request, you can see we get back, in the same shape as our query, a JSON blob that has the name of the operation we're hitting, plus the name of the pet, "rabbit 1", that we expect. So that's good. Of course we can play with this. If we want to add additional fields, let's say we want to get the status of that pet, is it available or not in our pet store, well, then you can see right here that indeed the pet is available. So that's all good.

Now let's see what happens if we look for a pet that isn't there. Do we see a similar response to what we got from the core REST service? Not surprisingly, we do: we get back a null response here, but we also get an explanatory error stanza that says, hey, we got back a 404 from the upstream service, and it gives you some explanatory text. Okay, so that is all good stuff.
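The demo query was along these lines (the operation name `petById` is an assumption on my part; the actual discovered schema may name it differently):

```graphql
# Roughly the query issued in the demo: fetch pet 10's name, then
# extended with the status field in a second run.
query {
  petById(id: 10) {
    name
    status
  }
}
```

Per the GraphQL convention, the JSON response mirrors the query's shape, something like `{"data": {"petById": {"name": "rabbit 1", "status": "available"}}}`, and a missing pet yields a null `data` entry plus an `errors` array describing the upstream 404.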
Now, if we had more time, we would go into some other aspects of this platform as well. For example, let's say we don't want to run strictly with the discovered specification, and we actually want to craft the API ourselves. We could go through an example showing how to do that. You can also use a hybrid approach, where maybe you discover initially what the API is going to look like, but then you maintain it separately from there. All of those approaches are perfectly well supported.

We could also look at what I think is one of the most interesting capabilities of this approach, and that's the ability to stitch schemas together across multiple subgraphs. In other words, your enterprise has multiple application teams, and they maintain different suites of applications, but you'd like a unified GraphQL super graph that unifies them all. By simply specifying a query like this, the GraphQL engine is smart enough to execute a query against that stitched-together schema, but dispatch requests out to whatever services are required to fulfill that particular query. So we could go through an example like that as well, and that's a pretty powerful thing.

That's enough of a demonstration at this point. Let's go back to our slides, and we'll wrap this up. All right, so just to recap, let's review some of the benefits of transforming your application network with a declarative GraphQL approach.
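To illustrate the stitching idea above, here's a hypothetical query against a stitched super graph. All of the type and field names are invented; the point is only that one client query can span data owned by different teams.

```graphql
# Hypothetical query against a stitched super graph: account data and
# billing data live in different subgraphs owned by different teams,
# but the client writes one query and the gateway fans out the
# subrequests and merges the results.
query CustomerOverview {
  customer(id: "42") {
    name              # resolved by the accounts subgraph
    invoices {        # resolved by the billing subgraph
      total
      dueDate
    }
  }
}
```

The client never learns, or cares, that `name` and `invoices` come from different services; the stitched schema and the proxy's dispatch logic hide that boundary entirely.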
Number one, at the top of our list, is increased developer efficiency, particularly in designing and reasoning about service-to-service request flow. Now we can be concerned strictly with the data we care about, not with which services we actually need to interrogate in order to retrieve that data and then splice it together. That's number one. Number two, by merging the proxy and GraphQL server together, as well as including GraphQL in your sidecar-based mesh from the outset, there are fewer moving server parts to manage, and thus fewer failure points for your system. Three, putting GraphQL directly into your gateway or sidecar proxy reduces request latency, because there's no network hop needed between the proxy and your GraphQL server; it's more efficient on the data path. Number four, another huge advantage of declarative configuration is that changes to your GraphQL APIs are now much easier to track. With a GitOps approach, it's as simple as looking at the Git commit history for your GraphQL resources to understand exactly what policy changes have been applied, when they were applied, and who applied them. Number five, GraphQL is strongly typed and exposes its schema via reflection, which allows your developers to browse schemas and create queries for the data they want using some pretty incredible first-class tools, such as the GraphQL Playground we showed a bit earlier. Finally, resolvers are simple to define declaratively and to reason about: our platform concerns can now be handled exactly where they should be, in the gateway proxy layer, and our resolvers are purely focused on fetching data for schema field and type relationships.

All right, I want to thank you very much for your time today. I hope the session has been helpful to you. If you would like more information on any of the open source or commercial technologies we discussed today,
You can visit our website at solo.io, or our free training website, pictured here, at academy.solo.io. So with that, I'd like to turn it back over to Candace, and let's check the chat stream to see if we have any questions.

Thank you so much, Jim. And yeah, if anyone has any questions, feel free to drop them in the Q&A box; we still have some extra time, and we'd be happy to answer any of your questions. I just dropped some links in the chat for you all to check out more information about GraphQL and solo.io.

Go ahead, Candace, that looks like your question there. Yes, so there will be a recording of this; it will go up on the Linux Foundation's YouTube page later today. Well, if we don't have any other questions, I'm sure Jim would be happy to have you reach out to him if you have any questions afterwards.

Absolutely, you can always reach me at jim@solo.io. We also maintain a community Slack channel; you can find me there as well, at, I believe it's slack.solo.io. So yep, I look forward to continuing the conversation.

Awesome, thank you so much, Jim, for your time today, and thank you, everyone, for joining us. As a reminder, this recording will be on the Linux Foundation's YouTube page later today. We hope you join us for future webinars, and have a wonderful day. Thanks, everyone. Have a great day.