All right, hi everybody, and thank you for joining today's webinar on GraphQL. I'll be honest, I'm a little out of my depth today because I have no demos, only slides. So I hope this inspires a thought process, because it's about a pattern and a strategy for cloud native that I think is really applicable to a lot of people today. And while I won't be able to show off what I love doing, which is software demos and making things show up on the page, we'll definitely be looking at a lot of interesting topics. So please fill out those Q&A questions; I'll be having a look at them while I'm doing the presentation, and at the end we can go over any questions you might have, because there are going to be a few pieces here that will be new to some people, and we'll do some learning together. With that being said, before I introduce myself a bit more, I want to give you the high-level summary of what we're actually doing. We're talking about the typical pattern that you've all seen, and that is strategizing your data access at the client level. We're talking about a high-level approach of going from this typical architectural pattern to something a bit more like this. And the strategy, at a high level, is: how do we unify, how do we orchestrate, how do we get all of the different pieces that we have distributed, especially in a cloud native environment, into a form that's easy to work with? We're going to look at the challenges, and we're going to look at a lot of the patterns that have arisen over the years.
That being said, I want you to know that this is the big picture we're going to talk about, so that you're able to follow along with the top-level thought even if I get a little bit distracted. So here I am: I'm Jesse Martin from Hasura, as was mentioned. Again, I'm not a typical webinar person, I'm usually doing demos, but I'm a senior developer advocate these days at Hasura, and technical product marketing is in my history. I'm also a former product and branding guy, so I've worked basically every role in any tech startup you could imagine. I have worked day-laborer jobs all the way up to CTO positions, so I've got familiarity with the entire stack. I hope that brings a little bit of clarity to what I'm referring to, and to where some of these patterns meet the road in terms of implementation. For the last number of years I've been a GraphQL educator, starting very near GraphQL's inception, helping people learn how to use it and, more importantly, watching the trends we've seen in GraphQL adoption: moving away from just typical front-end clients in React land and really starting to dominate a number of different places across the stack. That's what we're going to be looking at, at a high level, throughout today's presentation. So who is Hasura? Basically, it's a company that allows you to get automated GraphQL from your database; that was the original picture and the original vision. At the very beginning, you'd give it a Postgres database and Hasura would look at it and say: hey, there's a schema there, we're able to read schemas, and we're going to generate a GraphQL schema from it, and you'd get automatic APIs from your data. Well, that use case has definitely exploded as more and more people have requested more things of GraphQL as a service.
So Hasura these days is a data layer tool that allows you to orchestrate all kinds of different pieces: REST APIs, other GraphQL APIs, putting everything into a unified place beyond Postgres. You can add in BigQuery, you can add in Microsoft SQL Server, and get a GraphQL layer over them. That's it for the vendor pitch; I don't want to go into it too much. You'll see the logo pop up a lot throughout these slides, because that's just the branding we have. But basically, when you see the Hasura logo, think centralized GraphQL data layer, because that is the service it is performing. Whether you use a different tool for that or not is totally up to you; Hasura is a great, fast way to do it. But that's what I want you to think when you see that logo along with the GraphQL logo: this is where you centralize data access. It could be Hasura, it could be something else, but I'll speak to it from the Hasura side. Before we get any further, we need to come to terms about our terms. The slides talk about data access patterns for cloud native, and these are lovely enterprise buzzwords that we need to define a bit better, so that people understand what we actually mean by these different pieces, or more specifically what I mean when I use them in this presentation. That will hopefully give you a framework you can use to structure your thinking, understand where I'm trying to lead the conversation, and generate your questions from there. So let's start with cloud native. Cloud native, ultimately, is building software designed to take advantage of distributed computing and storage: architecture that leverages many small workers operating in parallel, as opposed to a centralized service working in waterfall. Okay, let's break that down as well.
As computers went to the cloud, and everybody was working on building web servers, for those of us old enough to remember, you had this mode where you would run full tilt on all your hardware up until the point that the hardware began to creak and groan and say: we can't handle the workload anymore. And then you would make a non-trivial jump to the next level of hardware to support that. And you would keep doing that in a sort of step-function curve that said: okay, we're going to double the size of the hardware, and we're going to keep growing to a bigger and bigger machine to handle these workloads. Well, through the development of computation, hardware prices becoming more and more commoditized, and, frankly, workloads of a scale that just could not fit on one machine anymore, we moved to a different system, and that is cloud computing. Cloud computing takes this paradigm and says: no longer do we throw a whole bunch of workloads at one machine, or maybe a cluster of big machines. We throw these workloads at little tiny handlers that process them in a discrete manner. We're able to say: I'm going to give this payload to this function running on some machine somewhere, it's going to calculate just that small piece of the workload, and I can scale up and scale down how many of these little workers I need. That's the ultimate picture of cloud computing: no longer do I have those big step jumps in my pricing, where I try to keep things under a certain price point. I can very linearly say: okay, I need 20 times more workers available for this workload coming in, and I can scale down tomorrow to maybe one or two, depending on whether it's Black Friday or whatever else might be causing a spike in my workload.
So the workers are able to behave in a very pure, mathematical, functional way, where they can be either pure or impure: that is, they can take a discrete input and give a discrete output, or they can themselves cause side effects and create more events in turn. If you think about the data journey, we would often have an entry point where a user might submit a form, and that form ships a data payload over to a database somewhere. That database is going to trigger a workload on some little function living somewhere else, which will process that data, maybe do some data parsing, and pass the payload over to another little function running somewhere else on an entirely different server. You'll see this sort of bouncing around the cloud between these services, all working rapidly in parallel, to ultimately return that data back to the user or into our system, however we need to consume it. That bouncing-around behavior is the leading problem being solved by what's now referred to as edge computing. Being able to put discrete workloads inside these tiny little workers has led to the rise of edge computing, which says: if I have a user, maybe somewhere in Tokyo, I should try to run most of my little functions and behaviors on a server I have near Tokyo, as opposed to having their data processed somewhere in Brazil, which is just a completely wasted round trip when I actually have resources distributed globally.
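The data journey just described can be sketched as a chain of tiny workers, each one a small function that takes a payload and hands a new one to the next hop. This is only an illustration of the pattern; the worker names and fields are made up, not from any real system.

```python
# A minimal sketch of the "bouncing" data journey: each worker is a small
# function that takes a payload and returns a new one, the way discrete
# cloud functions hand work to each other. All names here are hypothetical.

def parse_form(payload):
    # Worker 1: normalize the raw form submission.
    return {"user": payload["user"].strip().lower(), "steps": int(payload["steps"])}

def enrich(payload):
    # Worker 2: add a derived field, a stand-in for heavier processing.
    return {**payload, "active": payload["steps"] >= 8000}

def respond(payload):
    # Worker 3: shape the final response sent back to the user.
    return {"user": payload["user"], "active": payload["active"]}

def pipeline(payload, workers=(parse_form, enrich, respond)):
    # In the cloud each hop might run on a different machine in a different
    # region; locally we just fold the payload through the workers in order.
    for worker in workers:
        payload = worker(payload)
    return payload

print(pipeline({"user": "  Aki ", "steps": "9500"}))
# → {'user': 'aki', 'active': True}
```

In a real deployment each step would be a separate function invocation, which is exactly why the round trips between them, and where they run geographically, start to matter.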
This is what we've seen as the rise of cloud computing and edge computing, and this whole approach of micro workloads is something cloud native has been able to give us, so it's been a real benefit to see it as a trend in the way we build software these days. Cloud native is thinking about this approach of many small workers, distributed all over the place, processing these workloads. It's a critical concept to have concretely in your mind: we're talking about a whole bunch of little workers, not one big machine that needs to scale up to the next big machine. And you'll see what's happening now with this approach is that databases themselves are becoming distributed, creating smaller and smaller discrete database units that can be orchestrated, with read masters and clones and replicas, where we're able to say there's one entry point, or multiple entry points, and then replication across the data sets. So the idea of cloud computing is definitely one that's here to stay, and it's important to understand that this is the direction computing is going, one we're not going to move away from anytime soon. Before we get too far into the problems that have arisen with that, let's define our other term, and that is data access. Imagine now we have a whole bunch of these tiny workloads occurring all over the world, and we need to be able to access the data from these workers. The use case I want to give you: imagine you've got a user in Tokyo who wants to see a prediction of their fitness level if they were to continue healthy eating, or if they were to continue doing a walk every day.
Well, this is a workload that requires a number of components. It requires accessing, potentially, my BigQuery data and the models I have loaded in there, trained on my BigQuery data set; it requires access to the user's data; and it requires compute to process those into a customized response for my user in Tokyo. But I don't necessarily have the ability to just grab the data from everywhere without a really discrete approach, and this has led to the rise of the data-connection glue jungle we're currently in. Every company might say: hey, this is how you access our data, and we're going to do some magic inside the box that gives you access to our tool. Or there's an API, where they say: okay, we've defined our own set of API primitives that allow you to make these RESTful requests to your data, and hopefully they're adhering to a well-known pattern you're able to leverage. And then in another case you have ORM-driven development, where somebody says: well, I'm going to make you define your entire data ecosystem in my tool, and then I'll give you these connections through more of an SDK kind of approach at the end. These are very common toolings that arise as people try to answer the data access problem. But the problem we experience with our user in Tokyo is that no one tool is able to handle all of those pieces. We need to touch an API, we need to touch an SDK, we need to touch an ORM, all to solve this one simple problem. And if I've got these individual workers distributed globally across the world, I need ways to access that data.
And so we see the rise of a very front-loaded workload for our developers, who have to figure out how to connect to all these different tools. The problem is we're shifting the expertise toward experts in writing glue code, as opposed to experts in writing domain logic and understanding the actual problem we're trying to solve. So what we're talking about today is really this approach of saying: hey, there's got to be a better way. I don't want to spend all of my time figuring out how to work with an ORM, or figuring out these different connection tools and writing all these different glue pieces together. I want to actually solve the workload: get data from this place and this place and that place, with as minimal overhead as possible, both cognitively for my developers and for the machine in round trips, so I can execute that payload and return it to my user, ideally from some edge location geographically near them. That's the problem we've seen arise. And before we talk about how we're trying to solve it, let's talk about the actual use case here. While there is definitely a problem where these microservices each have their own way of communicating with each other, there are still a lot of benefits to this pattern, so let's summarize what I've dumped on you all rather rapidly. (And if I need to slow down my speaking, maybe I should wind down my enthusiasm a little.) Let's summarize the benefits and challenges of this cloud native approach, then.
The benefits: different systems having shared access to data is a benefit, because we have all this data distributed across different systems, so it's available and highly redundant. There are clear cost outcomes and visibility: with cloud computing I can relatively easily predict what my cost will be as a function of my user workload. I can just add one, two, three new workers to my bill and handle the rise in workload. Redundancy is built in: if one tool goes down, or one of the replicas for one of my databases goes down, I can relatively quickly spin up a new one, or I've already got that data replicated in a number of geographic locations. And I have a wide selection of vendors, so I'm not limited to a specific compute mechanism or a specific runtime for everything. I have free range over programming languages as well as places to execute; there's an explosion of cloud providers all around the globe that allow us to run bespoke workloads, some maybe tuned for GDPR compliance, some more resilient for long-term storage. There are all kinds of providers solving unique edge cases for what we're trying to do. Put all those pieces together, and it's a very fault-tolerant pattern by design: I don't have to worry about running replicas of a giant machine, I can run replicas of small little workers that are much cheaper, more maintainable, and more observable in terms of cost. So those are the benefits. But the challenges, then: first, there's no standardization in APIs or SDKs.
Yes, there's the OpenAPI spec as a way we can kind of introspect; there's a rise in things like gRPC that allow us to request customized payloads; and the rise of TypeScript itself and typed back ends gives us some SDK-level or IDE-level integrated development experience inside the code editor, where I'm able to introspect what data payloads might be available through typed languages. But there's no standardization. We're hoping everybody is gradually migrating to a best practice at the same time, but there's no real single source of truth for what the state of my data is, or what's even available to me as a developer. Event orchestration becomes a massive challenge as well: I have all these different workers firing off different workloads at different times. This has led to a huge rise in queue-management software like SQS, other queue providers, Kafka, and things like that, which basically ask: how do we manage these distributed events firing off all around the world, which may need to happen in parallel or in serial, so that I don't have mismatched workloads? So there's a big problem with event orchestration as all this data moves around the system. And the last part: you've got all these different pieces needing to access data in different ways, and then you have the question of how to handle authentication and authorization between all these services. Imagine you just had 100 people with laptops scattered all across the world, all sitting there waiting to execute on a workload you sent them in an email.
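The ordering guarantee that queue tools provide can be shown in miniature with Python's standard-library queue. This is only a toy, single-process version of the idea behind SQS or Kafka; the event names are invented for illustration.

```python
# A toy version of the queueing idea behind tools like SQS or Kafka:
# producers drop events on a queue and a worker drains them in order,
# so serial workloads don't get mismatched. Stdlib only; event names
# are hypothetical.

import queue

events = queue.Queue()

# Producers scattered around the system enqueue events as they happen.
for evt in [("user.signup", 1), ("user.update", 1), ("user.delete", 1)]:
    events.put(evt)

processed = []
while not events.empty():
    name, user_id = events.get()
    # The FIFO queue guarantees we see signup before update before delete
    # for this user, even though the producers are independent.
    processed.append(name)
    events.task_done()

print(processed)  # → ['user.signup', 'user.update', 'user.delete']
```

Real brokers add durability, partitioning, and delivery guarantees on top of this, which is exactly the orchestration complexity being described.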
They all have their own individual credentials back to your databases and other services. If one of those end locations gets compromised, you've got a huge surface area for potential attacks; you have potentially insecure networks; you've got a real nightmare when it comes to authentication and authorization, because each tool needs to maintain knowledge of BigQuery and S3 and all the other tools you're using for that health use case from our user in Tokyo. Every password on every machine at all times is not something that, as a security expert, you want to have to be managing unnecessarily. So those are the benefits and also the challenges of cloud computing. How do we try to solve for this use case? Obviously I'm going to pitch you on GraphQL now, but think about the use case: a whole bunch of distributed computing, which is great, it's scalable, it's redundant, it's fault tolerant. I can understand the pricing by throwing 30 new workers on my Tokyo box to handle a spike in workload there; it's great for those benefits. But now, how do I handle this issue with security? How do I handle the issue where my developers are spending all their time having to learn to write to X new platform and Y new service we're trying to integrate, where the tool we want to bring in means I need to hire an expert in that specific language, or find some other way to bring that utility into my ecosystem? How do we solve that? And that's where I want to talk about how GraphQL solves this data access pattern.
To do that, we need to talk a little bit about what GraphQL is, and then get to the part where I usually take a lot of flak: that GraphQL really is evolving beyond just the front end. So let's talk about what GraphQL is, see what its use case has been, and then, based on how it works, talk about why it's now becoming a more ubiquitous data-layer language and framework for these environments. So let's have a look at GraphQL. I'm guessing not everybody here is totally new to GraphQL anymore; there was a phase when everybody was new to GraphQL and it was one more tool to teach. At a high level, GraphQL is ultimately three different pieces: there's a query language, there's an execution environment, and there's a specification. Working in reverse order on that: the specification basically says, here's what GraphQL is, here are the truths it needs to adhere to if you want to say you have a GraphQL-specification-compliant service, and here are the features and functionality it can do. That specification becomes really, really critical, because if you have a GraphQL-compliant tool, then everything out there is able to know what everybody else supports. A really big piece of that is what's called introspection. Now, you can turn this off in most services, and Hasura is no different in that regard, but introspection is a special query that allows you to send one big request to your server asking: tell me what I have access to from your data ecosystem. This is what's led to just fantastic tooling on the developer experience side, whether I'm in my coding environment or in some external service, even in a low-code environment.
I'm able to plug in a GraphQL API, and suddenly those tools are able to tell me what data is there. In a traditional REST environment, I get to guess, or hope they have a documentation server running where I can ask: hey, please tell me what information you have available. With a GraphQL-compliant server, kind of for free and out of the box, I'm able to just send this request and suddenly I see all the data that's available. That's a really, really critical piece of why GraphQL has become a great developer experience. But more critically, because it is specification compliant, it means that in server-to-server communication, one service is able to know everything about another service. And that's a really critical piece of the story about why GraphQL has grown so quickly and been such a great tool for the use case we're trying to solve today. Next is the runtime environment: the specification has no language requirements, it just tells you what a compliant server should do. And because the execution environment is defined as a methodology, an ideology rather than a language requirement, you're able to write GraphQL servers in so many languages. We have GraphQL for Ruby, for Go, for Node, for Rust, for so many different languages, so no matter what tool you're working with, there's a good chance a GraphQL implementation for it already exists, or you can simply create one yourself, because the specification is clear on what it should do, and you're able to write these data-layer connectors for how to work with it.
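The introspection idea can be illustrated with a toy server: the client sends one well-known query, and the server describes its own schema. Real GraphQL servers answer the spec-defined `__schema` query; this sketch just mimics that contract with a plain dict standing in for the schema, and the type names are hypothetical.

```python
# A toy illustration of introspection: one well-known query, answered from
# the server's own schema registry. Everything here is a simplified stand-in
# for the real spec-defined introspection system.

import json

INTROSPECTION_QUERY = "{ __schema { types { name fields { name } } } }"

# The server's schema registry (hypothetical types and fields).
SCHEMA = {
    "User": ["id", "name", "steps"],
    "Prediction": ["userId", "fitnessScore"],
}

def handle(query: str) -> str:
    # Any spec-compliant server recognizes the introspection query and
    # describes itself; here we just serialize the registry.
    if "__schema" in query:
        types = [{"name": t, "fields": [{"name": f} for f in fs]}
                 for t, fs in SCHEMA.items()]
        return json.dumps({"data": {"__schema": {"types": types}}})
    return json.dumps({"errors": [{"message": "unsupported in this sketch"}]})

result = json.loads(handle(INTROSPECTION_QUERY))
print([t["name"] for t in result["data"]["__schema"]["types"]])
# → ['User', 'Prediction']
```

The point is that the query itself is standardized, so any client, IDE, or other server can ask any compliant server "what do you have?" without prior knowledge.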
On the resolver side, GraphQL essentially says: we take in this query, and we map it to these things called resolvers, and those resolvers go out and fetch the data, make joins across edges of data, and return that back to us. On the query side, we now have a specification; you can see an example of that being written there in the slide's query. You have this GraphQL query that can express relational edges between your own data sources, without you having to write joins or other statements by hand; the resolvers inside your execution environment do that for you. And that's a really powerful piece of how GraphQL works. So when you put all these pieces together, you've got a language-agnostic specification that's able to execute in nearly any runtime, with an extremely understandable and readable query language that we can now use as the glue for all these services. Throw this into a tool like Hasura, or into any other GraphQL API tool, and you're able to join something like BigQuery with S3, with Postgres, with a REST API to Stripe, and connect all of these relationally inside the GraphQL resolvers. Now you're able to use one request, one query, to look at all the data you need to execute on that payload inside these workers. This is the big reason why GraphQL has been so powerful, and why GraphQL as a primitive for cloud environments becomes really important: it allows you to fetch all this data with one request, with one authentication source of truth. I can just say: I have access to these fields, and my GraphQL server is able to federate out all the permissions and access controls from one location.
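The resolver idea just described can be sketched in a few lines: each field has a resolver function, and nested fields let the engine join across data sources without the client writing joins. The two "backends" below are plain dicts standing in for, say, Postgres and BigQuery; all names and data are hypothetical.

```python
# A minimal resolver sketch: resolvers per field, with the engine walking
# the edge from one data source to another. Hypothetical data throughout.

USERS = {1: {"id": 1, "name": "Aki"}}          # stand-in for Postgres
PREDICTIONS = {1: {"fitnessScore": 0.87}}      # stand-in for BigQuery

RESOLVERS = {
    "user": lambda args: USERS[args["id"]],
    # The edge from user -> prediction: joined by the engine, not the client.
    "prediction": lambda user: PREDICTIONS[user["id"]],
}

def execute(user_id: int) -> dict:
    # Equivalent in spirit to the GraphQL query:
    #   { user(id: 1) { name prediction { fitnessScore } } }
    user = RESOLVERS["user"]({"id": user_id})
    return {"user": {"name": user["name"],
                     "prediction": RESOLVERS["prediction"](user)}}

print(execute(1))
# → {'user': {'name': 'Aki', 'prediction': {'fitnessScore': 0.87}}}
```

A real engine generalizes this: it parses the query, walks the selection set, and calls the matching resolver for each field, which is how one query can span several backends.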
Then it sends me back all that data, which I can process inside my workload. Instead of requesting all the different permissions and credentials to a whole bunch of different services, I make one request for all the data, connected the way I want to connect it, across the edges that matter to my data ecosystem, and I get it back through a single API. So let's look at the traditional approach, then. GraphQL as a service really arose as a front-end-specific language, and that helps explain why this next part I'm going to say is so controversial. The resolver-and-data-sources piece is important to understand, but GraphQL was designed to solve a data-fetching problem at Facebook. They were doing mobile development, and they wanted to customize the payload they were getting in mobile environments, because you'd have friends, and friends' profiles, and friends' profiles' friends' profiles, and just massive, massive amounts of data. And they were saying: for most of these views, I just don't need most of this data; I want a terse view of the data I'm needing. So GraphQL was born as a way to declaratively say: hey, I only need a couple of these columns from my database in this view, and maybe in that view over there I need a couple of other columns. It grew out of this idea that, from a rather simple data set, we can simply fetch this data. But then people began to think about what GraphQL could do, and how this approach of getting back specific pieces of data, which my server gives me a typed guarantee will exist, could apply elsewhere. So we started to see an explosion of where GraphQL could live and what it could connect to.
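That original Facebook motivation can be shown in miniature: the client names exactly the fields it wants, and the server projects the record down to that shape instead of shipping the whole object graph. The record below is hypothetical, and the selection format is a simplification of a real GraphQL selection set.

```python
# Field selection in miniature: project a record down to only the fields
# the client asked for, recursively for nested objects. Hypothetical data.

def project(record: dict, selection: dict) -> dict:
    # selection maps field name -> True (scalar) or a nested selection (object).
    out = {}
    for field, sub in selection.items():
        value = record[field]
        out[field] = project(value, sub) if isinstance(sub, dict) else value
    return out

profile = {
    "id": 42,
    "name": "Aki",
    "bio": "long text we don't need on this view...",
    "friends_count": 120,
    "settings": {"theme": "dark", "locale": "ja-JP"},
}

# Roughly what `{ name settings { locale } }` asks for in GraphQL.
print(project(profile, {"name": True, "settings": {"locale": True}}))
# → {'name': 'Aki', 'settings': {'locale': 'ja-JP'}}
```

On a mobile connection the difference between the full profile and the two-field projection is exactly the payload savings being described.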
So no longer is it really used simply as a front-end tool against one data source to grab a couple of fields. It's used as a way to write relational queries across REST APIs, other GraphQL APIs, and databases, and to orchestrate all of that data workload into a single, easy-to-write query language. That's where GraphQL has now developed. The old pattern was: GraphQL lives on my front end, and then I do all my, you know, big-kid work on my back end. Now we have a new pattern where we say: actually, my front end will just interact with my GraphQL server, and that server is where I orchestrate everything. My server is where I define the edges between BigQuery and Postgres, or BigQuery and my Microsoft SQL Server, or BigQuery and my weather API, or BigQuery and my user health predictions. That's where I do all of my orchestration, where I put all of my credentials and my access patterns, everything inside of there. Now I'm able to distribute this single API to my front end, which can read, write, update, and delete everything through this one connection to my server. That is the entry point to my entire data ecosystem. And the entry point, in this case, is no longer just about a single front end: my front end could just as well be another worker, or another service anywhere else. It's not just a front-end game anymore. Any service can go out, request, and fetch data from all over through this single API that all of my developers know how to write. They can introspect to see what data is available, based on the context of the permissions they have access to, and get all the data they need in a very minimal payload, with a minimal number of round trips back to my server.
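The single-entry-point pattern can be sketched as a tiny gateway: every consumer, whether a front end, a worker, or another service, hits one place that checks permissions centrally and routes each top-level field to whichever backend owns it. The backends, fields, and role names below are all invented for illustration.

```python
# A sketch of the single-entry-point idea: centralized permissions plus
# per-field routing to the owning backend. All names are hypothetical.

BACKENDS = {
    "orders": lambda: [{"id": 1, "total": 30}],   # stand-in for SQL Server
    "weather": lambda: {"tokyo": "sunny"},        # stand-in for a REST API
}

# Which roles may read which top-level fields.
PERMISSIONS = {"analyst": {"orders"}, "public": {"weather"}}

def gateway(role: str, fields: list) -> dict:
    allowed = PERMISSIONS.get(role, set())
    # Access control lives here, once, instead of on every worker: a field
    # the role cannot see simply comes back as None.
    return {f: (BACKENDS[f]() if f in allowed else None) for f in fields}

print(gateway("analyst", ["orders", "weather"]))
# → {'orders': [{'id': 1, 'total': 30}], 'weather': None}
```

This is the shape of the argument for centralizing credentials: the individual workers never hold backend passwords at all, they only talk to the gateway.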
And this is why we've seen such an explosion in GraphQL solving this pain point. If we apply GraphQL to the data access challenge, where we had the jungle of APIs going everywhere, trying to access each other, we can now see that what we have is a centralized GraphQL access point that says: this database needs to talk to this client, or this client needs to talk to this REST API. We see a more circular pattern where everything communicates through this Grand Central Station of my data, which is my GraphQL server, my engine. And that's the critical piece of how we solve this data access problem: access controls go into a centralized location, my different data sources go into a centralized location, and I've defined the relationships and the edges between them. With all those pieces together, I'm able to expose this through an introspectable API, so that my developers, whether they're working on my warehousing, or my logistics, or my storefront, or whatever patterns and tools they're working with, know how to read and introspect GraphQL and ask: what kind of data do I have access to here? Then they can make that query, and they don't have to think: how do I access BigQuery data again? What are the REST endpoints for this thing? Where do we have all our internal documentation stacked up to understand these access patterns? They just write a GraphQL query and get the data they need. Your DevOps has done all of the glue in the back end. And if you're using a tool like Hasura, and there are other tools out there solving this problem as well, it's simple to define these edges and relationships.
Then you're able to say: hey, we've got all this data, it's aware of and connected to each other, and now all of my developers are able to move quickly. I think very specifically of one use case, a customer that started off with an urgent need to swap out the data layer for a logistics platform. They quickly realized that the Hasura pattern was so powerful that they swapped out warehousing, they swapped out their e-commerce front end; they swapped out all these different pieces as modular units that then allowed relational queries across all of them. Not only did that make their business a lot leaner, it allowed them to reuse their developer units. A developer who understood how to write GraphQL and build out a logistics platform could take all that data-access knowledge over to warehousing, or over to the e-commerce team, so they were able to move developers around a lot more modularly, which is very powerful. So, coming up to the last slide here, I see the first question has come in, so I'll do the last slide and then we'll take the questions. If you look at the data ecosystem according to Hasura, you've got this idea that Hasura is the centralized access layer. You've got API generation, where we say: okay, give us your data sources, we'll read your schemas, and we'll generate those data-access mutations and queries for you, with real-time data and everything. With custom business logic you can add in extra workers as well, so you can say: hey, this worker needs to live inside of Hasura, because I need to process this data in some special way that's unique to my business.
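As a rough sketch of that API-generation piece, following the general shape of Hasura's generated schemas (the `shipment` table and its columns are invented for illustration): a single table yields query and mutation root fields with no hand-written resolvers.

```graphql
# Given a Postgres table "shipment", Hasura-style generation
# produces filterable queries and update mutations automatically.
query ActiveShipments {
  shipment(where: { status: { _eq: "in_transit" } }) {
    id
    warehouse_id
    eta
  }
}

mutation MarkDelivered {
  update_shipment(
    where: { id: { _eq: 42 } }
    _set: { status: "delivered" }
  ) {
    affected_rows
  }
}
```

Permissions still apply: the same query run under a different role can see fewer rows or columns, because access control lives in the central layer rather than in each client.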
Authorization and everything is built in, but the idea is that everything lives inside one access point, and this is how GraphQL is solving that data-access problem: you federate out all your services, connect everything into one location, and then distribute that single API that can speak to hundreds of these different workers, hundreds of these different compute resources and databases you have distributed all across the globe. It's a single location (which you could itself federate) with a single access pattern for querying and returning that data. And that's how GraphQL is solving the data-access pattern for cloud compute. So, Hasura is a great tool; I'm going to give the little pitch here. It's a great place to start, and we're going to drop links for you: try our free tier, or the open source. It is open source, which is why I'm allowed, with a very free conscience, to pitch it. The easy way is our cloud version, because it's really simple, we do all the hosting for you, and it has a free tier; or if you want to do it yourself, just take our open-source product and run it yourself. It's a great way to get started with GraphQL, and all the orchestration I talked about today is totally doable in our ecosystem, with built-in caching and built-in access controls. One of the stones people throw at the GraphQL bus is: well, great, now I get to write a bunch of resolvers by hand. That's one of the pieces Hasura specifically tries to solve. In a lot of cases we can introspect your source, whether it's a database schema or a REST API, know what data is actually available there, and give you GraphQL typings for it. That's part of the pitch, but again, it's open source, so feel free to take it if you want to just try it out.
I highly recommend just trying out the free tier on cloud. So that's it for my slides, and we have a question: a comparison to data virtualization products. I think I'll need a little more context around what you mean by data virtualization products, just so I can answer a bit more clearly, so if you're able to provide a little more context... Bill has been added to the stage. All right, I don't hear you just yet. It sounds like you were added to the stage, but I don't hear you; I think you're muted. We'll give it a minute to see if you can connect. So, I think we're probably not going to get him on for the audio. TIBCO, then, and I'm not a TIBCO user, I'm probably even saying the name wrong, is another tool for creating virtual data-access patterns across multiple data sources. It's probably a very similar approach to what we're talking about: centralizing data access through a unified place and being able to distribute that unified access point across cloud compute resources. Where I would say it differs is in the rise of GraphQL: things like its introspection, the fact that it's an open spec governed under the Linux Foundation, and the developer tooling around it, which is so good that anybody can get up and started through free resources, whether from The Guild or any other part of the developer ecosystem. GraphQL has a strong grassroots approach that has gotten a ton of adoption across many, many enterprises and governments because it's just really easy to work with. I can't speak to the DX for TIBCO.
But I can say the pattern is going to be very, very similar, and it would be an interesting webinar to compare something like TIBCO to GraphQL specifically, but that's probably the best way I can answer the question. I would just say there's something to be said for GraphQL's dominance everywhere from startups to scale-ups to enterprise, mostly because the DX is just really, really good. I hope that answers the question. Oh, it looks like you've unmuted. "Yeah, actually it's TIBCO, but right, it sounds like the same idea. The idea is that you leave the data where it is: instead of moving data, you leave it where it is and access it through a common plane. So that's kind of what you're doing here, right?" Very much so. Do you know what the actual query language, or the way to query the data, is with TIBCO? "They support a lot, right, so they support all kinds of clients: the standards like JDBC, ODBC, REST, JSON. Which is really what they were trying to do: you could get at the data using whatever client you wanted, and it would translate, so you could actually access REST back ends through JDBC front ends, and vice versa." Is it kind of an SDK approach then, creating a sort of connector kit to the data, or what's the approach it takes? "Yeah, I hate to call it a gateway, but it's more of a gateway. You have all your back ends, you map to those and tie them up to your front end, and your front end is provided an API which is inspectable. But TIBCO didn't build it; I think they bought it. It was a Cisco or Citrix or somebody, maybe an HP product, at one point. Not a bad product."
"I've gotten some training on it, but I never actually implemented one. So, just to place the pattern: is the interface to GraphQL all REST/JSON, or are there multiple client interfaces?" The access point to Hasura itself, and I don't want to get too vendor-heavy on the call because this is more about GraphQL specifically, is primarily GraphQL, though we do have the ability to do what we call RESTified endpoints: you can take a query or a mutation and create a REST endpoint for it, with parameters and everything. But we try to keep pretty strict to modern web tech in terms of what's actually an access point, so we're not trying to implement some new vendor logic. I'm not sure what TIBCO's communication to the client is; it looks more like some sort of SDK-level integration for front ends. Whereas with Hasura's approach, and why we're talking about GraphQL, we just said GraphQL as a language is lean enough and clean enough to expressly handle almost every approach. One of the things it supports, for example, is directives that let you create special behaviors around how you want to format data, or what the resolution pattern should actually be: maybe this piece of data should come a little bit later; resolve this data first, and that other piece later on. Maturity-wise, obviously, it's still growing and there's a lot of development happening around it, but it's the one Hasura has chosen because it really provides strong type safety from the back end to the front end. I'll have to look into TIBCO a bit more; it looks really interesting.
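On the directives point: `@include` and `@skip` are built into the GraphQL spec, and the draft `@defer` directive (supported by some servers, still being standardized) is one way to say "resolve this part later". The field names below are invented:

```graphql
# @include/@skip are built-in spec directives; @defer is a draft
# proposal implemented by some servers. Field names are invented.
query Dashboard($showStats: Boolean!) {
  user {
    name
    stats @include(if: $showStats) {
      totalOrders
    }
    # Ship the main payload first; on servers that support @defer,
    # this fragment arrives in a later chunk of the response.
    ... @defer {
      recommendations {
        title
      }
    }
  }
}
```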
"It's a little more, you know, legacy, on-prem, than this cloud-native stuff. It's the older product. But yeah." And that's where Hasura, and GraphQL specifically, is trying to fit the bill: the velocity at which new tools and new services are coming out. I saw somebody doing GraphQL on embedded devices a little while ago, and it's like, okay, that's cool. There are a lot of cases where, if you create an open tech standard and embrace it as the way of accessing things, it really does enable a velocity of developer experience that's hard to compete with, and a lot of these companies are starting to realize that. I've had so many customers, people in the healthcare industry specifically, where SPARQL is sort of the reigning champion when it comes to graph-like data. Many of them migrated to GraphQL because they said: all the data sources we're getting from people are always relational, because it's the easiest way to capture, format, and transport data; it's cheap, portable, and ubiquitous. We need to be able to access this data in a graph-like way. So GraphQL, as a way to give you graphs on top of relational data, was actually the 90% use case that most of these companies needed. They were like: we know how to write SPARQL, but we have to spend all of our time doing ETL workloads to get relational data into a format SPARQL can query. And SPARQL is super powerful; there are problems you can't solve without it. But for a lot of cases GraphQL is just really, really easy to work with, and so a lot of people have adopted the pattern that relational plus GraphQL gives you everything you need. Thanks for the question, and for giving us a look at TIBCO; for sure they've spent a lot of time solving this pattern too, so there's always a chance to inspire each other.
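A minimal sketch of that relational-plus-GraphQL idea (all table and field names invented): rows linked by foreign keys read naturally as a graph traversal, with no ETL into a triple store.

```graphql
# Two relational tables, "patient" and "encounter", joined by a
# foreign key, traversed as a graph in a single GraphQL query.
# The nesting follows the relationships, not the storage layout.
query PatientHistory {
  patient(where: { id: { _eq: 101 } }) {
    name
    encounters(order_by: { date: desc }) {
      date
      diagnosis
      physician {
        name
      }
    }
  }
}
```

The data never leaves its relational home; the graph shape exists only in the query layer, which is the 90% case described above.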
Any other questions? We'll give it a five count. Three. Two. One. All right. That being said, we're two minutes from the end, so I'm going to throw it back over to the Linux Foundation. Thanks for having me today. Thank you so much, Jesse, for your time today, and thank you everyone for joining us. As a reminder, this recording will be on the Linux Foundation's YouTube page later today. We hope you join us for future webinars. Have a wonderful day.