Hey, guys. Thanks for coming. My name is Vincent, and today I'm going to talk about standardized subgraphs, or how we can make it so much easier for you to build your dApps.

If you've ever built a dApp, or have ever considered building one, one of the first questions that comes to mind is probably: where do you find the data? Let me show you an example that Component Finance has been working on. They're trying to build a predictive model that lets people look at different assets, like ETH in this case, and see how the yield will evolve in the future. To do that, they need a lot of historical data on interest rates and total deposits from different lending protocols across different networks. This slide is a small example illustrating where you might go to find all of that data today. As you can see, there are many different data sources you need to hit, depending on which lending protocol you want to pull data from. If you're lucky, there's already a subgraph you can use that has all of that data. If not, you might have to work with the JSON-RPC API and transform and aggregate all of that data yourself. And for each of those sources, you probably also need a data adapter to normalize the data so that your application can use it.

The point I want to make here is that the Web3 data space is very fragmented. There are dozens and dozens of different data sources, each focusing on a different part or kind of data. If you're working with raw data, it takes a lot of effort to aggregate and transform it into something you can use; metrics like TVL and revenue all require a lot of work. Historical data is often difficult to find: you probably need an archive node, and those are often hard to access. This is actually a problem Messari ran into when we first got into the on-chain data space. We're a big data provider, we were trying to get a lot of data, and it was difficult for us to find it.
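To make the "data adapter" problem concrete, here is a minimal TypeScript sketch. The source shapes (`SourceA`, `SourceB`) and field names are hypothetical, invented for illustration: two lending-protocol APIs return the same interest-rate history in different shapes and units, and the dApp has to normalize each one separately.

```typescript
// Common shape the dApp wants to work with.
interface RatePoint {
  timestamp: number;     // unix seconds
  borrowRateAPR: number; // fraction, e.g. 0.03 = 3%
}

// Hypothetical source A: rates as percentages, ISO-8601 date strings.
type SourceA = { date: string; borrowRatePct: number };

// Hypothetical source B: rates as per-second fractions, unix milliseconds.
type SourceB = { ts: number; ratePerSecond: number };

const SECONDS_PER_YEAR = 60 * 60 * 24 * 365;

// One adapter per source, each doing its own unit and format conversion.
function adaptSourceA(rows: SourceA[]): RatePoint[] {
  return rows.map((r) => ({
    timestamp: Math.floor(Date.parse(r.date) / 1000),
    borrowRateAPR: r.borrowRatePct / 100,
  }));
}

function adaptSourceB(rows: SourceB[]): RatePoint[] {
  return rows.map((r) => ({
    timestamp: Math.floor(r.ts / 1000),
    borrowRateAPR: r.ratePerSecond * SECONDS_PER_YEAR,
  }));
}
```

Multiply this by every protocol and every network your dApp touches, and the adapter code alone becomes a real maintenance burden; that is the fragmentation being described.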
So we decided to collaborate with The Graph to standardize subgraphs and solve this problem for everyone in the space. Specifically, the way we standardize these subgraphs is that we look at all of the protocols in a specific category, say lending protocols, pick out the commonalities and differences among them, and come up with a unified data model that applies to all of them. We then turn that into a common subgraph schema, against which we build all of our subgraphs. So for each lending protocol we integrate, we use that standardized schema, which means you can use the same query and the same data adapter to fetch all of that data. And now, instead of going to all of these different data sources, you just need to hit one single decentralized data source, The Graph, and you'll be able to find all of the data you need, including the historical data.

Here's an example product; well, actually, it's a product that Messari has built on top of all of the standardized subgraphs we have built. There are many more metrics we've integrated than what we're showing; they're not all on this slide, but you can check them out using the URL there.

To give you a sense of where we are today on the subgraph standardization work: we have indexed data from 60 different protocols across 20 different networks, surfacing over 500 unique metrics across all of the different protocol types, and we have indexed over a billion data points that you can use in your dApp directly, so you don't have to do that work yourself. The way we see the future evolving is this: today you spend a lot of effort on data plumbing and data aggregation and very little on application logic, but with the subgraphs we have built, that work is already done for you.
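The "same query, same adapter" point can be sketched as follows. This is an illustrative example, not the exact schema: the entity and field names (`marketDailySnapshots`, `totalDepositBalanceUSD`, `rates`) are modeled loosely on the standardized lending schema, and the two endpoint URLs are hypothetical subgraph deployments. The key idea is that one query string and one response handler serve every protocol.

```typescript
// One GraphQL query reused unchanged across every standardized
// lending subgraph, because they all share one schema.
const DAILY_MARKET_QUERY = `
  {
    marketDailySnapshots(first: 30, orderBy: timestamp, orderDirection: desc) {
      timestamp
      totalDepositBalanceUSD
      rates {
        rate
        side
      }
    }
  }`;

// Hypothetical endpoints for two lending protocols; the same query
// and the same response handling work against both.
const ENDPOINTS = [
  "https://api.thegraph.com/subgraphs/name/messari/aave-v3-ethereum",
  "https://api.thegraph.com/subgraphs/name/messari/compound-v3-ethereum",
];

// Generic fetch helper: POST the query as JSON, return the `data` field.
async function fetchSnapshots(endpoint: string): Promise<unknown> {
  const res = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query: DAILY_MARKET_QUERY }),
  });
  return (await res.json()).data;
}
```

With a per-protocol, per-source setup, each entry in `ENDPOINTS` would instead need its own query and its own adapter; here the protocol-specific code collapses to a URL.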
You can just use them, so you can focus more on your application logic as a dApp developer. All of the work we do is open source and public; here's the GitHub repo for all of our standardized subgraphs. If you're a dApp builder, a data scientist, a data analyst, or you work with a protocol team, we're more than happy to integrate your protocol into our standard, so feel free to reach out to me after the talk. And if you're a subgraph developer or a Rust developer who's interested in the data space, feel free to scan the QR code there; it will take you to our careers page, and we are always hiring. That's my talk. Thank you for coming.

Audience: Is there an assumption of trust when you use the aggregated subgraph, compared to, say, the Aave subgraph or the Compound subgraph?

Vincent: All of our subgraph source code is open source; that was the GitHub repo we showed. We describe the methodology behind our metric computations in detail in the repo, and of course you can go into the source code to see exactly how the different numbers are computed. The Graph is also working on something called verifiable queries, which will put that trust into the data it indexes.

Audience: What's the approximate delay before the data is available? I mean, if we want to do something in real time.

Vincent: It's real time; it's indexed block by block.

Audience: OK, but there's got to be some delay.

Vincent: Around 15 seconds to two minutes, depending on the chain.

Audience: Perfect, thanks.