Over to Steph for our data update.

Hi, just a short update on the work we've been doing on data. Our biggest goal for Q3 is to make Filecoin chain data fully queryable. What does that mean? Today, historical chain data lives in an S3 bucket called fil-archive. It's stored as CSVs, and those CSVs are transformed into Parquet files which can then be queried with the Athena query engine. However, this usually lags by one to two weeks, and the current data lives in TimescaleDB. That makes it really hard to query data spanning, say, today back to three months ago, let alone stitch it together with data going all the way back to genesis. Obviously, we want to make it easy to do analysis not just over the last three months, but from now all the way back to genesis.

So how are we going to do that? We are going to unify the historical chain data and the current data into one data warehouse, and we've chosen BigQuery because it has the least operational overhead. You can read the proposal in Notion if you want more background and context on why we ended up choosing BigQuery.

Progress so far: we now have a BigQuery project with the historical chain data from the S3 bucket. We simply ingested those CSVs into BigQuery. That won't be the longer-term solution; we did it this way because we wanted to start data modeling with dbt, which David has been working on, so that transformations will be easier. Transformations are version controlled and happen in the same data store the transformed data lives in, so it will be much easier for us to model and massage the data however we need, and to iteratively improve the data model based on feedback as we learn more about the needs of our users.

If you want to test out the BigQuery chain data, you can do so in Sisense or Grafana by choosing the temp_bq data source. The screenshot down here shows how you would select it as a data source in Grafana, and you can do the same in Periscope as well.

Another exciting piece of news is that we now have our data infrastructure deployed as code, along with Argo Workflows. What this means is that we can lean into using the containers that have already been created by the rest of the PL network to build our data pipelines, instead of writing language bindings, which is what was done previously when we had to write Python bindings for Lily. More code means more maintenance, and for a very small team of two, we want to keep our pipelines as lean as possible and our tech stack as thin as possible.

So what's next? We will be reprocessing the chain pipeline to address issues in the existing CSVs. We'll also be doing more data modeling; that's happening tomorrow at 11 a.m. with myself and David, so if you want to join, feel free to ping us on Slack. And we will be setting up existing production pipelines to have BigQuery as a destination. A nice side effect of moving from a relational database to BigQuery is that, because BigQuery is a data warehouse, you can use it to store other business-relevant data as well. This was another pain point: people would come to me and ask, "Hey, why can't I do exploratory data analysis with, say, data from GitHub and try to find correlations with the chain data?" That was really difficult before, and by migrating all of the data into BigQuery, we hope to address it.
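To make that concrete, here's a minimal sketch of the kind of cross-dataset query this unlocks once both datasets sit in one BigQuery project, using the google-cloud-bigquery Python client. All project, dataset, table, and column names here are placeholders made up for illustration, not our actual schema.

```python
# Sketch: exploratory analysis joining chain data with GitHub data,
# now that both can live in the same BigQuery project.
# All project/dataset/table/column names are hypothetical placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-data-project")  # hypothetical project id

query = """
    SELECT
        c.day,
        c.message_count,
        g.commit_count
    FROM chain.daily_message_counts AS c   -- hypothetical chain-data table
    JOIN github.daily_commit_counts AS g   -- hypothetical GitHub-data table
        USING (day)
    ORDER BY c.day
"""

# Run the query and pull the result into a pandas DataFrame for exploration.
df = client.query(query).to_dataframe()
print(df.corr(numeric_only=True))  # quick look at pairwise correlations
```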
We will also be adding data validation and testing with dbt. If you're interested in any of the work we're doing, reach out to us in #fil-sentinel or ping us at #team-data on the Filecoin Slack. If you would like to learn more about how to use BigQuery with Grafana or Periscope, just let us know and we'll try to get you bootstrapped. That's it, thank you.
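(As a pointer on the dbt validation mentioned above: dbt tests compile down to SQL assertions. Below is a rough sketch of what a couple of such checks amount to, expressed directly through the BigQuery Python client rather than through dbt itself; the table and column names are placeholders, not our real schema.)

```python
# Rough equivalent of dbt not-null and uniqueness tests, expressed directly
# against BigQuery. In practice dbt generates and runs SQL like this for us.
# Table and column names are hypothetical placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-data-project")  # hypothetical project id

checks = {
    "null_cids": """
        SELECT COUNT(*) AS failures
        FROM chain.messages
        WHERE cid IS NULL
    """,
    "duplicate_cids": """
        SELECT COUNT(*) AS failures FROM (
            SELECT cid FROM chain.messages
            GROUP BY cid HAVING COUNT(*) > 1
        )
    """,
}

# A check passes when its query reports zero failing rows.
for name, sql in checks.items():
    failures = next(iter(client.query(sql).result())).failures
    status = "PASS" if failures == 0 else f"FAIL ({failures} rows)"
    print(f"{name}: {status}")
```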