So, this is a deep dive on the infra cost optimizations we've been doing. Our goals are to reduce infrastructure spend across all of Protocol Labs; clarify the cost per team by deprecating shared accounts and establishing regular cost reporting; ensure maximum efficiency through right-sizing and removing unused infra; and establish cost baselines and provide best practices and tooling to keep costs low moving forward.

We established a working group that meets weekly, and they've been working very hard for months now to cut costs. On the AWS side, we've cut costs by 60%; on Equinix, by 22%.

Some of the highlights: we've been deprecating a really old account called the Filecoin Staging account. It's existed for over five years and was kind of a catch-all for everything to do with Filecoin. I'm not sure if anyone even remembers go-filecoin, but we found a couple of nodes still running from there, so that's pretty old school. We established a migration plan for the production services running there, got rid of all the known unused infra, enforced a tagging schema, and performed a "scream test" that cleared out the orphaned and unclaimed infrastructure, with some really good results. So thanks to everyone involved in that.

On the Equinix side, the focus was on the gateways. There were actually some bandwidth optimizations that are good to share with the whole network: the migration to Kubo and to the resource manager had significant savings on our egress costs, and then we moved some of the infra to a reservation.

The last bit I wanted to share is the adoption of Cloud Custodian. It's a Python-based tool that was really helpful for all of this. It helps with reporting and enforcing tags, and we were able to schedule our entire scream test with it. For any team looking to cut their cloud costs, I'd really suggest that tool.
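To give a flavor of what a Cloud Custodian policy looks like, here is a minimal sketch of tag enforcement combined with a scheduled scream test. This is an illustrative example, not our actual policy: the `Owner` tag key, the policy names, and the three-day grace period are all assumptions.

```yaml
policies:
  # Find EC2 instances missing an Owner tag and mark them to be
  # stopped in 3 days -- anyone who "screams" can claim and tag them.
  - name: ec2-missing-owner-tag
    resource: aws.ec2
    filters:
      - "tag:Owner": absent
    actions:
      - type: mark-for-op
        op: stop
        days: 3

  # Stop the instances whose grace period has expired and that are
  # still untagged (the scream test itself).
  - name: ec2-untagged-stop
    resource: aws.ec2
    filters:
      - type: marked-for-op
        op: stop
    actions:
      - stop
```

You would run this with `custodian run -s output policy.yml`, and `custodian report` produces the CSV output that covers the reporting side.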
We've now reached a baseline, I think, where the costs are clear for each team. So some of the next steps, now that we have that data, are to establish long-term savings plans, to establish best practices that keep our costs down on an ongoing basis, and finally to provide best practices and tooling to keep costs low for other companies in the PLN as well. I think it's really important that we help the whole ecosystem cut costs and do as much as possible to share the work that we're doing. So thanks to all those who helped out, and thanks especially to the working group, because that's been a lot of effort, and we hope to continue.