Hey, everyone. I'm going to be showing some of the work we've done over the last couple of months around improving the snapshot service that, I would say, most people in the Filecoin community use, particularly for mainnet. This has been going on for a while. It was originally spearheaded by Reba, who's done an excellent job maintaining the current snapshot solution, which I personally use almost weekly in my own work. Now we're taking over stewardship of the service, and we've recently launched a new version of it.

Snapshots are a way to join the Filecoin network. You can read about this in the documentation, but basically, a snapshot is a small segment of the Filecoin chain that contains enough state information to allow a node to participate in Filecoin's consensus mechanism. Right now, the Filecoin chain is upwards of 16 terabytes in size if you were to compute it from genesis up to the present. Replaying it takes roughly 36 days of compute time for every year's worth of chain data that's been produced. We're now coming up on two years, so that's 60 to 70 days of compute time to reprocess the chain. That's not realistic for users who are trying to join the network, or who are trying to recover from a data disaster if they lose their data store.

So chain snapshots let users get into the network relatively quickly. The work I've done is to operationalize this so that we have better guarantees around the availability of the snapshots, and to put alerting and monitoring in place so we can understand how the snapshots are being produced and how reliably we're able to keep producing them on a regular basis. Today we announced that these are in a soft-launch phase, providing snapshots for both the mainnet and calibration networks. So here's a quick overview of what this looks like.
Essentially, we produce snapshots every two hours. We do this through a cron job that creates jobs; those jobs go off and talk to a set of Lotus nodes that we operate, and perform an export of chain data. We take that chain data and stream it up into S3 at the moment. The data is then made available to users, who can find the latest snapshot by visiting one of these URLs. The URLs redirect users to the actual snapshot file. So in this case, you can see here, if we make a request to the latest calibration snapshot URL, we get back a redirect to the actual CAR file itself. And if we do a curl request that follows the redirect and downloads the attachment, we download the CAR file under its own name.

One of the improvements we made came from user feedback on the old snapshot system. It used this concept of a "latest" object that was itself the snapshot, which you can see right here: if we make this request for the latest object, we actually get back the snapshot contents directly. In certain cases, if users had slow download speeds, this could end up in corrupted downloads, because the latest object would change out from underneath them. So instead of having "latest" be the snapshot itself, we redirect users to a static file that represents the snapshot, which can then be downloaded safely. Lotus handles this automatically: if you put these latest URLs into your Lotus nodes, Lotus will follow the redirects and download the file itself.

We also publish checksums, so users who request a snapshot can also pull its checksum and verify the integrity of the file they downloaded. And, as I said, we now provide snapshots for the calibration network, which is a new thing.
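The client-side flow described above can be sketched with curl. The exact endpoint URLs aren't quoted in the talk, so the URL below is a placeholder, and the checksum-file naming is an assumption; the checksum verification step at the end is demonstrated offline with a locally created file.

```shell
#!/usr/bin/env sh
# Sketch of the download flow, under assumed URLs:
#
#   curl -L -O -J "$LATEST_SNAPSHOT_URL"              # -L follows the redirect to the
#                                                     # static CAR file; -O -J saves it
#                                                     # under the server-suggested name
#   curl -L -o snapshot.sha256sum "$CHECKSUM_URL"     # checksum path is an assumption
#
# Lotus can also consume the latest URL directly, e.g.:
#   lotus daemon --import-snapshot "$LATEST_SNAPSHOT_URL"
#
# Offline demonstration of the integrity check itself:
printf 'fake snapshot bytes' > snapshot.car
sha256sum snapshot.car > snapshot.sha256sum

# -c re-hashes the file and compares against the recorded digest.
sha256sum -c snapshot.sha256sum && echo "checksum OK"
```

Because the "latest" URL now redirects to an immutable, statically named file, a slow download can no longer be corrupted by the object changing mid-transfer.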
The software we have can run against any network, so we could even provide snapshots for the butterfly network if we wanted to. But given the constant resets, and that it's primarily a development network, that's not something we're looking to provide. That network is usually short enough that people can just sync up relatively quickly.

If you want more information about this, there's an announcement post in the Filecoin Lotus Help channel at the moment. It links to all of this information, and to the public Notion page. We also have a pull request open against the Lotus documentation to add a section covering this new service. This is a soft launch, and after it we'll be looking to deprecate the existing snapshots in the Filecoin chain snapshots fallback bucket. Primarily, this is to give Reba his time back and let him go off and do better things. So hopefully we'll be able to take this over and provide a good service to the community.

In terms of improvements, like I said, one of the big things we were looking for is monitoring. We're still working on this, but at the moment we have dashboards to keep track of the operation of these systems. For example, this is the mainnet service, where we can see when the last job started, when the next job is scheduled, and how long it's been since the last job ran. These are the jobs themselves in operation, and this time span shows how long each job took to execute. We can see the same kind of information in these graphs showing how long they've been running. Along with that, we can see the nodes these jobs operate against. The system is designed to operate against three or more nodes, and it round-robins between them as much as it can, to let each node recover after producing a snapshot and to reduce the load on any single node.
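The scheduling idea above can be sketched as a cron entry plus a round-robin picker. Everything here is illustrative, not the actual deployment: the node names, the state-file path, and the commented export command are all assumptions.

```shell
#!/usr/bin/env sh
# Hypothetical crontab entry firing every two hours:
#   0 */2 * * * /usr/local/bin/snapshot-export.sh
#
# Each run picks the next Lotus node in a round-robin, so no single node
# serves back-to-back exports and each gets time to recover.
NODES="node-0 node-1 node-2"                      # the three (or more) Lotus nodes
STATE_FILE="${STATE_FILE:-/tmp/snapshot-rr-index}"

count=$(echo $NODES | wc -w)
last=$(cat "$STATE_FILE" 2>/dev/null || echo 0)   # default to 0 on first run
next=$(( (last + 1) % count ))
echo "$next" > "$STATE_FILE"                      # persist index for the next run

# Select the node at index $next (cut fields are 1-based).
node=$(echo $NODES | cut -d' ' -f$((next + 1)))
echo "exporting snapshot via $node"
# e.g. ssh "$node" lotus chain export ... | aws s3 cp - s3://...   # illustrative
```

A shared state file is the simplest way to make successive cron runs rotate; a real deployment would also need to skip nodes that are unhealthy or still catching up.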
So far, we've had pretty good uptime in terms of our operations. Everything's relatively nominal here, and it's been running for about two weeks in its current deployment. Thank you.