So for my project, the one that I managed to finish, or almost finish, was the miner lookup. This chart uses data from sentinel-locations and Lily. For those who don't know, Lily is a tool that follows the Filecoin chain and extracts useful data such as miner info, messages, actors, state, and so on, and writes it into a Postgres database with the TimescaleDB extension. This is a chart that I made for somebody at Protocol Labs a few months ago, and you can see the geographic distribution of miners across the world. Surprisingly, a lot of the storage providers are in China. However, we only had geolocation for miners who self-reported their IP addresses in the data scraped by Lily. In other words, we depended on the goodwill of miners to tell us what their IP addresses are, and from those IP addresses we would then use a GeoIP lookup to figure out where in the world they are located.

The work in progress is available at the link. You can clone the repo, check out the branch, create a Postgres database with the TimescaleDB extension, insert some mainnet data from the Sentinel archiver into the miner info table, and then run sentinel-locations to try it out. What it does is add the geolocation of miners to the Sentinel database using only their peer ID. This is accomplished using the Kademlia DHT and its crawler package. Normally you would also need the state tree, together with the peer ID, to figure out which miner a given peer ID belongs to, but because Lily already scrapes this data, I just use SQL to fetch it from the miner info table. I didn't want to pull in state tree data just for this one check.
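To make the self-reported path concrete, here is a minimal stdlib-only sketch of pulling the IP out of a self-reported multiaddr before handing it to a GeoIP lookup. The multiaddr string, the `extractIP` helper, and the string-splitting approach are all my own illustration; the real code would use the go-multiaddr package rather than splitting strings by hand.

```go
package main

import (
	"fmt"
	"strings"
)

// extractIP pulls the address component out of a self-reported
// multiaddr such as "/ip4/203.0.113.7/tcp/1347". Simplified sketch:
// production code should parse with the go-multiaddr package instead.
func extractIP(maddr string) (string, error) {
	parts := strings.Split(strings.TrimPrefix(maddr, "/"), "/")
	if len(parts) < 2 || (parts[0] != "ip4" && parts[0] != "ip6") {
		return "", fmt.Errorf("no IP component in %q", maddr)
	}
	return parts[1], nil
}

func main() {
	ip, err := extractIP("/ip4/203.0.113.7/tcp/1347")
	fmt.Println(ip, err) // 203.0.113.7 <nil>
}
```

The extracted IP is what would then be fed into whatever GeoIP service resolves addresses to locations.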
Yeah, and in the process of doing this, I also implemented bootstrapping a node from a bootstrap peer list, very similar to the js-libp2p bootstrap module. It's in bootstrap.go; you can check it out, most of the work is there, and a small part is in main.go of the same repository. I'm actually thinking of pulling it out into its own repository, because I noticed that bootstrapping a node from a bootstrap list is something a lot of people do.

Just a before and after. On the left is the SQL query to get miners' self-reported IP addresses. It's really straightforward, works super well, and is very fast. On the right, we have the crawl happening over the DHT. Basically, the crawler uses the DHT routing table to figure out what peers we have found, and I add the peers found to a list of known peers and deduplicate before processing. This is a really bad GIF of peer IDs being resolved to their multiaddresses using the crawl. It happens super fast, though; I had to slow it down, which is why the quality is really bad. If you clone the repo and run sentinel-locations, it will look like this. What's happening here is that peers are being discovered and added to the list of known peers, so that we don't have to process them again and look up their location again. We are doing this for a lot of peers, and it can get really slow really quickly.

There is a segmentation fault due to a nil pointer dereference, really deep in the stack trace. One lesson learned: survey the existing libraries first. I spent a lot of time writing my own crawler in the beginning, and it didn't have this segmentation fault.
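The known-peers bookkeeping described above can be sketched as a small concurrency-safe set. This is my own illustration of the deduplication idea, not the repo's actual code: the crawl can report the same peer from many routing tables, and the set makes sure each peer only triggers one location lookup even when goroutines race.

```go
package main

import (
	"fmt"
	"sync"
)

// peerSet records peers we have already seen so that a crawl
// discovering the same peer from many routing tables only triggers
// one geolocation lookup. Safe for use from multiple goroutines.
type peerSet struct {
	mu   sync.Mutex
	seen map[string]struct{} // keyed by peer ID string
}

func newPeerSet() *peerSet {
	return &peerSet{seen: make(map[string]struct{})}
}

// Add returns true only the first time a peer ID is recorded.
func (s *peerSet) Add(peerID string) bool {
	s.mu.Lock()
	defer s.mu.Unlock()
	if _, ok := s.seen[peerID]; ok {
		return false
	}
	s.seen[peerID] = struct{}{}
	return true
}

func main() {
	known := newPeerSet()
	fmt.Println(known.Add("12D3KooWExample")) // true: first sighting
	fmt.Println(known.Add("12D3KooWExample")) // false: already known
}
```

Only peers for which `Add` returns true would go on to the (slow) location lookup, which is what keeps the crawl from getting really slow really quickly.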
And then I discovered, like two days ago, that this crawler package already existed and was much better than what I wrote, since I was also learning Go during Launchpad. But after processing, it would hit the segmentation fault, so it would not commit the changes to the database even though the lookups were happening.

A lot of the issues I had were actually from having the wrong protocol prefix for the Filecoin mainnet; it turns out it's testnetnet. I really had to go deep into the Lotus code base to figure out what the correct protocol ID is for the Filecoin mainnet, so that peer IDs could be verified and validated correctly in the DHT. Also, keeping track of goroutines is really hard; I think that's probably why I'm having the segmentation fault. I'm going to try and fix it today or tomorrow. Let's see how it goes. Yeah, that's pretty much it from my side. Thank you.
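The protocol-prefix gotcha is worth spelling out: the network name baked into the Filecoin mainnet is, counterintuitively, "testnetnet", so mainnet DHT peers speak protocols under "/fil/kad/testnetnet". A tiny sketch (the helper function is my own; with go-libp2p-kad-dht the prefix would typically be supplied via the `dht.ProtocolPrefix` option):

```go
package main

import "fmt"

// filDHTProtocolPrefix builds the DHT protocol prefix for a Filecoin
// network. Counterintuitively, the network name for mainnet is
// "testnetnet", so mainnet peers use "/fil/kad/testnetnet/...".
func filDHTProtocolPrefix(networkName string) string {
	return "/fil/kad/" + networkName
}

func main() {
	// With go-libp2p-kad-dht, this string would be passed as the
	// protocol prefix when constructing the DHT client.
	fmt.Println(filDHTProtocolPrefix("testnetnet")) // /fil/kad/testnetnet
}
```

Getting this wrong means peer IDs can't be verified and validated against the right DHT, which is exactly the failure mode described above.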
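On the goroutine-tracking point: one common way to avoid goroutines outliving the work that owns their state (a frequent source of nil dereferences) is to fan out with a `sync.WaitGroup` and join before returning. This is a generic stdlib sketch of that pattern, not the repo's code; `lookupAll` and the stand-in `lookup` function are hypothetical.

```go
package main

import (
	"fmt"
	"sync"
)

// lookupAll runs one lookup per peer in its own goroutine and waits
// for all of them to finish before returning, so no goroutine outlives
// the function and touches state that has been torn down. The lookup
// argument is a stand-in for the real DHT/GeoIP work.
func lookupAll(peers []string, lookup func(string) string) []string {
	results := make([]string, len(peers))
	var wg sync.WaitGroup
	for i, p := range peers {
		wg.Add(1)
		go func(i int, p string) {
			defer wg.Done()
			results[i] = lookup(p) // each goroutine writes only its own slot
		}(i, p)
	}
	wg.Wait() // join: block until every lookup goroutine is done
	return results
}

func main() {
	out := lookupAll([]string{"peerA", "peerB"}, func(p string) string {
		return "location-of-" + p
	})
	fmt.Println(out)
}
```

Joining before the database commit would also guarantee that all lookups have landed before the transaction closes.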