And now on to our deep dive on Thunderdome. Take it away, Tommy.

Thanks very much. So yeah, we called the project Thunderdome after the 1985 film. I won't talk too much about the film; I just have to say that the shoulder pads are probably an example of why you shouldn't project current growth rates too far into the future.

We're currently targeting the IPFS gateway use case. We do this by spinning up so-called targets and firing traffic at them, making sure it's exactly the same traffic, and gathering as much information as we can while we do it. We get metrics via Prometheus; we scrape up the newly enabled tracing in Kubo if it's there; we grab logs; and we push everything to a hosted service so we don't have to do any work to keep up with the number of experiments. There are essentially no limits on how many experiments we can run, and a couple of lines of config define each one. Next slide, please. I should say it's me and Ian in the production engineering team who have been working on this over the last couple of weeks.

So yeah, the first thing we did was make a tool called DealGood, named after a character in the film who organizes fights. I tried to resist talking about the film, but yeah, they make people fight in a hemisphere in the middle of the desert in a post-apocalyptic future. Make of that what you will. So yeah: multiple targets, same load. Like I said, we can run it headless or with a terminal UI, so it's actually useful for local dev too. It's also got tracing enabled, and we do trace propagation, so you can take a request and trace it all the way through from the point of view of the client, as instrumented via the Go HTTP library, and then correlate that with Kubo's tracing. It also exports Prometheus metrics, and it can replay production load from a log stream we take from the production gateways, which is minimally service-impacting because it's just another nginx log file being written, or request randomly from a canned list of URIs. There's a sketch of that replay loop below.

And there's the terminal UI. It looks delightful as it moves; Ian published an asciinema demo of it a week or two ago.

So that's what it looks like to define an experiment at the moment. Give us N Docker images and whatever environment variables you want to set with them, and we'll run the experiment for you.

On the left there is all of the tracing stuff. That's not any work we've done; that's just what Grafana Tempo looks like, and that's all the default tracing that's in Kubo now. I think there's a big seam of work in Kubo to instrument more and more things and become more and more useful. One of our first experiments actually measures how much enabling tracing, at whatever sampling fraction you choose, impacts performance. On the right is a still from the demo video I'm about to play, and you can see the dashboard is automatically generated. So: dashboards for free.
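To make the replay mechanics concrete, here is a minimal sketch, not DealGood's actual code, of that core loop: pull request paths out of an nginx-style access log on stdin, fire the identical request at every target, and record time to first byte via net/http/httptrace. The target names and the log format are assumptions for illustration.

```go
// Hypothetical sketch of replaying the same load against several targets
// while measuring time to first byte. Target URLs are made up.
package main

import (
	"bufio"
	"fmt"
	"net/http"
	"net/http/httptrace"
	"os"
	"strings"
	"time"
)

// Base URLs of the gateways under test (assumed names for illustration).
var targets = []string{
	"http://with-peering:8080",
	"http://without-peering:8080",
}

func main() {
	scanner := bufio.NewScanner(os.Stdin) // e.g. a piped nginx access log
	for scanner.Scan() {
		path := requestPath(scanner.Text())
		if path == "" {
			continue
		}
		// Every target sees exactly the same request.
		for _, base := range targets {
			if ttfb, err := timeToFirstByte(base + path); err == nil {
				fmt.Printf("%s%s ttfb=%v\n", base, path, ttfb)
			}
		}
	}
}

// requestPath extracts the path from a combined-format log line,
// e.g. `... "GET /ipfs/<cid> HTTP/1.1" ...`.
func requestPath(line string) string {
	parts := strings.SplitN(line, "\"", 3)
	if len(parts) < 2 {
		return ""
	}
	fields := strings.Fields(parts[1])
	if len(fields) < 2 || fields[0] != "GET" {
		return ""
	}
	return fields[1]
}

// timeToFirstByte issues the request and measures the delay until the
// first response byte arrives, using net/http/httptrace hooks.
func timeToFirstByte(url string) (time.Duration, error) {
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return 0, err
	}
	start := time.Now()
	var ttfb time.Duration
	trace := &httptrace.ClientTrace{
		GotFirstResponseByte: func() { ttfb = time.Since(start) },
	}
	req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return 0, err
	}
	resp.Body.Close()
	return ttfb, nil
}
```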
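And for the trace-propagation side, a minimal sketch of the general technique using the stock OpenTelemetry Go libraries (again, not DealGood's exact wiring): wrapping the HTTP transport gives each outgoing request a client span and a W3C traceparent header, which a server with tracing enabled, such as Kubo, can join. The gateway URL and CID below are placeholders.

```go
// Minimal sketch of client-side trace propagation with OpenTelemetry.
package main

import (
	"context"
	"net/http"

	"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/propagation"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
	// A bare tracer provider; a real setup would configure an exporter
	// (e.g. OTLP to Tempo) so the spans actually go somewhere.
	tp := sdktrace.NewTracerProvider()
	defer tp.Shutdown(context.Background())
	otel.SetTracerProvider(tp)

	// Register the W3C trace-context propagator so outgoing requests
	// carry a traceparent header.
	otel.SetTextMapPropagator(propagation.TraceContext{})

	// Every request through this client gets its own client span; a
	// server that honours traceparent joins the same trace, so the
	// client-side timings correlate with the server's own spans.
	client := &http.Client{Transport: otelhttp.NewTransport(http.DefaultTransport)}

	// Placeholder URL and CID, purely for illustration.
	resp, err := client.Get("http://localhost:8080/ipfs/bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi")
	if err == nil {
		resp.Body.Close()
	}
}
```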
And now you've got a one-minute-45-second video to play.

Yes, that should start building things. It makes an ECS service for each of the backends, the targets rather, and one for DealGood, so we should start seeing stuff appear here. And yeah, there we go: the peering demo's DealGood, the without-peering target, the with-peering target. So we should be able to go to our automated dashboard; we go to the peering demo. It might be a little while; it takes a minute or two for the containers to start.

We're using Fargate at the moment, so it's got to assign a network interface in the VPC and that kind of stuff, but very soon we should start seeing some things. Yeah, there's a little bit coming through there; that's the first data being reported, and it should refresh every five seconds. I'll full-screen this now because there's nothing else to see really. We're starting to get some data through already. Time to first byte is kind of the most critical metric in terms of the user's perception of the service, so we've centered it in this default dashboard. And we're seeing that "with" is about twice as fast: with peering is about twice as fast in the initial startup here, in these early experiment time units. We'll develop this dashboard further with a whole bunch of default metrics.

My wife says she wishes she could fast-forward me like that in real life.

So yeah, we've already got a bunch of experiments on the backlog, but we want your experiments. What's interesting to you? What battles would you like to create? Send us your suggestions; we want to get as many experiments going as we can, because the thing's only as useful as the experiments we run. Mario's asked where it gets the traffic from: it's replaying a trace log. You approved the PR a little while ago, mate.

So, at various levels of soon, coming soon: more production-like targets. At the moment a Docker image plus environment variables constitutes a different target, but of course other things affect how well the infrastructure runs: what kind of disks you give them, what file system you choose, whether you RAID it. We want to finesse the UX so it's an absolute delight. If you've got a performance-enhancing branch, we want to track it automatically: just keep deploying the branch, test for regressions, et cetera. RCs should be tested automatically. (I shouldn't read the comments; Mario's making me laugh.) A continuous Kubo versus js-ipfs versus Iroh shoot-out. Bring your own hardware: if you don't like our hardware options, run it yourself, pointed at something you're interested in, run our sidecars, and get graphs automatically.

And then one thing I'm mega, mega excited about is the idea of infrastructure experiments. Which load-balancing strategies should we adopt? What machine sizes might we use? Can we compare this infrastructure provider with that other one? What if we used a shared block store? What if all the nodes in a region had a peer store in Redis? Things above the level of individual instances of our software that still impact its performance. Kubo, and anything that implements the IPFS gateway spec, is deliberately designed to interact well with load balancers, caches, that kind of thing, so we want to be able to test those as well, because the aggregate of all those things is our performance.

So that's it, that's all from me. Thanks for your indulgence. That was a pleasure. Awesome. And everyone have a wonderful rest of your Thursday.