Okay, so this is an introduction to CryptoEconLab research. The agenda for today: I'm going to give a bit of an overview of the research we're doing, then a specific example of something we did a couple of weeks ago, and then a bit of a recap and reflection. So it's a pretty simple talk.

To start off with the research overview, I want to talk about the people involved. In roughly alphabetical order, we have Axel, Alex, Maria, myself, Vic, and ZX. Axel is really our mathematical analysis expert, very creative, with a lot of good ideas. Alex is our technical program manager, so he keeps us all playing together, keeps everybody in tune and in line. Maria has just joined us as a data scientist and research scientist; she has a background in blockchain and a PhD in statistical analysis and time series methods, picking out anomalies from time series, so it will be really cool to see what she does. I think she's going to be working a little with some of the Project Saturn work, which we're going to hear about later from Patrick. I'm mostly focused on the simulation side at the moment, but I try to do a little bit of everything. Vic, you've already heard about: he's our business and finance expert in research and analytics. And ZX is the guy with the vision; he's also our workspace admin, which is why he gets a separate picture, it seems. In terms of other people joining, we've got Dave, who's already here, and Ryan Pablo, who's joining us in another couple of weeks, and I think there may be more people soon, who knows. So those are the key people involved.

What are we actually doing? I can try to break it down into some principal themes, with a lot of overlap between them. One aspect is core development research: the research that Filecoin and Protocol Labs is doing that's really pushing the protocol forwards, things like the Filecoin Virtual Machine and scaling solutions. Of course there are engineering and cryptography teams driving these things forwards, but we interact with them a lot. An example is hierarchical consensus for scaling, which Axel is working on really intensely at the moment, and which he's going to talk about a bit later. A lot of really interesting questions turn up whenever you think about scaling solutions on Filecoin: what should the collateral look like, and what should the gas model look like when you have this hierarchical structure, where the root net, the L1, can spawn subnets, and those can spawn subnets of their own? It throws up a lot of very difficult questions about what we're trying to achieve with the gas model. Is it DDoS resistance? Should it support total locked value? Should it set minimum fees? There are a lot of questions here, which Axel will get into later in a lot more detail. Another key theme we're working on is baseline research.
This covers the key parts of the protocol as it runs today: keeping things running, understanding the state of the network really well, how all of the different components interact, and how they might change if we were to tune or update any parts in the future. So this is things like gas, collateral, and penalties, all of which feed into things like circulating supply. That's something we're looking at quite a lot at the minute: different future scenarios if we were to adapt anything.

Another key theme in our research is ecosystem elements. This is more on the applied side, building on the fundamental stuff. It includes things like perpetual storage, which we've done a lot of work on in the past: trying to think about how one comes up with a price to store something forever. It's not an easy thing, but there are ways to do it (there's a toy endowment calculation sketched below), and this is actually going into production now, with teams making it happen, which is really cool.

Another thing that's extremely interesting to us and motivates us in the long term, which Juan talked about last month in Amsterdam at Schelling Point, is this vision of ParetoTopia and interplanetary mechanism design. This is very much part of CryptoEcon as well, but it's very long-term stuff: you have to put the building blocks in place, and that's what we're doing now, with storage and with things like the Filecoin Virtual Machine to enable user programmability. Massive things are going to come out of this that will let us affect the world in a quite substantial way, but the first step is being able to organize and understand the data, to create it and make it available. That's what we're doing with projects like Project Atlas, which Ishan is going to talk about later, so that will be cool to hear.

Another theme I wanted to touch on a little is governance. To an extent you might think this is a little outside the wheelhouse of the simulation side of CryptoEconLab, but I would say you're 100% wrong if you think that, because governance is everything. It goes right from control at the very lowest level, the epoch level, up to what we traditionally think of as governance in terms of FIPs or EIPs or BIPs. Something Vic and I were talking about a couple of days ago, which I think is a really interesting idea, is this concept of intermediate-timescale governance: not control at the epoch level, and not something that takes weeks or months like FIPs, but something in between, using ideas from statistical machine learning to select optimal parameters for the state of the network. If we can define some concept of network health and utility, we can treat this as a black-box optimization problem: fit some latent probabilistic function, optimize it on a week-to-week or even day-to-day basis, and have a human steering layer on top to approve parameters (there's a minimal sketch of this idea just below). This is one potential idea among many others, but I think there's a lot of work to do here, so I just wanted to highlight it. Okay, so moving on to the next thing.
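To make the intermediate-timescale governance idea concrete, here is a minimal sketch. Everything in it is an invented stand-in: the `network_health` function, the candidate grid, and the quadratic surrogate (in reality the health metric would come from chain data and the "latent probabilistic function" would be something richer, like a Gaussian process). Treat it as the shape of the idea, not a design.

```python
# Hypothetical sketch of intermediate-timescale governance as black-box
# optimization. The health function, grid, and surrogate are illustrative
# stand-ins, not a real protocol interface.
import jax.numpy as jnp
from jax import random

def network_health(param):
    # Stand-in for an observed network-health/utility metric; in reality
    # this would be measured from chain data, not a closed-form function.
    return -(param - 0.7) ** 2 + 0.05 * jnp.sin(20 * param)

key = random.PRNGKey(0)
observed = random.uniform(key, (12,))   # parameter values tried historically
health = network_health(observed)       # their measured outcomes

# Fit a simple quadratic surrogate f(p) ~ a*p^2 + b*p + c by least squares,
# standing in for the latent probabilistic function mentioned above.
X = jnp.stack([observed**2, observed, jnp.ones_like(observed)], axis=1)
coeffs, *_ = jnp.linalg.lstsq(X, health)

# Propose the surrogate's argmax over a candidate grid; a human steering
# layer would review this proposal before anything touches the network.
grid = jnp.linspace(0.0, 1.0, 101)
surrogate = coeffs[0] * grid**2 + coeffs[1] * grid + coeffs[2]
proposal = grid[jnp.argmax(surrogate)]
print(f"proposed parameter for human review: {proposal:.2f}")
```

The point of the sketch is the loop structure: observe outcomes, refit a cheap surrogate, propose a parameter, and gate it behind human approval, repeated on a daily or weekly cadence.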
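And going back to the perpetual storage pricing question mentioned above: one common way to frame "a price to store something forever" is as an endowment whose upfront value covers a discounted stream of future renewals, with storage costs assumed to decline over time. The numbers below are made up for illustration, and this is just one possible framing, not the production pricing model.

```python
# A toy endowment calculation for pricing perpetual storage: pay upfront
# an amount that covers the discounted cost of all future renewals.
# Cost, decline rate, and discount rate are assumed illustrative values.
import jax.numpy as jnp

annual_cost = 1.0   # cost of storing the data in year 0 (arbitrary units)
decline = 0.15      # assumed yearly decline in storage cost
discount = 0.05     # assumed yearly discount / yield rate

years = jnp.arange(200)  # truncate the infinite sum far out
renewal_costs = annual_cost * (1 - decline) ** years
present_value = jnp.sum(renewal_costs / (1 + discount) ** years)
print(f"endowment needed: {present_value:.2f}x the first year's cost")

# Closed form of the same geometric series, for comparison:
print(f"closed form: {(1 + discount) / (discount + decline):.2f}")
```

With these made-up rates the answer is about 5.25 times the first year's cost, which is the sense in which "forever" can still have a finite price.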
So, another big focus at the moment. Right, so listen: I've told you a bit about the people, and I've told you about the themes, so now I can tell you a bit about the projects. Among the projects receiving attention from us at the minute, Axel is on the hierarchical consensus stuff very intensely, amongst other things, and we're looking at a couple of other projects, but one of the big things at the minute is digital twin development. Pretty much everything in Filecoin has already been simulated and tuned, and this is how the parameters were picked. That was done a year or two ago in Austin, but there's always scope for more models, better models, faster models, a kind of wisdom of the crowd: if we can get many models together, maybe we get a better view. So something we're working on at the minute is extending our digital twin capacity.

There are some key principles we want when we're doing this. We want something general that can tackle any aspect of the protocol, something modular that can be extended easily, and ideally something composable and written in a pure style. This is why I'm highlighting JAX: something we'd really like to do, if we can, is rewrite our digital twin simulation capacity in this pure form, without any side effects, using JAX. This would be very good because it would let us differentiate through whole models very easily (there's a toy differentiable model sketched below), which is very useful if you want to implement things like Hamiltonian Monte Carlo, because you want to do MCMC sampling, either just for fun or because you want to learn something about the system, or because you have some concept of rationality and want to optimize some agent's behaviour in the future.

As well as pushing our digital twin capacity in this direction, we're also trying to extend things like the sector expiration model (also sketched below). This is actually a bit tricky to model, because whenever you have a sector it can expire after 180 days, 540 days, or anywhere in between, and it can be renewed, and the pattern and distribution of these renewals changes with time. So there's a bit of an art to it: how detailed do you make it? A model shouldn't necessarily be a photograph; some degree of simplification is necessary, and that's what gives the model its generalization performance for the future. So there's a bit of an art to picking this.

Okay, so that's a bit of an overview of the research we're doing. The next thing I wanted to give was a specific example of a little bit of work we did a couple of weeks ago. I'm not sure quite how much time I have left, so if anybody can give me a shout when there are two minutes to go, that would be much appreciated. So, a little bit of work we did a couple of weeks ago was trying to assess FIP-0032. This is quite an interesting FIP, because it introduces new accounting for Filecoin's gas, brought in to pave the way for the FVM a few months down the line, but it's actually going to change gas usage in Filecoin right now.
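Here is a minimal sketch of the pure, differentiable digital twin idea mentioned above: a toy supply model written as a side-effect-free step function that JAX can scan over and differentiate end to end. The dynamics are invented for illustration; this is not Filecoin's actual minting model.

```python
# A toy "digital twin" written in pure JAX: no side effects, so the whole
# trajectory is differentiable. The emission dynamics are made up.
import jax
import jax.numpy as jnp

def step(state, _):
    supply, rate = state
    minted = rate * (1.0 - supply / 2e9)      # toy saturating emission
    new_state = (supply + minted, rate * 0.999)  # rate decays slowly
    return new_state, minted

def simulate(initial_rate, days=365):
    (supply, _), _minted = jax.lax.scan(step, (0.0, initial_rate), None, length=days)
    return supply

# Because the model is pure, we can differentiate final supply with
# respect to the emission parameter -- the property that makes HMC or
# gradient-based calibration straightforward.
d_supply = jax.grad(simulate)(300_000.0)
print(simulate(300_000.0), d_supply)
```

The design point is that `step` is a pure function of its inputs, so `jax.lax.scan` composes it into a trajectory and `jax.grad` flows through the whole thing without any special handling.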
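And a toy version of the sector expiration model discussed above: draw commitment durations between 180 and 540 days, and renew sectors with a probability that drifts over time. The distributions here are stand-ins; the real model would be fit to observed on-chain renewal behaviour.

```python
# Toy sector expiration model: durations in [180, 540] days, with a
# renewal probability that varies across sectors to stand in for the
# "changing distribution of renewals" point. All parameters are assumed.
import jax.numpy as jnp
from jax import random

key = random.PRNGKey(42)
k1, k2 = random.split(key)

n_sectors = 10_000
durations = random.uniform(k1, (n_sectors,), minval=180.0, maxval=540.0)

# Renewal probability drifting with duration -- a single drifting
# parameter standing in for a time-varying empirical distribution.
renew_prob = 0.6 - 0.2 * (durations - 180.0) / 360.0
renewed = random.bernoulli(k2, renew_prob)
lifetimes = jnp.where(renewed, durations * 2.0, durations)

print(f"mean lifetime: {lifetimes.mean():.0f} days, "
      f"renewal rate: {renewed.mean():.2f}")
```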
So it's quite important to understand this and what the consequences might be. (Five minutes. OK, nice one, thanks.) We were asked by the FVM team to take a look at this. The first thing you think of is: okay, gas usage is going to increase, so what's going to happen? Well, you can think about the base fee update rule. ZX already mentioned that there's this base fee update rule where the protocol targets some level of block fullness, and if we go above that, the base fee goes up (the rule is sketched below). So that seems to be the simple conclusion.

But maybe that's not going to happen. If we look at the gas chart here from Starboard, we can see that most of the gas usage actually comes from sealing, which brings us to the next point: we can batch. So probably it's not going to be a big deal at all; maybe it's a bit of a non-issue. If the metering increases, miners can just increase their batching, so no big deal.

But there's actually a little more subtlety to this. If we look at what the changes actually are, the changes to the gas usage are listed here, and we can view them in a more digestible form: you can see exactly what happens to the gas multipliers. Notice that the multipliers are actually different across the different commit gas types: they're different for pre-commit and prove-commit, and different for batched pre- and prove-commits as well. This is interesting, and it brings up a slightly subtle point. If we think about when it becomes rational to batch, that's something that might be affected, because the multipliers aren't all raised equally: some go up a little more than others, which changes the rational batching condition. What do I mean by the rational batching condition? I mean the point at which it becomes rational to batch rather than commit with a single proof, which is given by these expressions down here. And since the batching expression accounts for both single and batch proofs, it depends on the ratio of these numbers, which has changed. I feel like I'm getting lost in the weeds here, to use ZX's expression, so we'll move on, but the key point is that there's a consideration about rational batching.

So what actually happens is the following. We can compute the rational batching decision boundary (there's a hedged sketch of this below). If we do that, currently we're at the orange line in the bottom left-hand plot; that's the situation right now. What happens post-FIP-0032, when it introduces these new multipliers, is that we move up to this blue curve. And because there's actually a whole range of potential values the multipliers could take, owing to the different ways the operation of the protocol can express itself, there isn't one single rational batching curve; instead we have this boundary. The key point is: okay, it's moved up, but it hasn't moved up substantially relative to the variation in observed behaviour in the back test. So the basic takeaway is that there's a change, but probably not a statistically significant one, though that depends on what you mean by statistical significance. Nonetheless, the key question is: should we update the parameters in the system?
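For reference, here is the base fee update rule mentioned above, in EIP-1559 style: the base fee moves toward a block-fullness target each epoch. The target, the 1/8 adjustment denominator, and the 100 attoFIL floor follow commonly documented parameters, but treat the exact constants as assumptions of this sketch.

```python
# EIP-1559-style base fee update: fee rises when blocks run above the
# fullness target and falls when they run below it. Constants assumed.
import jax.numpy as jnp

def update_base_fee(base_fee, gas_used, target=5_000_000_000, denom=8):
    delta = base_fee * (gas_used - target) / target / denom
    return jnp.maximum(base_fee + delta, 100.0)  # assumed 100 attoFIL floor

# If blocks run persistently full, the base fee ratchets up each epoch:
fee = 100.0
for _ in range(10):
    fee = update_base_fee(fee, gas_used=10_000_000_000)
print(fee)  # grows ~12.5% per epoch while blocks stay full
```

This is why "gas usage goes up, so the base fee goes up" is the naive first conclusion: persistently fuller blocks compound the fee upward epoch after epoch.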
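And here is a hedged sketch of the rational batching decision boundary, in the spirit of the FIP-0013/FIP-0024 batch balancer design: batching pays a discounted aggregation fee priced at max(base fee, batch balancer). The gas numbers and the `stress` multiplier (used for the 2x scenario test discussed next) are illustrative, not the protocol's exact constants.

```python
# Rational batching boundary sketch: batch when the per-batch cost,
# including the batch balancer aggregation fee, undercuts single proofs.
# All constants below are assumed illustrative values.
import jax.numpy as jnp

SINGLE_PROOF_GAS = 65_000_000     # assumed gas for one prove-commit
BATCH_GAS_PER_PROOF = 3_000_000   # assumed on-chain gas per aggregated proof
BALANCER = 5.0                    # assumed batch balancer, in nanoFIL
DISCOUNT = 1.0 / 20.0             # assumed batch discount factor

def cost_single(base_fee, n):
    return base_fee * SINGLE_PROOF_GAS * n

def cost_batched(base_fee, n, stress=1.0):
    # `stress` scales batching gas for scenario tests (e.g. stress=2.0
    # models batching being twice as expensive as the back test suggests).
    onchain = base_fee * BATCH_GAS_PER_PROOF * n * stress
    agg_fee = jnp.maximum(base_fee, BALANCER) * DISCOUNT * SINGLE_PROOF_GAS * n
    return onchain + agg_fee

# Batching is rational where the batched cost drops below the single cost;
# sweeping the base fee traces out the decision boundary from the plots.
base_fees = jnp.linspace(0.01, 10.0, 500)  # nanoFIL
rational = cost_batched(base_fees, 10) < cost_single(base_fees, 10)
print(base_fees[jnp.argmax(rational)])  # lowest base fee where batching 10 wins
```

Changing the multipliers changes the ratio between the single-proof and batched terms, which is exactly why the boundary shifts under FIP-0032 even though every cost goes up.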
So, based on a derived batch balancer parameter that we can obtain using certain principles, when we analyse these new multipliers and the particular distributions they can take, we find there isn't a significant difference. That's our key finding: there's not a substantial amount of evidence that we should be updating any of the parameters in response to these changes in gas at the minute.

But we can look a little closer and do some scenario tests. What if the back tests are a little off and, for example, batching gas is actually going to be two times more expensive than we think? That would shift the rational batching curve quite substantially, and in that scenario we might be a little more concerned. So the key takeaway is that we don't think there are going to be any substantial changes as a result of this, but if the back tests are not totally representative, then this is something we'll have to monitor pretty closely going forward, and we will.

I think that's really the last point I wanted to make, so I'll finish up there. I've given you an overview of the research, and I've talked a little about some specific research. I wanted to end on a bit of a reflection, just to say that there's plenty of room at the top, and to show this image. This is actually where we stayed in Amsterdam when we had our first CryptoEconLab meeting last month or so, and it's the site of the world's first beurs, or stock exchange. I think there's a good resonance with where we are now: that technological improvement brought much prosperity to Amsterdam and made it the city it is today, and I feel there's a similar resonance in the progress we're making and the general direction things are going.