So, in this presentation, we are essentially going to time-travel three years back. Back then, I worked with BlockScience, a complex systems engineering firm, and the challenge we had at that time was: what should the economics of Filecoin be for the first two years after mainnet launch? Filecoin was not launched yet. We didn't even have a testnet, so we didn't have much data, and there was a lot of work collecting requirements and trying to get to a system that would be functional for the participants. It was a lot of work; we didn't have the CryptoEconLab back then, and there was a lot of effort by me, Zargham, ZX, David Dade. Going over everything would be too long, so rather than covering it all, I'm only going to talk about two things that we supported. One is baseline minting and how we got there, and the second is how we informed the economic parameters before the mainnet launch: what the simulations were, some of their characteristics, and how we measured which parameter values to choose. Starting with the first one, which I think is a good example of cryptoeconomics as supporting design work: how we came up with baseline minting. I think it's interesting because three or four years ago, when people thought about cryptocurrency issuance, the default option was always the exponential function inaugurated with Bitcoin a decade ago. That was always the option someone would consider. But for several reasons it was not appropriate for Filecoin to adopt it. First, because it's not the goal of Filecoin to be, let's say, a pure store of value. Filecoin is really a utility coin, something that allows exchange, that allows people to provide services and so on. So the logic that we adopted was different. When we initially tried exponential minting, just like Bitcoin's, there were several issues, for example disproportionate rewards. By the way, we still have exponential minting in Filecoin; it's called simple minting. But there are issues, as I said, like disproportionate rewards if you have a downturn, for example lots of miners shutting off temporarily because something has happened to the reward. Or other aspects, like first-in-first-out behavior, where people who join the network early get disproportionate rewards. And it was kind of interesting because we experimented with several forms before getting to baseline minting, for example sigmoid or logistic rewards; there are links to notebooks that I'm going to share that you can explore. But in the end, we got to baseline minting really by talking to storage miners, trying to align incentives, trying to come up with something that could be adaptable. So baseline minting is really the set of those four equations. Going over it super quickly: at the core, as you can see on the upper left, baseline minting is really exponential minting, but rather than using real time, it uses what we call the effective network time, which is, let's say, a utility-driven time of the network, and it always lags real time. This network time is the theta on the lower left of the screen. There is a bit of math there.
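Since the slide itself isn't reproduced here, here is a hedged reconstruction of those four equations, following the publicly documented Filecoin spec; the notation is mine, so treat it as a sketch rather than the exact formulation on the slide:

$$M_S(t) = M_{S\infty}\left(1 - e^{-\lambda t}\right) \qquad \text{(simple, Bitcoin-style exponential minting in real time)}$$

$$b(t) = b_0\, e^{g t} \qquad \text{(the exponential baseline target for network power)}$$

$$\theta(t) = B^{-1}\!\left(\int_0^t \min\{b(s),\, R_P(s)\}\, ds\right), \quad B(t) = \int_0^t b(s)\, ds \qquad \text{(effective network time, with } R_P(t) \text{ the raw network power; } \theta(t) \le t\text{)}$$

$$M_B(t) = M_{B\infty}\left(1 - e^{-\lambda\, \theta(t)}\right) \qquad \text{(baseline minting: the same exponential minting, but driven by } \theta \text{ instead of } t\text{)}$$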
I'm not going to explain everything, because in this case I think having some visualizations helps more. But the main point is what baseline minting does. First, the effective network time lags real time until we have the first baseline crossing, which is defined by the target value of the network power. When the network power is below the target, the tendency is that the baseline rewards stay below a certain share of the rewards. So we can see in the top figure in the center that before the first baseline crossing, the baseline share of the rewards was below 70%. And as we cross that target upward, it starts to balance, it starts to equilibrate, so that baseline minting makes up 70% of all the rewards due to minting. And of course, if we cross the baseline downward again, that share goes down, and simple minting takes a bigger share. And why is this useful? One figure that I like a lot is the lower-right one, the one showing mining utility versus time. What is mining utility? Suppose I'm a storage miner, I've been participating since the start, and I accrue filecoin at every point in time. What would happen if we adopted, for example, a Bitcoin-like minting function? That would be the purple line, which is always one, and mining utility is how much you gain compared to that simple function. We have two other lines there: the green one, which is what would happen if we deactivated baseline minting, and the solid one, which is what happens when we actually have the baseline function. And what happens is that, for example, if we have a shock where the network power goes down, baseline minting is essentially saving a bit of filecoin during that shock. And after the shock passes, after the network power recovers, the filecoin that was saved is delivered back. So in a certain sense, baseline minting is really a savings account: it makes sure that if we have a shock, we do not give out the filecoin allocated for that time; we save a bit so that we do not compromise the long-term health of the network. And yeah, that was a quick exposition of baseline minting, which I think is a great case study. Another thing I would like to go over quickly is how we supported the choice of the economic parameters for the mainnet launch. That was actually a lot of work; I'm just going to skim the surface here. The main thing is, once we had baseline minting set up, and once we had the definition of how the collaterals and so on would work, there was this question: should baseline minting be associated with 70% of the rewards, 90%, 50%? The collateral that you need to lock up in order to onboard a sector, should it be seven days of rewards, 20 days, 60 days? Or the consensus collateral, should it be 10% of the circulating supply, 30%, 50%? Back then we didn't have those answers. And the only way of making sense of that is really by first testing a lot of scenarios: scenarios for the filecoin price, for onboarding, for deal marketplace demand, for the adoption of Filecoin Plus. And also, in order to be able to compare and select, we needed to keep in mind that we had several design goals.
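To illustrate that savings-account behavior numerically, here is a toy Python sketch; the half-life, the baseline growth rate, the shock, and the fact that power starts above the target are all made-up illustrative values, not the mainnet parameters:

```python
import numpy as np

# Illustrative parameters -- not the actual mainnet values.
LAMBDA = np.log(2) / 6.0      # decay rate: 6-year reward half-life (assumed)
BASELINE_GROWTH = np.log(2)   # baseline target doubles every year (assumed)
B0 = 1.0                      # initial baseline power (arbitrary units)
M_INF = 1.0                   # asymptotic baseline-minted supply (normalised)

YEARS, STEPS_PER_YEAR = 10, 365
dt = 1.0 / STEPS_PER_YEAR
t = np.arange(0, YEARS, dt)

baseline = B0 * np.exp(BASELINE_GROWTH * t)      # b(t): exponential target
power = 1.2 * baseline.copy()                    # network power above the target
shock = (t > 3) & (t < 4)
power[shock] *= 0.5                              # one-year 50% power shock

# Effective network time theta(t): invert the cumulative baseline against the
# cumulative *capped* power, so theta only advances while power keeps up.
capped = np.minimum(power, baseline)
cum_capped = np.cumsum(capped) * dt
cum_baseline = np.cumsum(baseline) * dt
theta = np.interp(cum_capped, cum_baseline, t)

baseline_minted = M_INF * (1 - np.exp(-LAMBDA * theta))  # slows during the shock
simple_minted = M_INF * (1 - np.exp(-LAMBDA * t))        # ignores the shock

print(f"minted at t=4y  baseline: {baseline_minted[4 * STEPS_PER_YEAR]:.3f}  "
      f"simple: {simple_minted[4 * STEPS_PER_YEAR]:.3f}")
```

During the shock, theta falls behind real time and baseline minting slows; once power recovers, theta catches back up and the "saved" issuance is paid out, which is the behavior the lower-right figure shows.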
So, for example, one of the first goals we ever had is: is it actually rational to participate in the network economy? I mean, if you are a storage miner, are you going to make a profit doing that? So that was always a key KPI, which is really the NPV of participation. But there are other things, like, for example, what's your incentive to drop a sector? Or network security: how much does it cost economically to obtain 33% of the consensus power and mount an attack against the network? And of course, we had lots of free parameters. For example, for how long should the sector rewards be locked up: six months, three months, two months? Should we release them immediately? Should vesting start immediately, or should we wait a bit? And at the same time, we also had a feedback loop, because as we were trying to surface those parameters, we would also get design changes, like, for example, introducing the alpha-beta filter, or the consensus pledge, which was something proposed as we were understanding the system. So, yeah, there is a lot of information; we actually almost have a book about that. We have this giant documentation here, which is about 100 pages. And long story short, we did get to some recommendations; those were them. And it's kind of interesting because the parameters we were helping to inform had different effects: for example, if you want to maximize network security, what do you need to do? If you want to maximize miner profitability, what do you need to do? And as a matter of fact, the effects of those parameters on those different metrics were nonlinear and also not necessarily correlated. The consensus pledge, for example, is a factor that increases network security a lot, because it greatly increases the cost to attack the network. But in terms of immediate miner profitability, in terms of first-order effects, we detected that it could be a contentious issue. And of course, given the nonlinear interactions, it's not necessarily the case that you double the number and you double the effect. Sometimes you double the parameter's value and the effect only increases by 20%. So it's very nonlinear. And one thing that we did in order not to compromise miner profitability, for example, was to be more flexible on the vesting cliff duration. Before testnet, we were proposing it to be two or three months; we dropped that to zero, so that miners would have immediate rewards, and this made things a lot easier for them. And one way of framing how those recommendations came out of the simulations is this: even though we could think about, let's say, a combined-goals scenario, there is a lot of arbitrariness in how we define it. In the end, if we optimize for network security, we are going to get a different result than if we optimize, for example, for liquidity or for reliability. There are a lot of commonalities, a lot of synergies, but there are also things that do not quite agree, and depending on how much weight you put on each one, you get different combined-goals scenarios.
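As a concrete illustration of that NPV-of-participation KPI, and of why the vesting cliff matters for it, here is a toy sketch; the cash-flow model, parameter names, and numbers are all hypothetical, not the actual model we used:

```python
import numpy as np

def sector_npv(daily_reward_fil, daily_cost_fil, pledge_fil,
               lifetime_days=360, vesting_days=180, cliff_days=0,
               daily_discount=0.0005):
    """Toy NPV of onboarding one sector (all figures hypothetical).

    Rewards vest linearly over `vesting_days` after an optional cliff;
    the pledge is locked at day 0 and returned when the sector expires.
    """
    days = np.arange(lifetime_days + vesting_days + cliff_days)
    cash = np.zeros_like(days, dtype=float)

    cash[0] -= pledge_fil                      # initial pledge locked up-front
    cash[:lifetime_days] -= daily_cost_fil     # operating cost while proving

    # each day's reward is spread over the vesting window after the cliff
    for d in range(lifetime_days):
        start = d + cliff_days
        cash[start:start + vesting_days] += daily_reward_fil / vesting_days

    cash[lifetime_days] += pledge_fil          # pledge released at expiry
    discount = (1 + daily_discount) ** -days
    return float(np.sum(cash * discount))

# e.g. compare no cliff against a 90-day cliff (hypothetical numbers)
print(sector_npv(0.05, 0.02, 10, cliff_days=0))
print(sector_npv(0.05, 0.02, 10, cliff_days=90))
```

Even in this toy version, pushing rewards behind a cliff delays cash flows and lowers the discounted value of participating, which is the first-order tension with miner profitability mentioned above.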
And, yeah, those were the results. Speaking about how we got to those numbers: this is a long story, I could do a workshop just on that, but we had this macroeconomic model for Filecoin, which, the way I would describe it, includes the main macroeconomic mechanisms of Filecoin: the collaterals, vesting, and also baseline minting. It is what we call a super-population model. Rather than being, for example, a single population, or being an agent-based model, what we did was say: this part of the population are the capacity committers, this part of the population are, for example, the deal makers, and so on, and we had several scenarios in which they would interact. In some of them we would feed in, for example, fixed exogenous signals, and in others we would, for example, introduce profitability criteria in order to compute the future trajectory. And it was large-scale simulation in the sense that we had, for example, on the order of 90 cryptoeconomic parameters; for each one of them we would test, for example, tens of values, and we would take the Cartesian product of that, and take into consideration, for example, about 50 filecoin price scenarios, plus those sub-populations. I think I did the calculation at some point of the dataset that we generated: we had, let's say, 70 variables per time step, and some of those variables, at each point in time, were themselves a time series. So the resulting dataset effectively had a nested time dimension, a time series of time series, and the scale of the simulation output was on the order of terabytes. It's a bit difficult for me to show what the output looked like in full, but I can show some cuts. So, for example, this is one cut. One thing that we did in order to compute the effect on those goals was to define a series of KPIs of interest. One KPI of interest would be the collateral fraction, and on this slide, on the left, the first column shows certain input signals that we encoded in the simulation, like, for example, how much network power was associated with the capacity committers and how much demand we would have, and based on that, we get all those possible future trajectories for the collateral fraction over time.
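To give a feel for how such a sweep is assembled, here is a minimal Python sketch; the parameter names, the value grids, and the number of price scenarios are illustrative placeholders, not the actual grid from the study:

```python
from itertools import product

# Hypothetical sweep definition -- names and values are illustrative only.
param_grid = {
    "baseline_reward_share": [0.5, 0.7, 0.9],
    "initial_pledge_days":   [7, 20, 60],
    "consensus_pledge_frac": [0.1, 0.3, 0.5],
    "vesting_cliff_days":    [0, 60, 90],
    "vesting_duration_days": [90, 180, 360],
}
price_scenarios = [f"price_{i}" for i in range(50)]   # ~50 exogenous price paths

# Cartesian product of parameter values, crossed with every price scenario.
runs = [
    dict(zip(param_grid, values), price_scenario=scenario)
    for values in product(*param_grid.values())
    for scenario in price_scenarios
]
print(len(runs), "simulation runs")   # 3^5 * 50 = 12,150 in this toy grid
```

Each entry in `runs` would then be handed to the macroeconomic model, and with many more parameters, values, and sub-populations the output grows to the terabyte scale mentioned above.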
The uncertainty bands that you are seeing aggregate, essentially, over the price scenarios and also over the environmental parameters, because we also included some stochastic scenarios. And the colors that you are seeing, the orange one and the blue one, are different economic parameter choices: here we tweaked, for example, the initial consensus pledge parameter, so the orange one would be 30% and the blue one would be 10%. It's a bit hard to see on the left exactly what the effect is, but if we go to the right, we can also see what happens when we introduce shocks on the signals. This is something we also went through: we did a lot of qualitative analysis of what happens when, for example, supply goes down suddenly, or demand goes down suddenly, and so on. And a fun part of that project is that we generated so much data that it was impossible to analyze it through visualization alone, so what we needed to deploy was actually machine learning, in order to compress the data so that we could interpret it. So this is an example. Essentially, what we are doing here is taking a certain KPI of interest, in this case the critical cost threshold, which is really about, yeah, I'm not going to enter into the specifics, but what we did here is take the entire dataset, and we were interested in knowing the following: what is the decision tree, in terms of the decisions over the economic parameters, that would maximize the cost to economically attack the network? And by doing that on this gigantic dataset we could get two things. First, the decision tree itself, which allows us to have a grasp of what the decisions are, and also the directionality of the effects, which is important. But also, by fitting a random forest and getting the feature importances, we could get a sense of the relative importance of each specific parameter, because just because the decision tree says that we should pick something, unless you know the magnitude of the effect, it may not make sense at all. So we used those directions coupled with the magnitudes in order to extract conclusions about those nonlinear effects. And other things: one thing that we mapped out, based on the combination of all of those results, was how those economic parameters interact with those specific goals. Some of them, like, for example, the initial pledge factor, and back then the names were different, so this is exactly the initial storage pledge.
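Here is a hedged sketch of that decision-tree plus random-forest step, using scikit-learn on a synthetic stand-in for the simulation output; the column names, the toy relationship between parameters and the KPI, and the tree depth are all assumptions for illustration:

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeRegressor, export_text
from sklearn.ensemble import RandomForestRegressor

# Toy stand-in for the simulation output: rows are runs, columns are swept
# cryptoeconomic parameters plus the KPI ("cost to attack"). The real data
# came from the sweep; this synthetic frame just keeps the snippet runnable.
rng = np.random.default_rng(0)
n = 5_000
df = pd.DataFrame({
    "consensus_pledge_frac": rng.choice([0.1, 0.3, 0.5], n),
    "baseline_reward_share": rng.choice([0.5, 0.7, 0.9], n),
    "vesting_duration_days": rng.choice([90, 180, 360], n),
})
df["cost_to_attack"] = (
    5 * df["consensus_pledge_frac"] + 1 * df["baseline_reward_share"]
    + 0.001 * df["vesting_duration_days"] + rng.normal(0, 0.1, n)
)

X, y = df.drop(columns="cost_to_attack"), df["cost_to_attack"]

# Shallow tree: which parameter decisions push the KPI up, and in which direction.
tree = DecisionTreeRegressor(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))

# Random forest: relative importance of each parameter for the KPI.
forest = RandomForestRegressor(n_estimators=100).fit(X, y)
for name, importance in zip(X.columns, forest.feature_importances_):
    print(f"{name:24s} {importance:.2f}")
```

The tree gives the directionality of each decision, and the forest's feature importances give the magnitudes; it is the combination of the two that lets you rank the parameters under nonlinear effects.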
It was only important, for example, for the liquidity goals: no matter which value we picked, it was only relevant for liquidity and not super important for the other goals. But others, like, for example, the consensus pledge, or how much we release immediately, or the duration of the vesting, had interactions that would sometimes cancel each other out, and so on. And one way of summarizing those trade-offs is this left figure. It's a bit hard to parse, but the way I would summarize it is the following: suppose I want to maximize a single goal, like liquidity, or profitability, or network security, and suppose there is also a fourth scenario in which I want to optimize for all of them at the same time. How much of a trade-off would I have? The color in this triangle indicates how close I am to the optimum of the all-goals scenario, and when I go toward an edge, I am close to optimizing for that specific goal. And we see that the bright region is close to the center, but at the same time there is a cliff, which means that if I optimize only for network security, for example, the other goals are not going to be nearly as well served, and there is always this risk of over-optimizing and hitting a cliff where, let's say, suddenly the system is badly un-optimized for everything else. So, yeah, key learnings and conclusions. Just sharing quick learnings here: we went through this loop of ideation, formalization, validation, and back to ideation. We faced the challenge that the model was changing all the time, because we would include a new mechanism as we got more information, which would, let's say, create new designs. And one thing I learned is that the degree to which you can validate the system always depends a lot on how disruptive a proposal is with respect to the base dynamics: if it's not too disruptive, you can simply use the existing model and get insights quickly, but if you change the rules of the game too much, it is going to cost time to make good-enough validations. Also, one thing we learned is that having specific metrics that you can use is very useful, but they must be comprehensive, because if they are not comprehensive, you are going to have infighting: what seems optimal for one participant is not going to be optimal for the others, and this makes it hard to reach consensus. And, of course, if you have tooling that is flexible, that allows you to tweak the mechanisms quickly, that is going to help a lot. And one conclusion that I would share, and this is actually a great source of personal happiness for me: everything that we projected three years ago played out exactly as we expected, so I'm very happy about that. We designed for a sustainable economy, one that would be robust, and every test that the network has gone through so far went exactly as expected. So I'm super happy that, yeah, we got to this point.