A lot of thoughts on CryptoEcon. One thing I wanted to quickly mention is that there's a great book called GreenPilled that Kevin Owocki and the folks in the Gitcoin community have been putting together, and it has a very good introduction to a lot of the concepts about how CryptoEcon can create regenerative systems, build structures that are positive-sum for the broader world, and tackle large-scale problems. So I highly recommend looking at that whole meme complex and exploring the world of CryptoEcon through the lens of regenerative cryptoeconomics, regenerative finance, and so on. The core message of all of that is that, at the end of the day, all of our macro systems and our governance systems and so on are about how to coordinate large groups of participants toward some set of shared goals. And of course there are tons of layers of complexity there when you think about different preferences, different goals, different values. But they're all coordination systems at the end of the day. So if you step back and look at macro systems that way, you can start decomposing them and figuring out what structures can yield better coordination principles, better coordination structures, to achieve better outcomes together. And the amazing promise of our time today, and this is why I like thinking about it in terms of solving planetary-scale problems, is that you can use these CryptoEcon structures to have extremely large-scale impact. Just to give a sense of how large that impact can be, think of something like the Bitcoin network, or the Filecoin storage providers. In a very short span of time, these networks have assembled massive-scale computing power and massive-scale service provision: massive networks with an enormous amount of resource consumption and utility, provided and organized by a few mechanisms. So I want to spend most of the time today talking about a core component in these, which we're describing as impact evaluators. I'm not familiar with this kind of mechanism description being applied before, so we sort of coined the term. I wanted to dive together, as a group, into potential designs for impact evaluators. And then if there's time at the end, I want to talk about how you can use CryptoEcon systems to solve larger-scale problems. I gave a talk like this at the last CryptoEcon Day that goes into Filecoin Green and looks at that as a project and as a system that tries to tackle a large-scale problem, breaks it down into smaller and smaller components, and then tries to create incentive structures to solve each one of those problems, to then vault over time into a larger, more coordinated group solving each progressively larger problem. So as an example: if you want to decarbonize the planet, you can start by decarbonizing an industry, and if you want to decarbonize an industry, you could pick something like cloud computing.
You could then create a subset and say, great, let's first decarbonize crypto cloud computing; within that, start with a storage network, and from there start with the Filecoin storage network. Each time you decompose a problem, you start with a smaller set that is much more achievable, figure out the system structures, create positive-sum incentive structures to achieve that outcome, and then use that as an example for the layer above, right? So if we can fully decarbonize Filecoin, we can use that as an example for other crypto networks to decarbonize themselves. Once we achieve that and create a strong incentive structure that gets all of those participants doing it, we can vault that into talking to other industries and saying, hey look, the crypto industry has now fully decarbonized, it is net green, this is totally achievable; other industries, you go do it too. Once you get many more industries doing this, you can slowly chip away at the problem. So I've come to think that CryptoEcon is the highest-leverage toolset humanity has at the moment for solving some of the largest problems we have, because it lets us break down systems, design mechanisms, and deploy them at scale quickly. And if they're working well, they can scale quickly to some pretty enormous scales. So the picture I want people to consider is this: for any problem you might be running into, try to understand what the incentive landscape looks like. Try to think about which structures and mechanisms are causing which actions by which parties. If you are trying to get to a different outcome, think of the barriers in between, the hills that are locking us into an inadequate equilibrium, and think of creating mechanisms and structures that warp the incentive field to get to that better equilibrium. And you can do this progressively, so you can start with one such problem space. Think of mechanism design as a tool that lets you either tunnel through a mountain in that incentive landscape or, ideally, pinch the landscape and push it down, though that's a harder visual metaphor. Then over time, as you solve more and more problems, you can progressively come up with other structures and steer the planet into a much better condition. I think what's different about the world today, post-blockchains, from the world before is that software is eating mechanism design. Think of the computing platform we've deployed as an extremely programmable environment where you can deploy any kind of software to supercomputers running everywhere, including our pockets and our wrists and soon all over; we have trillions of devices. And we have an extremely upgradable environment where individuals and small groups of people can dream up some new structure, deploy it out into the world, and get a very high-quality production feedback loop on whether that thing is good and how well it works, to the point where you can go from dreaming up a superpower to refining that superpower and enabling the world to have it very quickly, in a matter of a few years. That's unprecedented.
Now, when you can use that amazing software platform, with CI/CD and so on, and what you're deploying is not just an app on your phone but new coordination mechanisms for organizing groups of people at various scales, you get the ability to rewrite the planet, right? You can rewrite what we're all doing, and that's tremendously powerful. The current blockchain networks are just beginning to tap into the potential here. So this is why it's super exciting to be studying and working on crypto problems. So I want to talk through a very specific component that a number of us have been thinking about; it's a very useful tool, and we can use it in the blockchain community to potentially solve a bunch of problems. There's a class of mechanisms that we're calling impact evaluators. We still have to formalize these better, but think of them as a simple mechanism that measures some impact and provides some reward. That's super general, but that's kind of the point. These can have different frequencies: they can be a one-time process, or periodic. They can have a scope that is proactive and/or retroactive. Proactive might mean that it assesses potential impact, looks ahead, places some bets, and rewards ahead of time; you can think of a grant system as an impact evaluator like this. Or it can be retroactive: you look at some outputs, measure those outputs, measure the value of those outputs, and reward proportionally. You can think about composability of these systems. You can think about whether the reward schedule is fixed or variable. You can think about the incentive alignment between participants. And very specifically, the biggest CryptoEcon processes in the blockchain space are extremely simple impact evaluators, and part of why they work so well is that they are very stable, so whole industries can be built relying on their structure. The Bitcoin block reward is probably the highest-impact evaluator deployed so far, and it's kind of absurd how much happens through it. It's a very simple process, although it's somewhat complicated in how it's implemented. Unfortunately, the Bitcoin community didn't create a fully programmable environment, so you don't have a really nice description that says: here's an impact evaluator, and here's what it's going to do. You have to piece it together from what the protocol is doing. But at the end of the day, what's happening is that you have a process that measures the hash rate contributed per miner per block time, and at every block time the IE rewards a miner with a probability proportional to the hash rate contributed; this is achieved through the whole hash-puzzle process. So in expectation, miners are getting proportionally rewarded for their contributed hash rate, and this impact evaluator alone has caused one of the wildest energy-consuming processes on the planet. Unfortunately, most of that work is wasted. We're building web3, and we want to do really useful things with that kind of process. But this gives you a sense of how IEs work, and so you can then assess: what are the properties of this thing?
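To make that concrete, here's a minimal sketch of that reward rule written as an explicit lottery. The real network implements this implicitly through hash puzzles, so treat this as a simplified model of the mechanism, not the actual protocol; all names and numbers are illustrative.

```python
import random

def block_reward_lottery(hashrate_by_miner: dict[str, float], reward: float) -> tuple[str, float]:
    """Pick one winner per block with probability proportional to contributed hash rate.

    Bitcoin realizes this lottery implicitly via hash puzzles; here it is explicit.
    In expectation each miner earns reward * (own hash rate / total hash rate)
    per block, which is exactly the "proportional reward" property of this IE.
    """
    miners = list(hashrate_by_miner)
    weights = list(hashrate_by_miner.values())
    winner = random.choices(miners, weights=weights, k=1)[0]
    return winner, reward

# A miner with 60% of the hash rate wins ~60% of blocks in expectation.
print(block_reward_lottery({"alice": 60.0, "bob": 30.0, "carol": 10.0}, reward=6.25))
```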
Well, it has a fixed schedule, it happens every 10 minutes, it runs as part of the Bitcoin blockchain, and nobody can change it. It's retroactive, not proactive. Its variability is an exponentially decreasing emission, but it's discretized, and you can calculate it ahead of time, so everybody knows what the entire reward schedule will be. Everybody also has a view into all the hash rate contributed so far, so people can make pretty reliable predictions about future hash rate contributions from other parties, and about their own contributions, and therefore very reliable predictions about their own reward schedules. When you couple that with the Bitcoin price and so on, you get out of that the entire Bitcoin mining industry. In terms of incentive alignment, this is zero-sum: there's a fixed reward schedule, and it just decreases over time. You could say it's positive-sum because there's a secondary process here, where this IE is actually causing the purchasing of Bitcoin, so by feeding a lot of that economic activity into Bitcoin it's kind of positive-sum, but that's a whole other discussion. You can think of the Filecoin block reward as a very similar impact evaluator. What it rewards is QA power contributed per miner per block time. Here QA power is quality-adjusted power, which includes raw capacity plus Filecoin Plus verified storage, and it rewards a set of miners with that same kind of probability, so over time, in expectation, the Filecoin block reward rewards miners proportionally. The incentive alignment, and actually there's a bug here on the slide, is mixed because of the baseline. You again have an exponentially decreasing emission; in this case it's discretized per block rather than every four years. And it's mixed because you have below-the-baseline and above-the-baseline behavior: underneath the baseline it's positive-sum, above the baseline it's zero-sum, and so on. So what's really going on with these things is a very simple control theory loop. If you're familiar with robotics or anything like that, you have a very straightforward structure: there's some system that you can actuate, the output of the system is fed into a sensor, the sensor translates that into a feedback signal that goes into a controller process, and that gives some input back into the system. Think of these control loops as the building blocks of impact evaluators, where the system is the network of participants, the sensor is the measured output that you see directly in the information on the chain, and the controller is the impact evaluator process that decides what the emission rate is. This very simple structure could be used to reward all kinds of things in the network, and it's one of those things that I think is vastly underutilized across our networks. So, and this is where I want to turn it into more of a discussion with everybody, I want you to think of different problems that we're dealing with in the Filecoin network and think about constructing impact evaluators for them, because once we have programmability, once FVM lands, we can do this, and even today we can start with Ethereum: we can go create these in ETH and start rewarding directly that way.
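To ground the control-loop framing, here is a minimal sketch: a Bitcoin-style discretized halving schedule as the controller's emission rule, plus one tick of the sensor/controller/actuation loop. The halving numbers mirror Bitcoin's published parameters, but the loop itself is an illustrative model, not any network's actual code.

```python
def emission(height: int, initial: float = 50.0, halving_interval: int = 210_000) -> float:
    """Exponentially decreasing, discretized emission (the Bitcoin-style halving).
    Fully predictable: anyone can compute the entire schedule ahead of time,
    which is what lets participants plan whole businesses around it."""
    return initial / (2 ** (height // halving_interval))

def control_loop_tick(power_by_miner: dict[str, float], height: int) -> dict[str, float]:
    """One tick of the loop: system output -> sensor -> controller -> system input.

    Sensor: read each participant's contributed power from the chain.
    Controller: decide this block's emission from the schedule.
    Actuation: pay out expected proportional shares, which in turn
    changes how much power participants bring to the next tick.
    """
    total = sum(power_by_miner.values())                               # sensor reading
    reward = emission(height)                                          # controller decision
    return {m: reward * p / total for m, p in power_by_miner.items()}  # actuation

print(control_loop_tick({"alice": 60.0, "bob": 40.0}, height=840_000))
```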
Think of some structure where all you want to do is reward some process: create a periodic signal that you can draw out, and then give a proportional reward to participants for that process. Once you can frame it that way and declare it, you can start releasing currency. At the beginning you're not going to get much output, but if it's reliable over time, that will translate into a lot of impact. So think about some area of the Filecoin network. Maybe we want more CryptoEcon Days; maybe we want CryptoEcon Days to happen all over the world, because we want people to learn about CryptoEcon; maybe we want other kinds of events. Maybe we want to think about clients and client onboarding, so we want to speed up the onboarding rate of the network; maybe there could be an impact evaluator around that. There sort of is one already, but it's not as powerful as it perhaps could be. Or what about the data onboarding problem? Maybe we want to shuttle a bunch of drives from one place to another in physical machines; could we turn that into an impact evaluator? Maybe we want more on-ramp type things, certain types of material to show up in the network; could we create impact evaluators around that? So to take some examples, I want to work with this group to take an example and break it down into something we can build an impact evaluator over, because if we get good at this, we can steer the whole community toward larger-scale problems. So who has a candidate problem they want to solve in the network? Raise your hand if you have a candidate problem you want to solve, even if you don't know how to solve it. Yeah. I mean, it's kind of one that you mentioned, but onboarding, and just to put a finer point on it: Stefan Magdalinski at the Filecoin Foundation and I have been trying to work out some ways of understanding this and creating metrics, essentially for usability, right? One of the things that's hard right now, because we're not really incentivized directly to tackle it as an ecosystem as a whole, is simply downloading and installing Lotus, setting it up, and joining the network. It's hard, and it's no one's fault that it's hard; it's just that there is no thing in the system that specifically rewards fixing that. One of the things that we can do, and do do, and in some ways it's bad, is that we have multiple funnels to get to that point. If you search for how to install Lotus or how to get Filecoin running, you'll hit five or six different ways of doing it. So one thing we can do is measure between those funnels, right? We could pick out something in those systems, say one of them is "okay, now go to this Filecoin faucet and pick up these things," and we can then detect that someone is going through that funnel and also see whether it trails off. So anyway, that's 3% of the way through the kind of thinking we would need to do. Let's maybe, because that problem is very broad, try to distill it down. How about onboarding large clients? There's some set of clients out there. Right now we don't have a measure in the chain of who the potential clients are. We could get that; we could get some information to feed that into the chain. And we don't know what their user experience is, right?
We don't have a way to rate the UX of onboarding data into the network, the way you get to rate a Lyft car or an Airbnb stay. So you could get some data from clients, quantitative and qualitative, integrate that into some feedback signal, and feed it into some periodic reward schedule. So how would we do that? What do we need to add to the system to create a structure like this? We need some treasury with an incentive that goes toward the set of participants who can demonstrate that they've helped onboard clients. We need to get some data from clients somehow and feed it into the chain as well, some qualitative feedback output. So who can think through how to piece these things together? You need some verifiers to check that the people who report that they onboarded clients are telling the truth. Yeah, so you do need to be able to verify that the clients are who they say they are and are telling the truth. There are some cases and resolutions later, too. Yes, yep. So the feedback signal that you're getting needs to be verifiable; that's certainly true. But even before we get there, how can we compose these signals to produce an impact evaluator? You could measure the savings on storage costs from switching over. You can measure the savings, yeah. True, so that would be a competitive-analysis type of thing, one that you could advertise to the whole world. Once you're doing it well, you can advertise that to the whole world and draw more clients. I'm thinking even more basic than this. If I were to start programming a thing today, what do I do? What do I write? Let's break it down into the component systems; I want to get us all thinking in algorithmic terms so that you can start deploying these things. I'm curious to measure how many new clients onboard. Great, so we need the set of potential or actual clients out there and what its rate of change is, right? So we need to know who the clients are. We need to track them in the chain. Right now we don't know; they're not on there. We know sort of, but through mechanisms outside the chain; we don't know directly in the chain. So we need a set of clients, and we need to know how much storage they have an interest in putting in, or how much storage they have already put in. What else? Let's say we treat the DataCap allocation that went out as potential client interest in the pipeline, and then you track the throughput of how much DataCap is being issued from a chain perspective. And you also know how much DataCap is being consumed. So you have two flows: the inflow of DataCap, and then the DataCap being consumed. You'd expect that to be a funnel. So then we can establish, let's say, this inflow of DataCap, the potential clients' interest in the pipeline, which gets captured on the chain when they go through this process. Maybe you're going to onboard 10 petabytes. But at the same time, you also know how fast that DataCap is being consumed, right? So you can say, oh, how many deals landed on the chain, and what percentage of the allocation is that?
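As a back-of-the-envelope sketch of the funnel being described here: treat DataCap issued as the inflow of declared client interest, and DataCap consumed by deals that landed on chain as realized onboarding. In practice both numbers would come from chain state; the helper and figures below are placeholders, not real chain reads.

```python
def datacap_funnel(issued_pib: float, consumed_pib: float) -> dict[str, float]:
    """Conversion metric for the client-onboarding funnel.

    issued_pib:   DataCap allocated to clients (declared interest, funnel inflow).
    consumed_pib: DataCap consumed by verified deals on chain (funnel outflow).
    The conversion rate is the kind of signal an onboarding IE could reward against.
    """
    conversion = consumed_pib / issued_pib if issued_pib else 0.0
    return {
        "issued_pib": issued_pib,
        "consumed_pib": consumed_pib,
        "conversion_rate": conversion,
        "stalled_pib": issued_pib - consumed_pib,
    }

# e.g. 10 PiB of DataCap granted, 4 PiB landed in verified deals so far.
print(datacap_funnel(issued_pib=10.0, consumed_pib=4.0))
```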
But right now the DataCap deployed to clients only sometimes goes to the actual clients; sometimes it goes to the SPs who are doing the work for the clients, right? A lot of SPs are approaching clients, doing the work with those clients, and running the whole process for them, all the Filecoin components. So we have to get the actual intent from the client and how much storage they might want to bring in. Then if that's the case, I mean, I feel like we're moving off the protocol; there are many, many ways to do that, right? I'm saying, let's try to compose it into a protocol improvement. We could write a FIP someday and say, let's add this set of components to improve the storage onboarding experience. Sorry, I have a hypothetical, it could be off, but maybe people can just indicate their interest, right? Stake some Filecoin and say, hey, I want to bring this bunch of data onto the Filecoin network; that could now be an indicator on the chain. Did anyone say something? So you have some indicator of interest, and then you need to measure the output, it actually happening, and get some qualitative or quantitative signal out and feed it back. How do we do that? What might be a good way of getting feedback from the clients on how well that went? I have a different answer to a different question: this reminds me of the Amazon affiliate program, right? You need some metadata there somewhere. This is sort of trying to cure the last thing, which was: where does the DataCap live? Right now we don't have anything on chain that says it came through this route. So maybe a tag that you would associate with DataCap saying it came through this funnel, right? So you have a destination. One of the larger meta-points in a lot of what you're saying is that the client onboarding funnel is poorly instrumented in general, and we need way better instrumentation of how that data is coming in, to then measure over time which things are providing which outputs, at what quality, and where people are getting stuck and why. So you have a point: maybe we should start by instrumenting the client funnel, figuring out all the different kinds of clients, all the different groups, and getting reliable, robust information about their experience and how well they're progressing. Once we have that, we can maybe piece it into an impact evaluator later. We maybe started with one that was really hard; important to us, but really hard. Yeah. I have a question on that. In this case, the extra value that is created can actually be monetized, in a sense, right? And it can be distributed across all the people who contributed; this is an easy monetization problem. I'm interested in how you would do something like that if it's not as easily monetized. So, that's heading in a totally different direction, so I'm going to bring it back to impact evaluators. What I wanted this conversation to get to is making things of this type, because these things are super easy to deploy, and once they're deployed and stable, and people can rely on them and have data about their performance, then people can start making predictions about the future and can start building entire businesses and industries over them.
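To show how small "things of this type" can be, here is a minimal declarative skeleton of a periodic impact evaluator: a frequency, a measurement function, and a fixed reward pool split proportionally. Every field and function name here is an illustrative assumption, not an existing API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class ImpactEvaluator:
    """A mechanism that periodically measures some impact and provides some reward."""
    epoch_seconds: int                        # frequency of evaluation
    reward_per_epoch: float                   # fixed schedule participants can rely on
    measure: Callable[[], dict[str, float]]   # retroactive impact score per participant

    def run_epoch(self) -> dict[str, float]:
        """Retroactive, proportional payout: you get exactly what the measurement says."""
        scores = self.measure()
        total = sum(scores.values())
        if total == 0:
            return {}
        return {who: self.reward_per_epoch * s / total for who, s in scores.items()}
```

The point of keeping the declaration this small is the stability argument above: participants can read it, predict it, and bet on it not changing.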
So you want to get to something as simple and straightforward as this, deploy it, and enable a large community of participants to bet on it not changing. So we've got to get to simple things like this. Maybe we can go from data onboarding to a different problem. All right. What about the idea of an auto-renewal? If somebody's having a good experience, they just say: my 12-month contract is over, I'm going to auto-renew the exact same thing. That would be kind of a vote. It seems like a verifiable, mechanical thing that shows a vote of confidence, a thumbs-up sort of thing. Yeah, so right now we don't have a good way of getting after-the-fact feedback from individual participants about their experiences. That's uncommon in blockchains, but very common in our day-to-day activity on tons of applications, right? Car services, hotels, browsers: everything asks for your feedback and rates your output, and they use that signal to optimize their system somewhere. So maybe that's a very trivial addition: find a good way to poll participants for feedback. We have to make it verifiable in some way, to the point from before; we have to make sure that data is of high quality. But that might give us a lot of signal. So how about we think about events. Suppose we're planning next year's events, and we want tons of community-organized events around the world. How might we design an impact evaluator to cause that? I guess you have to work out what you want from the events. Often it's about trying to bring people into the community, to hire people. So if you measure how many people you're actually hiring based on attendance at the event, you can start to measure the impact. Yeah, so that's a specific thing: you have a certain set of qualities you want to gauge about the event, and you've described one measurable output from events, people hired. You can also measure attendance, you can also measure locations, and there's a lot more, yeah. So attendance, and others? Media mentions. You can talk about recognizability of Filecoin, or whatever the thing is that the event is about. So we have a bunch of signals that a lot of different groups care about for what makes a good event. And we need to get some reliable-enough measurement; it doesn't have to be perfect, but it has to be reliable enough to feed back into a signal. So suppose we can do that: suppose we have this set of measures, and after an event happens, we can go back and score it. For each one of these, I'm convinced all of us could come together and find pretty good ways of getting that data off the internet and computing a score. Suppose we now have a set of events happening, each with some set of properties and their scores. How do we go from that to an impact evaluator? We look at the frequency that we're going to measure it by. Yeah, exactly. So what frequency? Let's propose some. Probably directly before the event, directly after the event, and, you know, a month after. Yeah, so we could design an impact evaluator that, as every event happens, measures that event.
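Sketching that event evaluator: a weighted score over the signals the group just listed, fed into a proportional payout per measurement period. The signal names, weights, and example numbers are assumptions for illustration, and in practice the raw signals would need normalizing onto comparable scales.

```python
# Illustrative weights over the event signals discussed above; choosing these
# is exactly the "what do we want from events" question.
WEIGHTS = {"attendance": 0.4, "hires": 0.3, "media_mentions": 0.3}

def score_event(signals: dict[str, float]) -> float:
    """Collapse the after-the-fact measurements of one event into a single score."""
    return sum(WEIGHTS[name] * signals.get(name, 0.0) for name in WEIGHTS)

def reward_events(events: dict[str, dict[str, float]], pool: float) -> dict[str, float]:
    """Retroactive payout for one period: each event gets a share of the fixed
    pool proportional to its measured score."""
    scores = {name: score_event(signals) for name, signals in events.items()}
    total = sum(scores.values())
    if total == 0:
        return {name: 0.0 for name in events}
    return {name: pool * s / total for name, s in scores.items()}

print(reward_events(
    {"berlin_meetup": {"attendance": 120, "hires": 2, "media_mentions": 5},
     "lagos_meetup": {"attendance": 300, "hires": 1, "media_mentions": 9}},
    pool=10_000.0,
))
```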
So you could have a proactive grant-making thing that gives some capital for people to try putting on an event, and then you measure the outputs and reward retroactively. Or how about we do something like what these block rewards do? What's extremely powerful about those is that they're like auctions. The way they work is that the impact evaluator walks up to an open network and says: there's this amount of reward, and I will deploy this amount of reward per unit time, proportionally to the overall total quality that I measure. That enables all participants to put in whatever they think is worth adding for whatever the reward is worth. I'm not saying this is literally an auction, but auctions tend to be a really good way to let participants in a network find the best price. Yep. What about the demand side, like a lottery or something? So maybe you want the people attending to give feedback as they enjoy the event: I liked this session, I didn't like that session. And the more feedback you give, the more lottery tickets you get for some sort of reward at the end. That might incentivize people to show up in the first place, and might incentivize people to give feedback along the way. And it sort of reminds me of the Bitcoin block reward: someone gets all the value, but everyone's motivated to take these actions. So that feedback would probably all feed into one of the qualities we're measuring about an event, right? So let's write that down. So, we've been talking about this as a very automated system, and I'm curious about your thoughts on weighing the value of a rapid prototype, of doing something very manually, because you don't know what inputs you're actually going to care about and you want to quickly go through a bunch of them; I'm thinking more about data onboarding right now. Versus having something more programmed and automated: when humans are doing it, there's always the temptation to not be very stringent about what things you're evaluating, whereas if you're doing it programmatically, it's more well-defined. What are your thoughts on that? So I think what's extremely useful about programmatic structures that are regular and dependable in the long term is that you decrease the transaction cost of entering into those conversations. However, they're much less friendly and forgiving, right? For example, in a grant program, there's all kinds of back and forth that is instrumental in refining grants into a good structure that is likely to produce really good value. It's not clear yet that you can have a good automated grant-making thing that doesn't immediately produce bad output and isn't incentive-misaligned. So I think there are a lot of things that do require a lot of individual involvement in human-oriented programs. But the transaction cost is very high: for each one of those things, you're going to have a very high cost of interacting, describing the potential value, and aligning together on a good outcome.
And what's extremely powerful about these IEs in the cloud, so to speak, is that once you put them out there, participants can transact directly with the IE, and you drop the transaction cost between all participants. So you create a much more open environment. Now, they don't work well for things that are very fuzzy and hard to quantify. For things that are really qualitative, or where you are trying to align on potential predictive value, they don't tend to work well. But for very concrete things that you can turn into measurable, quantified impact over time, they can work really well. And I do think that even with something like events, which are extremely qualitative, you can get some measurable outputs and turn that into enough of a signal to know which events in a period of time were regarded by the community as the most valuable, so to speak. And out of that, you can create an impact evaluator type thing. Relatedly, and not just for this example: what are your thoughts on overfitting to data that is already measurable, rather than creating new mechanisms of measuring? Because you're always tempted to just use the... 100% of these overfit. You get exactly what you reward. So, totally: the Bitcoin community did not want to maximize the amount of hash rate in the world, right? I can't really speak for Satoshi here, but I really don't think Satoshi and the Bitcoin community sat around thinking, how can we turn the entire planet into hash rate and put every single computer into an ASIC? So 100% of these overfit, and that was a misapplication of an IE, right? Releasing this thing into the world created this massive consumption of energy, this wasteful process, and now we have this runaway process that we have to go and fix. We've learned since then; we're making better things that are now more attached to value. For example, the Filecoin Plus useful-storage thing is a huge upgrade on just capacity, and even capacity is a huge upgrade on wasted hash rate. So there are some domains where this kind of thing can be very helpful, and many cases where it won't be, and you don't want to incentivize something to overfit too much. But there are many problems, at enough scale, that you can turn into a pretty good quantifiable output that you can then reward this way. For some problems you can use these; for a lot of problems you can't. Yeah. Can we quickly go back to Danny's question? I'd love to hear your thoughts: can this help with onboarding clients, right? Could we use, like, unique datasets? Maybe we use some algorithm that matches datasets and says, if it's 80% different, it's a unique dataset. And then the variability would be increasing instead of decreasing, because as you onboard more and more, it's harder to find unique datasets. Would that be something that partially solves it? Yes, that's possible. I mean, you could say, hey, there's a class of data that we want to onboard. Suppose we say, for the Atlas project that you'll hear about, you want to get geospatial data onto Filecoin.
And right now you want to run an IE to get pictures of the planet added at different resolutions. So what you can do is set up an impact evaluator to reward participants that contribute to the network pictures of the world taken between one time and another time. You end up with a hard problem where you have to verify those things, verify that it was done correctly. But you can at that point reward all participants for bringing that specific type of data into the network, and it will cause a lot of it to appear. But you'd better be good about your verifiability, because otherwise you're going to get the wrong output: the network will overfit to what your reward says, not to the intent of your reward. This is where the letter of the law really matters; if you get that wrong, you'll get the wrong output. Thank you. Are we on schedule? Yeah, no, we're good. No, no, I'm happy to pause. I wanted to get people thinking about impact evaluators. What I want to leave you with, even for non-trivial systems, is this: you can generally take these larger-scale problems and break them down into smaller ones, and then, if you can come up with a concrete way to measure the output and feed it into a periodic reward schedule, you can move mountains. But, to the earlier point, you'd better know what you're measuring. All right, thanks.