So I'm going to be talking about algorithms as policy today. I'm going to start with a meme. Actually, we just heard that memes are important. So we often talk about this concept of "code as law" from Lawrence Lessig. And as much as I agree with a lot of what he says about this, I've noticed that people hear "law" and they think of things as very absolute, as final right answers. So I've been using this term "algorithms as policy." There's a blog article I wrote with a friend of mine about this idea, which we've continued to expand. But what I want to emphasize here is that when we design algorithms, the logic is essentially subjective, or at least intersubjective. We don't have absolute right answers. They're not final and correct forever. So I'm trying to encourage people to move towards a word choice, a semantics, that evokes the essential "we can change this if it doesn't really work for us." Maybe not too fast, because we need to be able to trust the rules, but we don't need them to be set and perfect, correct or right, forever. Building on that idea, I'm going to explain this way of thinking about the world that we use, which places the policy making in the middle, with the macro-level observed outcomes, which I think of as system-level behavior, and the individual-level behavior that is often a response to the rules. I wouldn't stress about this too much. It's a drawing I pulled out of a paper I wrote a couple of years ago with a friend named Sherman. But just to give you a more technical sense of what we mean when we say there's policy making in our algorithms: we're making measurement rules. We're making resource allocation rules.
These are things that might historically not have been done by computer algorithms, but even a procedure written in prose generally describes how you decide, how you measure, who gets what under what circumstances. And this is really the regime of policy making. I've done some other work recently where we try to figure out how to apply concepts from other fields to the way we design software systems, especially software systems that underwrite organizations. We call this a constitutional archetype. We say, well, it can't change too much. If it's too mutable, too easy to change, it's easy to capture. It may have the ability to respond to a stimulus if something goes wrong, when maybe you need to freeze something or make a change in response to an issue. But bluntly speaking, once a group grows beyond a certain size, if it's too easy to change the thing, it's probably just going to get captured. On the other hand, if you make something completely immutable, it might work great for a while. But if the circumstances change, if the context changes, or the needs of the community change, it has no way of keeping up. You just have to throw it away and replace it, or let it die. So thinking in basic constitutional terms, we say: you can change it, but the way you change it is limited. It regulates the rate at which the rules can change. And so we call this the constitutional archetype. There's another paper, with my friend Kelsey, on this subject. And with that, I'm going to go back to a reality check. That was all very theoretical. It gives you a little flavor of the things I've been researching with some of my colleagues, friends, team members. But this is the happy dumpster fire. Every DAO I've ever been in, and even nonprofits I've been involved in, I've never been in one where, from the inside, it didn't feel like a mess.
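One minimal way to sketch that "mutable, but rate-limited" constitutional property is a rule-change queue with a mandatory delay. This is a hypothetical illustration, not any real governance contract; the class and method names are invented:

```python
import time

class RateLimitedRules:
    """Hypothetical sketch: rules can change, but only after a delay,
    which regulates the rate at which the rule set can evolve."""

    def __init__(self, delay_seconds):
        self.delay = delay_seconds
        self.rules = {}
        self.pending = {}  # rule name -> (new value, earliest apply time)

    def propose(self, name, value, now=None):
        now = time.time() if now is None else now
        # A proposed change is queued, not applied immediately.
        self.pending[name] = (value, now + self.delay)

    def apply_pending(self, now=None):
        now = time.time() if now is None else now
        for name, (value, ready_at) in list(self.pending.items()):
            if now >= ready_at:  # delay elapsed: the change takes effect
                self.rules[name] = value
                del self.pending[name]

# Usage: a quorum rule can be changed, but only after a 7-day delay.
gov = RateLimitedRules(delay_seconds=7 * 24 * 3600)
gov.propose("quorum", 0.25, now=0)
gov.apply_pending(now=0)               # too early: nothing happens
gov.apply_pending(now=8 * 24 * 3600)   # delay elapsed: rule updates
```

The delay is the whole point: it keeps the system changeable enough to respond to drift, but slow enough that capture requires sustained, visible effort rather than one quick vote.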
Some of them are incredibly productive. They fulfill their animating purposes. They organize events, or they put on youth outreach programs, or dot, dot, dot. Whatever this thing does, it does it. But when you're in it, when you're a contributor, remember: the forums are full of arguments, the board meetings have debates, sometimes there are real issues, and all manner of things happen. And I think, especially now when we're trying to build computational systems into our social systems, we have to remember that that dumpster fire doesn't go away. And I'm going to argue that it shouldn't, because it means there's actually some information processing happening within that organization. If everybody agreed all the time, there'd almost be no point in having the organization. So I like to think of this as a happy dumpster fire. And I like to imagine that in the process of all that energy, something's getting done. So we can look past the experience of, yeah, OK, maybe it's stressful, maybe we're disagreeing, arguing, whatever; if the organization fulfills a function, you can actually ask whether that function is being fulfilled. If the dumpster fire is all-consuming and nothing gets done, well, then we have a problem. But if you're funneling that energy, that enthusiasm, that effort into the productive outcome, which is the purpose of the organization, then actually I'd argue that's a success. And trying to squelch the dumpster fire of all that energy is going to do more harm than good. So with that, I'm going to talk about a specific DAO that I've been a contributor to: the Gitcoin DAO. We've heard a lot about it, actually. But it happens to have some particular elements that are good for discussing algorithmic policy making. In particular, we're going to talk about sybil detection in Gitcoin DAO.
But I'm going to start with a little bit of a primer, because the thing we're dealing with here is sybil detection for the Gitcoin grants program. As most people know, Gitcoin grants uses a quadratic funding mechanism. It's a matching scheme that gives you more match for many small donations than it would for a few big donations. This, however, incentivizes people to pretend to be lots of individuals in order to magnify their match funding for the same amount of money. The overall financial flow is that the match partners provide funding, which goes into the match pool; they want to fund public goods. That's what they're there for. They've committed to provide money, but they also want that money to be allocated by the small donors. So the small donors come in, they share things on Twitter, talk to each other, and shill projects to their friends. Then they show up on the platform, click through, pick their donations, and those people steer the funds. Except this is an open platform, and pretty much anyone can log in, make an account, and do it. And it's actually not that hard to make a fake account. If you're pretty naive about making a fake account, it's pretty easy to detect. But on the other hand, you could try to emulate the real accounts and maybe make a convincing fake. In the early days of this, there was a review process at the end. As time went on, it moved towards continuous review, where data scientists working within the community were actively building machine learning algorithms to detect and reject the bots and fake accounts. Around the time that Gitcoin became a DAO, I made this map. It's one of many things you can find in the forums. It was an attempt to understand all the things that had to go on in order to fulfill this function.
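To see why faking many identities pays, here's a toy sketch of the standard quadratic funding formula: the match score is proportional to the square of the sum of the square roots of the contributions. Real Gitcoin rounds add normalization, pairwise discounting, and caps that this sketch ignores:

```python
import math

def qf_match_score(contributions):
    """Toy quadratic funding: (sum of sqrt of each contribution)^2.
    The actual match is this score scaled against the available match pool."""
    return sum(math.sqrt(c) for c in contributions) ** 2

# One donor giving $100 vs. the same $100 split across 100 fake accounts.
honest = qf_match_score([100])      # (sqrt(100))^2 = 100
sybil = qf_match_score([1] * 100)   # (100 * sqrt(1))^2 = 10000
print(honest, sybil)                # the sybil split scores 100x higher
```

That 100x amplification for the same dollar amount is exactly the incentive the sybil detection work has to push back against.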
So if this is the purpose of Gitcoin grants, then this is the messy collection of interrelated activities that actually have to happen in order to get it done. One subset of that is the Fraud Detection and Defense group, the data science team that does the sybil defense work. It's just one little organ in this large system, and it's one that fulfills what I think of as critical infrastructure: it makes and enforces algorithmic policies. But it's complicated. You've got all of this data representing all of these different identities donating to these grants. You have a terms of service that essentially says what is allowed behavior. And you have algorithms that help determine the best estimate of whether a particular actor is compliant and allowed to be part of the match funding. And it turns out we don't actually know. Early on, shortly before Gitcoin became a DAO, we did some data science research and found a couple of cliques that looked like they might be sybils. They turned out to be new communities that hadn't been part of the Gitcoin ecosystem before. They came in through a kind of thin linkage: they learned about this one grant, and a bunch of people donated to it, so you get these little clouds. And it wasn't until some ethnographic follow-up, literally talking to people, interviewing them and figuring out who they were and where they came from, that it was possible to discern that this fingerprint that looked like a sybil ring was actually a new community. I forget which country they were in, but they had just learned about Gitcoin. So part of the issue here is that we don't actually have a ground truth that we can know. We have something close: we know that there may be a ground truth, either you're a sybil or not, but directly measuring it is out of scope, at least at scale.
So this kind of program got put in place, where we have to clearly define what is meant by sybil behavior, down to a level that can be algorithmized or mathematized. We have to build algorithms to detect that fingerprint. Then we actually have to have humans evaluate those results, and continue evaluating them, to make sure our algorithms don't become self-referential and biased in a way we can't unwind. And then, presuming that process is still considered legitimate, the sanctions amount to preventing people's donations from counting towards matching. There's an ongoing iteration of this, even within a round and between rounds, to continue to improve the process. But I want to highlight again the algorithmic policy-making aspect of this. Machine learning is not a thing where there's one right answer. In fact, it's one of my favorite XKCD cartoons: what do you do? Well, you add some linear algebra and stir it around until it looks right. It's not quite that bad, but the point is that there's a degree of subjectivity you can't get out of it. And in practice, it takes quite a bit of expertise. So we end up with this odd dilemma: we need technically trained people to do a thing that is most recognizable as a policy-making decision. I'm going to double-click on that and say, here's a really simple parameter inside of a binary classifier. You're trading off between preferring sensitivity versus specificity; it's type I errors versus type II errors. There isn't really a better one or a worse one. You can pick to optimize for A, B, or C, or anything along this spectrum. And deciding whether you want to err towards C or towards A is a thing that, again, has no right answer. What's really interesting is that it's the same problem as this one.
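To make that parameter concrete, here's a minimal sketch, using made-up scores and labels rather than Gitcoin's actual classifier, of how moving a single decision threshold trades sensitivity against specificity:

```python
def confusion(scores, labels, threshold):
    """Classify score >= threshold as 'sybil' and count the four outcomes."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 0)
    sensitivity = tp / (tp + fn)  # sybils caught (avoiding type II errors)
    specificity = tn / (tn + fp)  # honest users cleared (avoiding type I errors)
    return sensitivity, specificity

# Made-up classifier scores: label 1 = actual sybil, 0 = honest donor.
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.35, 0.3, 0.1]
labels = [1, 1, 0, 1, 0, 1, 0, 0]

# Sweep the threshold: every choice is a policy position, not a right answer.
for t in (0.2, 0.5, 0.8):
    sens, spec = confusion(scores, labels, t)
    print(f"threshold={t}: sensitivity={sens:.2f}, specificity={spec:.2f}")
```

On this toy data, a low threshold catches every sybil but flags honest donors; a high threshold clears every honest donor but lets sybils through. Nothing in the math tells you which point on that curve is correct; choosing one is the policy decision.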
Blackstone's ratio is the idea that it is better that ten guilty people escape than that one innocent person suffer. If you think about it for a second, this is a statement about type I and type II errors for a binary classifier, where the classifier is a court, a system that determines innocence or guilt, and you get punished if you're found guilty. Well, that's essentially what's happening in this problem. We're making an assessment of whether you're guilty of being a sybil. Whether we're right or wrong, we don't know for sure. And when we make that decision, you're going to be punished: the public goods you were expecting to support with match funding aren't going to get it. And the match funders, who originally wanted you, the people, to steer the funds, are also being shorted, because they're not getting the benefit of all of these contributors if the weights of some of them are removed. So the idea here is that, although it might be a bit of a stretch, if we really dig into it, at least in the machine learning and data science category, we are very much doing a policy-making thing when we change, iterate, and evolve the algorithms that make these decisions. Now, I will say that the machine learning examples make this connection easier, but it's also true of any other measurement apparatus you're using to allocate rewards in other systems. It's also true of any of your smart contract logic. Ultimately, you're constructing a model of the world, you're optimizing against that model, and you're coming up with an algorithm that will work, and probably work for a while because you designed it in context. But the context can drift relatively easily. The organization could scale up, the members might churn. There are a lot of ways the entity might evolve, grow, or change that change the context, and thus would change the appropriate policies.
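One way to see Blackstone's ratio as classifier math: it's a cost asymmetry, where a false positive (punishing an innocent) is weighted ten times worse than a false negative (letting a guilty party escape). This is a hypothetical sketch with invented data, not how Gitcoin actually tunes its classifiers:

```python
def expected_cost(scores, labels, threshold, fp_cost=10.0, fn_cost=1.0):
    """Blackstone's ratio as a loss function: a false positive
    costs 10x a false negative."""
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    return fp * fp_cost + fn * fn_cost

# Made-up classifier scores: label 1 = actual sybil, 0 = honest donor.
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.35, 0.3, 0.1]
labels = [1, 1, 0, 1, 0, 1, 0, 0]

# The 10:1 cost ratio pushes the chosen threshold up, tolerating
# missed sybils in order to avoid punishing honest donors.
best = min((0.2, 0.5, 0.8), key=lambda t: expected_cost(scores, labels, t))
print(best)  # -> 0.8, the most cautious of the three candidate thresholds
```

The optimization itself is mechanical; the 10:1 ratio is the policy. Change it to 1:1 or 100:1 and the "optimal" threshold moves, which is exactly why tuning it is a values question, not a purely technical one.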
So assuming that we have to do ongoing operations in some way in order to continue to manage these algorithms that are fulfilling a policy-making function, how do we fund this? I want to pause for a second and point out that we almost always talk about funding the creative act, the initial build. There wasn't a thing, and now there's a thing. But like most things you really rely on, they're not done. You need to continue to care for them, maintain them, in some cases operate them; and in some cases governance is an active process that requires measurement, oversight, and refinement. So whenever we have a system that is not simply deployed and done, we have to ask how we're going to fund its ongoing operations and maintenance. Continuing with our Gitcoin example, here is a rough overview of the financial and power flows in the political economy. We've got paid contributors and volunteer contributors, and largely they get funding from votes by the stewards. The stewards have voting power that's determined by the delegation of GTC to them. And there are some mixed incentives here. The fact that tokens are a proxy for all of the other stakeholder groups is not ideal. I'm not the first person to point out that token voting is definitely not the end-all be-all, but due to the way that Gitcoin did its airdrop, at least early on, it was a decent approximation of the community stakeholders writ large. But I will point out that over time that may become less and less true. Or it may become more and more true. We'll know that maybe from doing data analysis; and if it got bad enough, we might want to imagine a world where Gitcoin changed its voting policy to one person, one vote. Actually, they're putting a lot of effort into sybil resistance, and who knows?
Maybe there's a point in the future where their sybil resistance capabilities are strong enough that they decide a policy change to one-person-one-vote delegated voting is better than token-weighted voting. I'm not recommending that per se, just observing that the capacity to change is there. But continuing along these lines, recently the Fraud Detection and Defense group put out their proposal for funding. And to be honest with you, it's a lot of money. In fact, I would argue that the size of this is possibly a little bit inherited from people's experiences with FAANG companies. There's a lot of extra money in those systems, and the general going market rates for data scientists are quite high. And bluntly speaking, the community said, hey, this is too expensive. But it unlocked a lot of debates about how much money is appropriate. And so Joe, who actually put this slide together, and you can see his post if you click through that link, was basically arguing for breaking it out and looking at the different levels of funding based on what would be done with that funding. I thought this was a really productive way to go about discussing what should be done and how much it would cost. But in the end, the community was stuck with this choice: basically, staying lean or continuous improvement. Obviously it's a continuum, but especially going into a bear market, there was a real strong emphasis on staying lean. That also left a lot of the people who had put effort into building this community data science team a little bit in the lurch, because they were expecting to continue doing this operations and maintenance work, and the funding to continue it didn't materialize. And so amongst that community, there were some discussions. And this was a friend of mine, Danilo.
He wrote this great post explaining that this was a totally foreseeable outcome: that there would be a stress point between people who wanted to stay lean and people who wanted to continue to improve at sybil detection. And there was even some discussion about whether the fraud detection and defense sub-DAO would itself want to become a DAO. Is that a good idea? Actually, I'm not sure. I think there are some circumstances under which it might be a really good way for this evolution to occur, assuming it could become self-sufficient, continue to supply Gitcoin with these sybil detection services, and maybe work with other DAOs that need something similar. But at the end of the day, it's not exactly clear. There's a desire by the members of the sub-DAO to continue doing what they were doing before. But since they didn't get their funding, there are a bunch of open governance questions, like: what do we do now? Do we keep doing it even though we're not getting paid? What are the consequences of that? I'm actually very curious to see what becomes of this. I don't think it's determined at all. But I'd like to highlight that I don't think there's a right answer either. The community could do anything, from scraping together a minimalist team to keep it running, effectively as volunteers, to breaking out into their own DAO and trying to raise money, or finding other payers that will literally just pay them to do the work they developed. So the main thing I wanted to highlight here is that we have this challenge of funding the commons, or funding these, let's say, human-provided public goods, that goes beyond the event of their creation. Even though we like to focus on that one because it's kind of sexy, it's a bit like traditional infrastructure, when the governor wants to build an awesome new bridge but the engineering bureau says, actually, we should spend that money to maintain the bridges we have.
Not a new problem at all. So I'm inviting you to think about funding the commons beyond the moment of funding the creation of the commons, and to think about it as funding the ongoing operations, maintenance, and governance of our man-made public infrastructures. And I think we're creating a lot of man-made public infrastructures with our decentralized networks and the platforms we build on top of them. So if we acknowledge that this algorithmic thing is not a final, end-all be-all answer, but a policy-making activity, then we do need to provision resources to continue it. Actually, Evan recently pointed me to this great article, linked down here, from coldtakes.com, which points out these different governance structures. The reason I'm choosing to show it here is that you can see the governance structures for different things differ. The federal government here is complex and messy, and my feeling is that we're actually moving towards more complex and messy things. I actually don't think that's bad. As long as those systems are capable of evolving, rather than being treated as, oh, nope, here's the right governance design forever, then we might see things like a sub-DAO breaking off to become its own DAO, because that's the right evolutionary step. We also might see things die off because they're not really needed. We could see things change in complex and interesting ways. And if we think about governance a little more like policy making, then we can acknowledge that that's not just okay, it's probably desirable. And I'm going to leave this talk off with a somewhat sobering thought. This is a building in Miami that collapsed, I think a year or two ago now; it's hard to tell with COVID.
But the thing that really stood out to me about this story is that everybody knew it was failing. They had meetings about whether to allocate resources towards the kind of maintenance the building needed, and it was repeatedly not ratified. So it's not just a skin-in-the-game problem. It's not just a tokens-and-online-platforms problem. It's actually really difficult to take collective action to maintain something. It just doesn't mesh with the way we think. We want the new shiny thing; we don't want to spend money. We're not going to up our, I don't know what it is, it's not an HOA, but we're not going to up our dues to whatever the thing is that's actually paying for maintenance. Even as the maintenance decays, we apparently will just gamble until it falls down. And I really don't want to see us doing that with our digital systems. I think there's an opportunity for us to get ahead of this, and actually bake into our culture not just valuing the commons, the public goods, from a let's-create-them standpoint, but valuing those public goods, those commons, from a let's-actually-maintain-them standpoint. And so I will end my talk there and take questions. I hope that was useful.