So I'm Mark Miller. I'm a founder and chief scientist of Agoric and a senior fellow at the Foresight Institute, and I've been working on smart contracts since the 1980s. And I've been working on it since then because it's important. In this talk, I'm going to try to give you my perspective on why. But first, a slide on the history of the world since 1820. The red on this diagram represents the number of people living in extreme poverty. The green represents the number of people not living in extreme poverty. Back in 1820, almost everyone was in extreme poverty. And as the population grew, the number of people not in extreme poverty grew much faster than the number in extreme poverty. Lately, we've been seeing something even more wonderful: the number of people living in extreme poverty has been plummeting at an accelerating rate, even in absolute numbers, as the total number of people has been increasing. As even Pinker says, we've been doing something right; it would be good to know what it is. There are two phenomena here that need to be explained. We can view civilization as a problem-solving superintelligence. The number of people that it supports not living in extreme poverty, the green on this diagram, represents its ever-increasing superintelligent problem-solving capacity. And the plummeting red on this diagram represents its alignment with human needs. These are both emergent phenomena that need to be explained. How is it that civilization is both increasing in its superintelligent problem-solving capacity, and that this capacity is deployed so well aligned with human needs? Well, civilization is not an agent. It's not a superintelligence in the sense of being an agent. It's not that it wants anything; it doesn't suffer or not suffer; it has no utility function. But it has a dynamic to it. It has a tropism. And that's what I'll be talking about in this talk. Civilization is that which emerges from the choices that we're all making.
So let's imagine the choice landscape among possible worlds. In this diagram, every point in the space represents a different possible world, and they're organized spatially according to Alice's preferences and Bob's preferences. The red dot in the center represents the current world. We should remember, though, that the overall space we're talking about has seven billion dimensions to it. But most people don't know each other. Most interactions among one group of people are independent of interactions among another group of people. And any particular relevant choice landscape among possible worlds will have far fewer dimensions than the overall seven-billion-dimensional space, and these landscapes will proceed somewhat independently. So we organize the possible worlds vertically according to Alice's preferences: the ones that are higher are the possible worlds Alice would prefer to the current world. And we organize them horizontally according to Bob's preferences: the ones to the right are the ones that Bob would prefer. You've probably heard the terms positive-sum and negative-sum, but not realized how peculiar a notion it is. We can think about preferences as revealed by the choices that the various parties make: the choices that Alice makes reveal her preferences among various consequences. But the notion of positive-sum presumes that we can think of these preferences not just as a rank ordering but as a metric, and furthermore that we can compare them between parties, that we can compare Alice's preferences and Bob's preferences. We invent this notion of utility for preferences as a metric, and then we imagine further that we can meaningfully add them together. So a utilitarian is one who sees a transition to the green above the diagonal, where the sum of utilities is greater than in the current world; any move from the current world into that region would be considered preferable by a utilitarian.
We can divide up that area into segments. The upper right quadrant is the segment containing the points, the possible worlds, that are Pareto preferred by Alice and Bob to the current world. So for example, the blue dot is Pareto preferred to the red dot. What this means is that at least one party prefers the blue dot to the red dot, and no one prefers the red dot to the blue dot. So these are the points that, to a first approximation, we can reach by voluntary cooperation, because somebody would rather move there and no one should be resisting. The upper left is still positive-sum. Being a good utilitarian suggests to Alice that we should move from the current world to that blue dot because, although Bob will be worse off, Alice will be better off by more than Bob will be worse off. But despite Alice's impeccable utilitarian logic, for some reason Bob does not like this plan. We should expect Bob to resist. Furthermore, we should expect that Alice will expect Bob to resist, and that Bob will expect Alice to expect that. So we've got a cascade of expectations of conflict, and that cascading mutual expectation of conflict leads to the Hobbesian trap, where each party, even if they weren't inclined to actually engage in a conflict, might do so first in order to get ahead of the other party. This is also known as the first-strike instability, and as we saw during the Cold War, that instability might even destroy the world. So let's return to our happy upper right quadrant. Not all points in this quadrant are actually reachable by deals between Alice and Bob starting from the red dot. Some of the reasons are human psychology: with a move to the blue dot, Alice does tremendously better, and Bob just gets a few crumbs. So even though Bob is still better off, he might not agree to that deal, because he resents the unfairness of the deal being offered.
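The distinction between Pareto preference and utilitarian summing can be made concrete in a few lines of code. This is purely an illustrative sketch; the `World` type, the scores, and both function names are invented here, not part of the talk:

```typescript
// Each possible world scored by how much each party prefers it
// (higher = more preferred). Scores are illustrative only.
type World = { alice: number; bob: number };

// Pareto preferred: no one is worse off, and at least one party is better off.
function paretoPreferred(from: World, to: World): boolean {
  const noOneWorse = to.alice >= from.alice && to.bob >= from.bob;
  const someoneBetter = to.alice > from.alice || to.bob > from.bob;
  return noOneWorse && someoneBetter;
}

// The utilitarian comparison instead sums scores across parties, which
// presumes the interpersonal utility metric the talk questions.
function utilitarianPreferred(from: World, to: World): boolean {
  return to.alice + to.bob > from.alice + from.bob;
}

const current: World = { alice: 0, bob: 0 };
const upperRight: World = { alice: 3, bob: 2 };  // both better off
const upperLeft: World = { alice: 5, bob: -1 };  // Alice gains, Bob loses

console.log(paretoPreferred(current, upperRight));     // true
console.log(utilitarianPreferred(current, upperLeft)); // true, yet Bob will resist
console.log(paretoPreferred(current, upperLeft));      // false
```

The last two lines are the upper-left quadrant in miniature: the sum of scores goes up, but the move is not Pareto preferred, which is exactly why Bob resists.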
And experimental economics has shown very consistently that humans will refuse such deals when they violate their fairness intuitions too much. So the green cone within the upper right quadrant is the sweet spot: the deals that are more easily reached by voluntary cooperation, by deals between Alice and Bob. Now we should remember that human psychology isn't necessarily as much of a constraint going forward as it has been. We're not the only players involved, and we'll be increasingly sharing our world with our artificial intelligence descendants, which will have mind architectures incomprehensibly different from our own. But we will still want a world in which we can peacefully coexist and cooperate with them. The notion of interpersonal utility comparison, when applied to coexistence with incomprehensibly different mind architectures, becomes incoherent. The whole notion of utility as a metric, when applied to such mind architectures, is incoherent. A coherent notion universal enough to still apply is the notion of revealed preference: that goal-seeking entities make choices in service of their goals, and in so doing reveal preferences. And if utility is no longer a metric, if all we have is a rank ordering, then the diagonal line becomes meaningless. But the distinction between the upper right quadrant and everything else remains robust. Furthermore, even without human psychology necessarily being a factor, the green cone is still the sweet spot for arriving at deals, because Bob, knowing something about Alice's preferences and trade-offs, might hold out for a better deal. There are still more interesting reasons why not every point in the Pareto-preferred quadrant might be reachable by Alice and Bob. And the main one that we should be concerned about, and I would say the main one that this conference is about, is collective action dilemmas preventing us from making progress toward mutually preferred worlds.
And to explain this, let's start with the very simplest example of a cooperative deal: a simple trade, 10 shekels for a gourd. But let's assume the worst case for bringing about cooperation, which is that Alice and Bob have never heard of each other, they're outside of any cultural or legal context that would help them make a binding deal, they're anonymous, and it's a one-time interaction. The deal they have in mind is that Bob would give Alice 10 shekels and Alice would in turn give Bob a gourd. So there's a problem with this. When Bob gives Alice 10 shekels, Alice is 10 shekels richer and Bob is 10 shekels poorer. And when Bob says to Alice, "Where's my gourd?", Alice, quoting Hobbes, says, "He that performeth first has no assurance the other will perform after, because the bonds of words are too weak." And Alice runs off with the gourd. And of course Bob, anticipating that this is what would happen, never offers Alice the 10 shekels in the first place. This dynamic is the Nash equilibrium trap: for both Alice and Bob, there's another world that they would both prefer to move to, but they can't get there through individual, separate actions without getting trapped the way Bob got trapped in the scenario we saw. We can think of that as a barrier, where each step around the barrier is to fall into one of these traps. So to overcome this, long ago we invented technologies of credible commitment: reputation, iterated games, promises, contracts, escrow. All of these mechanisms are technologies of credible commitment that enable Alice and Bob to effectively move forward together to the world that they would jointly prefer. However, our society is full of collective action dilemmas with essentially the same structure writ large, where now these dilemmas might involve thousands or millions of people and play out over years or decades.
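Escrow, the last of those mechanisms, is simple enough to sketch. The `Escrow` class below is a hypothetical illustration of the idea, not any real escrow protocol; all names and the API are invented. The point is that nothing is released until both parties have performed, so "performing first" can no longer be exploited:

```typescript
// Minimal escrow sketch for the shekels-for-gourd trade.
type Party = "Alice" | "Bob";

class Escrow {
  private deposits = new Map<Party, string>();

  deposit(party: Party, good: string): void {
    this.deposits.set(party, good);
  }

  // The exchange happens only if both sides have deposited.
  // Otherwise nothing is released, so neither party is left
  // holding only Hobbes's weak "bonds of words".
  settle(): { toAlice?: string; toBob?: string } {
    if (this.deposits.has("Alice") && this.deposits.has("Bob")) {
      return {
        toAlice: this.deposits.get("Bob"),
        toBob: this.deposits.get("Alice"),
      };
    }
    return {}; // incomplete: no exchange takes place
  }
}

const escrow = new Escrow();
escrow.deposit("Bob", "10 shekels");
escrow.deposit("Alice", "gourd");
console.log(escrow.settle()); // { toAlice: '10 shekels', toBob: 'gourd' }
```

If Bob deposits and Alice never does, `settle()` releases nothing, which removes the payoff for Alice's defection and so removes Bob's reason to withhold his shekels in the first place.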
And these things can be very hard to figure out how to surmount. Our civilization is, to a substantial degree, the evolution of norms and cultures and human institutions and political frameworks that enable us to solve these problems. One of the main problems that civilization has learned to solve over time is how to build bridges over these barriers so that we can jointly cross into worlds that we would all prefer. And what we're doing in our new industry, crypto commerce, with blockchain and crypto and smart contracts, and especially what we're doing here at this conference, Funding the Commons, is figuring out how to use the whole new technology base that we've created. It's a technology base useful for many things, but especially for building bridges over such barriers: bridges that were previously inconceivable, bridges with a degree of trustworthiness that was previously impossible for institutions built simply out of people. So, returning to our theme: civilization, once again, is not an agent. It doesn't want anything. But it does have a dynamic; it has a tropism. By tropism, we're making an analogy to things like phototropism, where plants grow toward the light, and we understand the chemical processes from which that tropism emerges. What I've tried to do in this talk, in an incredibly oversimplified way, sweeping many, many problems under the rug, is nevertheless get at the essence of the mechanism by which each of us, making individual choices and participating in the evolution of our civilization, brings about this tropism of our civilization into the upper right quadrant, with ever-increasing problem-solving capacity that's well aligned with human needs. And we're just getting started. Let's figure out how far we can take it. Thank you very much, Mark. I know Allison will also be joining remotely, and I'd like to welcome to the stage Christine Peterson and Juan Benet. Please grab a seat.
Let's give a round of applause for the panelists. Okay. Super. Thank you. It's great to be here. And can we get Mark Miller and Allison on screen? Is that possible? While we wait for them to join us, I have Juan here and I get to ask him any question I want. I'm going to push him out on his speculative edges, which is saying something, right? Because this is a fairly visionary guy. All weekend we've talked about funding the commons, creating new tools. We haven't talked a huge amount about AI, and now would be a great time to do that. And we're looking here for positive visions, right? Not the nightmare scenarios. We're all very good at those. We're not as good at generating really inspirational positive scenarios. But I have a feeling one might be able to do that. So tell us a little about how you see AI playing into all this. Things like epistemic assistance. Let's hear about it. Yeah, thanks for the question and thanks for the prompt. I completely agree that we are in general very good at focusing on the bad things to avoid, and we tend to lack well-articulated positive visions of the future. So I think it really varies at different time scales. At the multi-hundred-year, thousand-year, million-year time scales, we're talking about a totally different civilization, or species evolution, or really a phase transition into something very different. So it becomes extremely difficult to speculate even a thousand years out, because the kind of improvement rate that we're in means that we might have beings that are fully digital, or different forms of intelligence that can converge and converse with each other by exchanging entire portions of themselves. It's a totally different kind of landscape. So I'll constrain my speculation to just the next twenty to a hundred years, because that's a lot easier to talk about.
I think the positive outlook for AI systems is that, in the pre-AGI moment, we will have increased utility from more and more sophisticated agents and intelligences that can take on a greater range of tasks with increasing degrees of complexity and danger. So there are all kinds of things that are not ideal for humans to do, or that humans are just not as good at, that AI systems will increasingly take on. I believe it was Hans Moravec who described this sort of rising tide of machinery and intelligent systems taking on more and more tasks at different levels of complexity, while humans rise to the hilltops to take on things that are still too complex for AI systems to do. And eventually there's a transition at which you get an AI system, or an ASI system, that is able to do everything that any human can. I think at that point the really positive outlook becomes one where humanity and AI systems co-evolve or merge in some way, and it becomes about thinking of AI systems as a kind of neural extension of ourselves. That, I think, is really mediated by brain-computer interfaces: if we can gain brain-computer interfaces first, we'll get there to a really great degree. To your point about epistemic agents and so on, I do think there is very high utility in being able to use AI systems, and not necessarily full AI systems yet, because even the tooling that we have today is really good at orienting us toward better conversations, more oriented toward truth, more oriented toward knowledge and knowledge diffusion. Sorry, that was a little bit long-winded. Not at all. That's exactly what we needed to hear, and I'm inspired. And Allison is staying on that theme. First I want to introduce our fourth panel member, Allison Duettmann. She is president of the Foresight Institute. Many of you are familiar with Foresight.
And she is first author on this new book, which is coming out very soon; these are bound proofs for the authors. Allison is first author, along with Mark Miller, and the third author is myself. It's called Gaming the Future: Technologies for Intelligent Voluntary Cooperation. So if you take Mark's talk and expand it by a factor of 100 or 1,000 in vision and detail, that's the book. Which Allison did all the heavy lifting on, I have to say. So Allison, we're on the topic of positive futures here. When you think back on all the tools we reviewed in the book, what are you excited about in terms of tools that this crowd should know about for positive cooperation? Well, apart from the many that have been mentioned throughout the conference day, I think you really did a fantastic job at curating such a broad variety of the main tools that we will pay attention to in future years, everything from soulbound tokens to DAOs, impact certificates, quadratic funding. I do think that there are a few more that have also been quite at home in the Foresight community for a while. One of them, obviously, is prediction markets. Robin Hanson, for example, who is a Foresight senior fellow as well, actually started a prediction market at our 1999 member gathering, where people still sent checks to our offices. And now we have this wonderful toolkit that crypto commerce enables, where we can have these amazing systems that are decentralized, immutable, and cross-jurisdictional, that already exist and that we can already use. And we also collaborate very closely with the DeSci community, and for them, for example, you can have replication markets as well, right?
For example, you could predict whether or not a study will replicate, and then use that as an indicator for which studies to try to replicate first. So I think there's a ton of potential in prediction markets, and it could be explored even more. And then, in addition to prediction markets, we talk a little bit in the book about how we can move toward smart contracting. Usually when you talk about that, people say, well, we cannot automate everything; we can't get all the way there. And I think that's true: there are probably always bits for which we can't write an automated contract, and that's probably for the better. But I think we can build toward it using things like split contracts, which is one thing, for example, that was pioneered by Chip Morningstar and a few of the AMIX folks in the Foresight community. Basically, you have half of the contract be prose and half of the contract be automated, and you predetermine an arbitration system, similar to what Kleros is basically already doing, that can come in when a certain part of the contract doesn't work. So I think we can gradually move toward a more automated world. Maybe one of the tools I haven't seen much at this event, at least, is dominant assurance contracts, which were developed by Alex Tabarrok much, much before the notion of smart contracts was even a thing. It's basically similar to Kickstarter, which is an assurance contract: for example, a public good only gets funded if enough people want to fund it too. But the twist here is that, in addition, the first people that chip in get a reward if it doesn't get funded.
And so you incentivize early contributions and make it more likely that these things actually do get funded, because sometimes it's hard to get them off the ground at first. And there are many, many more; we talk about a few of them in the book. But one other thing that I still want to draw people's attention to, and it fits with what King Tong talked about with the tech tree and also what Juan mentioned on AI, is that the notion of crypto commerce is larger than just cryptocurrencies, obviously. It relies a lot on the underlying cryptographic technologies. And there are a few things, like federated learning, that I think we should be paying much more attention to. Federated learning would allow mutually distrusting parties to cooperate without actually sharing their data, and related techniques let you compute on encrypted data. That is really exciting, because it means we don't have to have these centralized agents that hoover up all the data; we can actually collaborate in a decentralized way, even on things as sensitive as healthcare data. And so we can avoid many of the data-sweeping problems that we usually face. There are a bunch more, but I think the reason we wrote the book is just that we need to point out to all of you that you are working on things that are so fundamental and important for the future. Especially once AI comes into the mix, crypto commerce really has the potential to reshape the future from the bottom up. I think it's the meta tech tree of all the other tech trees that we need. And it's really exciting that we can now build the types of technologies and sciences that Foresight is usually interested in, in this really wonderful decentralized way. So I'm just really happy that this community is coming together, and very excited to be a tiny adjacent part of it.
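The dominant assurance contract just described is small enough to sketch. This is a toy settlement rule with invented names and numbers, not Tabarrok's formal mechanism: if the threshold is met, the pledges fund the good; if not, every pledger gets a full refund plus a bonus paid by the entrepreneur, so pledging early is attractive either way:

```typescript
// Toy dominant assurance contract settlement. Names and rates invented.
function settleDAC(
  pledges: number[],
  threshold: number,
  bonusRate: number, // e.g. 0.05 = 5% bonus on the refund if funding fails
): { funded: boolean; payouts: number[] } {
  const total = pledges.reduce((sum, p) => sum + p, 0);
  if (total >= threshold) {
    // Success: pledges go to the public good; pledgers get nothing back.
    return { funded: true, payouts: pledges.map(() => 0) };
  }
  // Failure: each pledger is refunded in full, plus the bonus.
  return { funded: false, payouts: pledges.map((p) => p * (1 + bonusRate)) };
}

console.log(settleDAC([50, 60], 100, 0.05)); // { funded: true, payouts: [ 0, 0 ] }
console.log(settleDAC([50, 40], 100, 0.05)); // { funded: false, payouts: [ 52.5, 42 ] }
```

The failure bonus is what makes contributing a dominant strategy: you come out ahead whether or not the campaign succeeds, which distinguishes this from a plain Kickstarter-style assurance contract, where failure merely refunds you.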
Great. Thank you, Allison. For those of you who want to read the book right now, it is on Substack. You just go to Substack, type in Foresight Institute, and it will take you right to the book. It should be available on Amazon in about a month; Allison, you can nod if that sounds right. And for those of you who love this topic, Foresight is having a workshop in October, October 4th and 5th, on this topic. You can go to our homepage and apply to attend in San Francisco. If you love these topics, that's the place to be. So quickly, Mark, since we're short on time: one thing we'll be covering in October is computer security technologies, because none of this happens without good computer security. Very briefly, can you just mention a small number of tools that this audience should be aware of, in terms of funding-the-commons projects that deserve participation? When I first got involved in computer security, there was a dominant paradigm, which we now label identity-based access control, of which the dominant form is access control lists, in all the mainstream commercial operating systems. Everything that you've touched in a commercial product has been based on that access-control paradigm. There was, historically, another paradigm, capabilities, which we've now, to make some distinctions, clarified with the term object capabilities. At the time I got involved, it was considered a discredited paradigm due to some misunderstandings. Since then we've succeeded at clarifying those misunderstandings and reviving the paradigm; it's getting increasing academic attention, and it's now getting increasing actual systems building. The only practical, high-performance, mechanically verified secure operating system, seL4, is based on object capabilities. The entire Agoric smart contracting stack is based on object capabilities.
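The object-capability discipline can be sketched in ordinary TypeScript, since a closure-held reference already behaves like an unforgeable capability. The purse below is a classic illustration of the pattern; the specific names are mine, not Agoric's API:

```typescript
// A "purse": whoever holds this object holds the authority to spend from it.
function makePurse(initial: number) {
  let balance = initial; // reachable only through the returned methods
  return {
    getBalance: () => balance,
    withdraw: (amount: number) => {
      if (amount > balance) throw new Error("insufficient funds");
      balance -= amount;
      return amount;
    },
  };
}

// Attenuation, i.e. the principle of least authority: wrap the purse so a
// delegate can read the balance but not spend. Handing this wrapper to
// another object is how one object permits another to invoke a third,
// with only the authority it chooses to convey.
function makeReadOnly(purse: ReturnType<typeof makePurse>) {
  return { getBalance: () => purse.getBalance() };
}

const purse = makePurse(100);
const auditorView = makeReadOnly(purse); // auditor can look but not spend
purse.withdraw(30);
console.log(auditorView.getBalance()); // 70
```

If a bug compromises whoever holds `auditorView`, only read access is exposed; the spending authority was never delegated. That is the risk-partitioning property in miniature.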
A lot of the Cosmos infrastructure is based on object capabilities. And I know Juan is also a big fan of object capabilities, and I'll let him speak to what degree those concepts are incorporated into his systems. But the essence of object capabilities is this: the transferable bearer right. You can think of it as most familiar from object-oriented programs, where the safe, unforgeable pointer to an object enables another object that holds that pointer to use the public interface of that object. And in invoking that interface, it can pass the pointer as a parameter. By this simple notion of passing a pointer as a parameter, one object can permit another object to invoke yet a third object. It turns out you can rebuild all of the computer security patterns in this alternate way, starting from these very simple foundations. And in doing so, you can partition risk very effectively. We'll never build bug-free systems. What we should be seeking to do is build systems that are much more resilient in the face of their own bugs, and systems in which the value that can be compromised by any one bug is strictly minimized. There are many techniques for doing that, but the core is the principle of least authority. Mark, briefly. Okay, sorry. In any case, these are very naturally expressed as object-capability architectures, and on top of that we build security patterns for further limiting and partitioning risk. Is there one project you'd point us at, one? I think there is. That's why I'm asking this question. Well, I would take a look at the hardened JavaScript that we've created as a supported sub-language within JavaScript. I've been on the ECMAScript committee, the committee that standardizes JavaScript, since 2007, and I've gotten into the JavaScript standard
the mechanisms needed to support hardened JavaScript. The standardization work is still proceeding in committee, but hardened JavaScript can now be used securely as a library. So I would point to that. I would point to the Agoric stack that's built on top of it, running hardened JavaScript on blockchain, running smart contracts written in hardened JavaScript. And separately, I would also point to the seL4 operating system, which is, as I mentioned, the only practical, mechanically verified, secure operating system. Any future we move into that is safe in the face of artificial intelligence must already be safe in the face of cyber war. And seL4, with which I have no commercial affiliation, represents the most important first step toward rebuilding our infrastructure in a way that is safe, and will remain safe, in the face of artificial intelligence attackers. Thank you, Mark. That was great. We are short on time, so I'm going to make this as brief as I can. This is sort of the wrap-up and conclusion for this session. First, I want to correct a confusion that I've heard yesterday and today again, probably not among people who are tight into the community, but I've heard it more than once now. There seems to be this impression that if you build a future of individual choice with self-sovereign participation, somehow this world is going to be a world of selfish individuals: atomized, isolated, Marlboro men out riding their horses and smoking cigarettes on their frontier, right? And that somehow what we're doing is selfish. This is exactly the opposite of how I see it. And I hope, when we run across this confusion, we all reach out and correct it and say: no, in fact, the goal here is to enable voluntary cooperation and mutualism. You can't have that if you don't have voluntary choice at the lowest level, right? These are levels that interact.
If you want voluntary higher levels, it has to be voluntary choice from the bottom up, or you don't have that. There is another way to have a civilization, right? You don't have to have bottom-up choice to have a civilization; you can have top-down control. We see that in the world today. Those are real civilizations with real achievements. The question is, can we save the one we have here, the one that is based on individual choice? And as Juan was pointing out yesterday, our institutions are floundering today. We're in trouble, right? I've lived long enough to see them deteriorating. They're either dysfunctional, or sometimes they're actually captured: the regulatory structures have been captured by the industries that they're supposed to be regulating. That's why we have lousy healthcare. We have sick care instead; people are making money off our illnesses. Okay, so that's my little rant about the mutualism thing. Let's keep our eyes open and get rid of this idea that what we're doing is selfish in some way. It is very much not that way. Now, it sounded to me, when Juan was doing the kickoff yesterday, that he was challenging us, if I understood correctly, to rebuild civilization from the bottom up because our institutions are broken. And I have to say I want to join that call. I've seen them deteriorating. The only thing that gives me hope is Web3, this community. You have so much potential power with the tools you're building, compared to the floundering we see in society at large, right? You can do this. You actually can do what Juan was calling on you to do. Will it take time? Yes, it's going to take time. How long? About 40 years is my guess. Why 40 years? I've been around a long time, and I've seen how long it takes to do hard stuff. It takes a long time. And that's why, if any of you are giving 110%, you might want to rethink that, okay? Because you are not in a sprint.
You are in a marathon. It's going to take a long time, and it's going to be hard to do what you're doing, especially challenging things in physical space like VitaDAO. They're trying to reform things so that we have a health system instead of a sick-care system. This is a huge undertaking, and it's in physical space, not in bit space; it's in meat space. Things in meat space are harder. Why are they harder? Well, two reasons. One, atoms are very hard to control. They're just very difficult. Chemistry is awful. Biochemistry is worse. Biology is a mess. If you think software is bad, biology is a disaster. And we're made of meat, right? Remember that? We're made of meat. So if we want to be healthy, we have to get this meat thing under control. So one reason is that atoms are hard. What's the other reason? Well, when you do things outside the computer, out in the physical world, there's opposition. There are legacy institutions who are making money off the problem you're trying to fix, and they don't necessarily want that problem solved. This is not because they're bad people. It's just that the system is set up to make money off something that is a problem, like illness. And it's very hard to change that, especially with the regulatory system being captured. So there will be opposition. So things that should be easy take longer; things that should take 20 years take 40 years. Do we give up? No, we don't give up. We don't have a choice. We have to do this, right? So that's why you have to take this attitude: it's a marathon, not a sprint. You've got to pace yourself. It's going to take a long time to do these things, but it's totally worth it. And so my suggestion to all of us, and to Juan in particular, is that we come back together 40 years from now in this room. I think this room will still probably be here in 40 years; it's been here a long time, right? And see how we did, right?
We have a long list of dreams and goals that I've heard at this conference. The one that's closest to my heart is VitaDAO, because the payoff for humanity is so huge. Let's see what VitaDAO can do in 40 years. It's a much harder problem than almost anything we've done before, because human biology is so bad and there's all this infrastructure we're going to have to fight. But number one, they're very smart people, and I hope you're all on the team, either directly or indirectly. Number two, the tools that you're building are super powerful. And number three, our community, as was pointed out yesterday, has significant resources, and this is a cause that our community cares a lot about. So I think there's a very good chance that we can make major progress, so that 40 years from now you'll be back here in pretty damn good shape, because VitaDAO did their job. And on that note, I hope to see you in 40 years right here, and let's see how many of these dreams we can make come true. Thank you.