And the subject of this panel is near and dear to my heart. For 20 years, my wife has been telling me that she doesn't understand what I do. And I'm really bad at explaining it. And what we need are really good narratives around issues of complexity, science, technology, and society. It is a truism, but nonetheless true, that the problems that face us today are really complex, are wicked, are very difficult to know: what cause-and-effect chains exist, how you push on the problem in one way and get something completely different that you didn't expect on the other end. And yet the tools that we have for talking about these problems, whether it's climate change or immigration or tax reform or the internet and privacy, often seem very impoverished. And in desperation, I've found myself increasingly turning to narrative as something that can capture complexity in ways that standard expository discussion and argumentation don't seem able to do very well. So that's what we want to explore today. And our three panelists are Vandana Singh, Dave Rejeski, and Karl Schroeder. And I hate little capsule biographies read by people who don't know the people they're introducing. So instead, although I do know Dave, I'm going to ask the panelists today to just talk for a minute or two about what they do and how they think about the connection between what they do and complex sociotechnical problems. So why don't we just start with you, Vandana?

All right, well, my name is Vandana Singh. And I am a science fiction writer, but I'm also an associate professor at a small university, Framingham State University, where, among other things, I research climate change science pedagogy. And I'm very, very interested in climate change as a science and also as a wicked problem. And I write about it. I stay up nights thinking about it. And so that's my big thing.

Hi, I'm Dave Rejeski.
I run the science and technology innovation program at the Woodrow Wilson International Center for Scholars. So I get to play with all kinds of technologies. I do work on nanotech, synthetic biology, cognitive neuroscience, video games, citizen science. So essentially, I get to play with those things that are indistinguishable from magic, as Arthur Clarke said. But, and I think this is an important point, Edward Tenner at Princeton said that these are also the things that promote self-deception, precisely because they're so magical. And so these technologies are always skewed much more towards the benefits than the risks. They're skewed towards centralized control and not decentralized control. So you're inherently at a very dangerous fulcrum when you're dealing with these technologies. I don't believe there's actually a deficit of innovation. I think there's a deficit of futures. Our inability to think coherently about the future is the most dangerous thing that we're dealing with right now. And essentially, we're wasting the prefrontal cortex of our brains. I mean, we're essentially wired to do this and we don't do it much. And so we could talk a little bit more about this; I've been involved with science fiction writers for 10 or 12 years for specific reasons we can discuss, but one of the issues for me is how do you create very compelling narratives about the future?

I'm Karl Schroeder. I'm a science fiction writer and foresight practitioner, or futurist; we don't really have a good term for that. For the last 15 years or so, I've been working for clients like the Canadian government and army on expressing and communicating complex ideas, particularly about the future. Actually, a book I'd recommend about that is Dynamics in Action, whose author is escaping me right now, but the thesis of the book is that complex dynamical systems can only really be understood in terms of their context and history.
When you write context and history, that's something we call a narrative. It's called a story. So the best way to actually analyze certain kinds of real-world problems literally is narrative. I did my master's thesis on translating foresight findings into fiction as a way of communicating them in a concise, zipped package. And I continue to work on that kind of exploration.

Great, thanks. So I'm hoping to put your actual stories at the center of our conversation, and I'll get to them in a minute. I wanted to start by asking Dave, who's the policy wonk and the odd man out in the group, to expand on your point about the value of fiction in addressing policy problems. Because as you say, these emerging technologies are always expressed in terms of the wonderful things they do, usually with a deficit of imagination about other possibilities. And then it's turned over to people like me to figure out how to think about the policy frameworks around them. And obviously, you found that to be unsatisfactory and have therefore pulled in people like Vandana and Karl to help you. So can you talk a little bit about why you've done that and how it worked?

Sure. I mean, I'll start with a depressing story. In 2000, I went around and interviewed all the major heads of the policy and planning offices in the government. Of course, they were free to talk because they had just left government: Health and Human Services, the State Department, which had a premier policy and planning unit. Basically, I went through every agency. I asked them, how far ahead do you think? And they said, not very far. I asked, what did you think you were going to do before you got into this job? They said, I thought I was going to think about the future. And none of them did. And so I started to go agency by agency and asked, who thinks about the future? I'm not talking about the thing that you submit to OMB, your strategic plan.
It goes out four or five years, and you use it essentially to maximize your budget for next year. Who actually thinks about the future? And I ended up at NASA. I talked to a guy there who said, let me show you our 200-year plan. That's great. OK, so you got my attention. And at that point in time, NASA was run by Dan Goldin. And Dan's senior advisor was a guy named Yoji Kondo. Some of you sci-fi writers know him as Eric Kotani. So here's a guy running NASA whose primary advisor was an astrophysicist and a sci-fi writer. And so we actually worked with NASA to start. We had a meeting where we brought in all kinds of science fiction writers with people from all over the government. Arthur Clarke dropped in via satellite. We had Charles Sheffield, who's unfortunately passed away. I sent a TV crew out to do interviews with Greg Bear, Elizabeth Moon, Joe Haldeman. So incredible numbers of sci-fi writers, just to think long term with government agencies, and it was an incredible exercise. But I couldn't sustain it. So the thing that I started to think about is how you would actually create an interface that would allow this to happen on a constant basis. Which I think is part of what you guys are doing here. Because I think it really is important. Because I think the narrative is important. But there's another thing I want from the narratives: I want a narrative where I can touch the buttons. And that's one of the things that got me into video games. So I want to be able to push on A and see whether B happens. Not only do I want a story, I want a story with an algorithm behind it, because I'm not going to understand complexity until I can play with it. And so my ideal world is: can I actually engage the people who can really write great narratives, and put those together with the models and the complexity behind these systems?
So there's climate change, or intervening in pandemics, whatever it is. But I do believe something that George Lakoff, the cognitive linguist, said: that framing precedes policy.

So one thing we can get to later is the question of who gets to write the algorithms and why we should trust them. You guys in some sense are writing the algorithms. So let me take the liberty of quickly describing your two stories, and then maybe we can discuss them. But again, I want to focus on this theme of how you think about bringing a really complex, open-system type of problem into the discipline of a narrative in a way that respects the complexity of the problem. So, two wonderful stories; if you haven't read them, please go do so right afterwards. Karl's story is called Degrees of Freedom. I've never done this before. It makes me nervous. So if I screw up, you'll just correct me, right?

I'll silently judge you.

Yeah, please. So it's a story about how indigenous people in Western Canada, using some fascinating speculative decision-support technologies, work to undermine the power and influence of the centralized national government. And the narrative plays out through the conflicting roles and worldviews of a father and son, which provides a lot of the narrative tension. But complexity itself provides some of the narrative tension: the wickedness of the problems of equality and environment that Karl's concerned with is an explicit theme in the story. And these hybrid decision-support social media technologies are portrayed as crucial for helping to manage the complexity. And in that way, there's a strong connection to Vandana's story, which is called Entanglement. It's a succession of brief, thematically related narratives about individuals, in most cases, I'd say, kind of sad, lonely individuals, working in different ways in different parts of the world to try to address climate change.
And through the story, we see a kind of growing, almost mysterious evidence that the separate narratives, as separate narratives usually do, have a more explicit connection, which, of course, we only learn about near the end. As with Degrees of Freedom, we learn about an intriguing speculative social networking technology, which in this case provides this kind of almost mysterious connection among the characters. So here, the complexity is portrayed differently: through the rich diversity of places, the environmental difficulties, the personal challenges behind each narrative. So what I want to start off with is what seems to me to be kind of an interesting tension here, especially regarding our previous panel, which is that you both find great hope, I guess this is even a positive hieroglyph, in the capacity of these complex cognitive social decision-support technologies to actually bring people together, and bring societies together, to have shared understandings that enable maybe better collective action. So talk a little bit about your view of technology, especially perhaps in light of the complexities that we've already been discussing today.

Well, a few years back I noticed something about science fiction, and by extension the community that reads science fiction. And that is that we're perfectly happy to speculate about wild, amazing advances in biology and nanotechnology and artificial intelligence and materials and space travel. The one thing we will not imagine is that we could improve the way we make decisions. It's a gigantic blind spot. And I wanted to explore that and look at that. And when I did, I discovered that in fact we already have amazing tools, technologies, and methodologies that do this, that improve the way we make decisions both collectively and in groups. But people don't know about them, because they assume in advance that it can't be done. So all this story does is trot some of those out.
None of these are my ideas. Things like structured dialogic design: it's been around for ages. It was used in Cyprus. It's a governmental tool used by some of the First Nations bands in the US as their governmental modus operandi. I'm just putting out things that already exist. And I'm a little bit shocked that this is seen as innovative. Shocked.

Well, in my story, I envision technology in a certain context. And in complex systems, I think context is just as important as the thing you're looking at. So if I may, and I know that many people in this room already know this, some better than I do, let me attempt to define what I mean by complexity. And then I'll weave in how I saw technology as part of it in my story. So a complex system is one with many interacting parts where the interactions are strong. The interactions are typically non-linear, so that the behavior of the system as a whole cannot easily, in fact cannot at all, be obtained or even studied without an understanding of the interactions. It's another way of saying that the whole is greater than the sum of its parts. And such systems have interesting features, like feedback loops and tipping points; sometimes complex systems are chaotic. So the way I see it, and I come from an aggressively democratic nation, where having arguments is a good thing, in fact there's even a book called The Argumentative Indian. So the way I saw it is, all right, what's the context and what are the struggles that are playing out? Instead of having technology developed in a top-down way, because technology has an impact on people and on society and on lives, and it changes things. And it's not always in the interests of the people who use it, as we've been hearing during the course of the day. So what I wanted to do was invert that and violate the third forbidden H, as Neal talked about.
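The definition of a complex system given above, strong nonlinear interactions, tipping points, and occasional chaos, can be made concrete with a minimal sketch. This is an illustrative aside, not something from the panel: the logistic map is a textbook one-line nonlinear system that shows exactly the behavior described, a small push in nearly the same direction yielding something entirely different on the other end.

```python
# The logistic map x -> r * x * (1 - x): a one-line nonlinear system
# whose behavior changes qualitatively as interaction strength r grows.

def logistic_trajectory(r, x0, steps):
    """Iterate the map from x0 and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Weak nonlinearity: the system settles to a predictable fixed point.
calm = logistic_trajectory(r=2.5, x0=0.2, steps=50)
print(round(calm[-1], 4))  # -> 0.6 (the fixed point 1 - 1/r)

# Strong nonlinearity (r = 4): chaos. Two almost identical starting
# points diverge completely -- you cannot predict the whole from the
# single simple rule without playing the interactions forward.
a = logistic_trajectory(r=4.0, x0=0.2000, steps=50)
b = logistic_trajectory(r=4.0, x0=0.2001, steps=50)
gap = max(abs(x - y) for x, y in zip(a, b))
print(gap > 0.1)  # -> True: the trajectories have decorrelated
```

The point of the sketch is only that the whole cannot be obtained without the interactions: nothing visible in the rule itself predicts the divergence.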
And I wanted to see what could happen if people developed a technology to serve a need through some kind of a community experience, instead of it being invented for you by Apple, for instance. And this need, in this case, is global climate change, climate disruption. And so the way I was looking at it was that this is a wicked problem. The way I conceptualize a wicked problem is one that arises in a complex system. And we have interlocking complex systems here. We have the climate system, which is complex. We have societies, which are parts of the whole equation, which are complex. We've been told we live in the Anthropocene, but some scholars are challenging that and saying we live in the Capitalocene, essentially saying that climate change is the logical end result of runaway capitalism. So it's not so much a species thing, as Anthropocene suggests, as a particular kind of paradigm. And so what I wanted to look at was: what if you looked at technology as arising from people who are concerned about things? And when you look at global problems, when you look at wicked problems, you have to shed, or perhaps dilute or put away, certain aspects of your identity and put on other cloaks, in a way. So for instance, it's less important whether you are American or from Serbia or from India when you're looking at a global problem, because nature doesn't give a damn about political boundaries. And so in a sense, in those situations, for those contexts, it makes sense to put on a kind of global citizen's hat, and we don't know how to do that. We have things like the internet, we have Facebook and so on and so forth, but they haven't gone as far as I would like to see, science fictionally, in creating a global environmental consciousness. So in my story, there are two ways that technology comes in.
One is through a massive citizen science project on the Arctic Ocean, which is ground zero for climate change, because temperatures are rising there twice as fast as the global average. And so there's a massive citizen science surveillance project that involves, incidentally, hacked drones, among other things. And then the other way that technology comes in is through a kind of internet-based device that connects people from different countries and different backgrounds through what I imagine as a community of strangers: people you don't know connecting to you in needed moments to make a difference. So sorry, I went on a little long, but I wanted to put it in the right context.

So I want to pursue this notion of aggressive democracy, which I think is really interesting. And Karl, even if I shouldn't be surprised, I thought the description of the process and technology of getting people in a room to make sure that they each understand one another's perspective, so that they can actually have a conversation where, when they're nodding, they really understand what the person on the other side of the table is saying, seemed to address one aspect of a problem that I haven't heard discussed yet today, but which I think underlies so much of this, which is diversity of values and value conflict. And I'm curious, because it seemed to me that your stories took different perspectives. Vandana, you're pretty clear about where you stand, and you make the narrative serve your value perspective. Whereas Karl, it seemed to me you're willing to be more agnostic and say something about the importance of values emerging from democratic discourse that's truly democratic. So I wonder if you could each talk about how you see your own values vis-à-vis these complex situations that you're trying to elucidate, and the role of the story in addressing value disputes. And then that'll lead to my question to Dave, so you'll just have to be in suspense.
Well, by deliberately choosing the Haida as the context, I was choosing a people I don't belong to; I am not Haida. And therefore I deliberately took off the table the possibility that I could be representing this group, in order to focus on what was on the table, which was the method or methodologies involved. So the story is not about the advocacy of a particular group; it's about advocacy of certain methodologies. For instance, in the story there's a website called wegetit.com. I wanted it to be iagree.com, but I think that's already taken. But it's a website where there are discussion forums, but the only action that you can take is to agree. You can either agree or drop out of the discussion. And this is in deliberate contrast to the way the internet forum structure is set up right now, which is apparently to foster disagreement. If you look at the way the internet acts, it often acts as a saint, but if you look at the way the internet talks, it talks like a psychopath. Discussion forums are fundamentally broken in that they are disagreement generators. So one of the things I wanted to ask is, OK, could we just tweak that? And one of the things I say in the story as a whole is that there will be no Facebook for politics, because politics is too wicked and complex a problem and too multifaceted, but you can improve small things. And in the story, hundreds of small things are improved simultaneously, which gives the effect of improving politics. But one of the things you can improve is just the way that people agree about the fundamental meaning of words when they're discussing things on the internet. Structured dialogic design, which is something I allude to, developed by Aleco Christakis and others, is a workshop tool that does this. One of the things I wondered was, could you take that workshop tool, which works efficiently for up to about 60 people in a room, and magnify it so that it'll work for a million?
And if you only solve one little problem like that, you can have a magnified effect on people's ability to get along. And that was really all that I was trying to say.

Well, that's saying a lot, given the current politics around these wicked problems. So, Vandana, can you address the question?

Yeah. I would like to distinguish between values and models. And I would also like to say that you probably shouldn't necessarily judge my values by my stories, because part of the wonderful fascination of writing fiction, writing science fiction, is that really anything can be set up to be interrogated, including one's most dearly held assumptions about the world. And one thing that I've learned by being a scientist, my background's in theoretical particle physics, is that ideologies are like models. They have limited usefulness. They have domains of validity. They are not deities. So I feel free to critique anything from capitalism to communism to whatever. In fact, I think we need new isms, because we are in a new age where we are recognizing the complexity and interconnectedness of the world. And so the old Newtonian paradigm of the clockwork universe is one of the things I was playing with in my story. In fact, any good fiction is really an attempt to look at a situation in the real world. So for instance, I could write a dry treatise about the psychology and the geopolitics of a certain area in India and about a family feud involving inheritance and property and so on and so forth. And I could do that as a sociology project or an anthropology project, but then I could also write the Mahabharata, which is a great epic. And so that's what fiction does. Because human beings and human systems are not simple systems, we are multivariable systems with very non-linear interactions, that's why we have literature.
That's why we have the arts: because they somehow condense the complexity of the problem into that one experience. And so when I look at values, I propose them as a thought experiment; it's more like I'm looking at a model or a paradigm, a way of looking at the world, and really anything is up to be challenged and interrogated. The Newtonian clockwork universe, which we know does not exist, is being questioned. And we are recognizing that the way we live, everything that we do, including how we recreate, how we enjoy ourselves, has a history behind it. Look at the Industrial Revolution: nuclear families did not really exist before the Industrial Revolution. So the Newtonian paradigm of the clockwork universe informs every aspect of our lives. And the great pity is that the universe is, in fact, not Newtonian. So what I like to imagine in my stories is: what if we had societies, or interactions between people, or technologies that were based on a non-Newtonian, more realistic paradigm of the universe? And one of the things we learn again and again in different contexts, whether you're doing quantum physics or complex systems theory, is connection. Things are connected. And so that was the main thing pushing me.

Of course, one of the similarities between ideologies and models is that once someone falls in love with one, they tend to stick with it for the rest of their lives. Newton's hard to kill off, isn't he? So, Dave, you're in the belly of the beast. These guys get to write their stories, but you're trying to actually influence policy and the narratives around policy. And a cliché is, of course, that one good anecdote is worth a thousand data points in Congress. But nonetheless, it seems like we haven't yet gotten very good at using narrative to influence the way we talk about these complex sociotechnical problems.
So can you talk a little bit about your experiences and where you think we need to go if we're going to actually have narrative do work for us in places like Washington, DC?

So I'll give you an example that kind of builds off the things that you were talking about in your story. About five years ago, I had this idea. One of the things that gets battled over in Washington is the budget. There is no policy, there's just a budget; that's the illusion we have, that the budget drives everything. So I had this idea: OK, what if I can get a million people to play with the budget? So we actually built a game, and it's built on a lot of the macroeconomic models that are used by the Congressional Budget Office, right, to inform the Congress. And we put it out there, and the thing that was quite amazing is that now there are over two million people that have played it. And you enter the game through a narrative. We ask you to take a value stand: I really want energy independence, I believe in a better social safety net, whatever it is. And now you build the budget, right? Go, right? And you've got access to 80, 100 policies, right? And of course, it's a big data machine. We had huge arguments about, OK, how do we protect the people who are putting data in? Do we let them share data about their demographics? Because there's a certain value we get on the back end. Anyway, the short story is, it goes viral. It's in thousands of schools across America. All of a sudden, people are playing with a relatively complicated system that nobody thought they could ever understand, right? We get 40,000 emails from people saying, this is the first time I ever understood what the alternative minimum tax was, right? So people get to play with all these things that they always wanted to do. Let's get rid of foreign aid, right? Just test your assumptions. I don't like the EPA? Cut the EPA's budget in half.
Of course, nothing happens in the big picture, but they get to play it. They can crash it, like a flight simulator. And we're collecting all the data on the back end. Now the wonderful thing that happens, right, is that if you collect a million data points, you actually begin to see that people start to act almost against their own self-interest. So one of the things we found was that older people were willing to raise the Social Security age threshold. The wealthier people were willing to pay more for prescription drugs, right? So all of a sudden, they were put in a situation where they had to make really severe trade-offs, right?

How did they get put in those situations? Was it scripted by narrative that then caused them to make choices?

Yeah, they had to make choices. So one of the things we're working on now is a global dynamic simulation where you play with the entire global energy system. You get to control how you produce energy, set a carbon tax. What would you do? What if you had millions of people playing this? We have another game we're working on about the Arctic, actually. What happens when the ice melts, right, and all of a sudden mineral rights open up, and fish rights, right? How are we going to manage that? Essentially there's a sort of soft governance system up there. So I think the thing that's interesting to me about these stories is that we have the capacity, technologically, to give these complex systems to millions of people.

But we can't give them to the appropriations subcommittee chairs.

No. And that's what we have to do. Well, we have to do that. But I think the interesting thing was that, as the data came back, we worked with American Public Media, and it allowed us to create an alternative narrative, right? It was different from the talking heads in Washington. So we actually had journalists going out to do interviews.
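The game mechanic described above, take a value stand and then reconcile it with budget arithmetic through severe trade-offs, can be sketched in a few lines. Every policy name and dollar figure below is invented for illustration; this is not Budget Hero's actual data or the CBO models it is built on, only the shape of the trade-off loop.

```python
# Hypothetical sketch of a Budget Hero-style trade-off mechanic.
# All names and numbers are made up; the real game is built on
# Congressional Budget Office macroeconomic models.

POLICIES = {  # policy -> change to the annual deficit, in $billions
    "raise_retirement_age": -120,
    "cut_foreign_aid": -40,
    "carbon_tax": -90,
    "expand_safety_net": +150,
    "halve_epa_budget": -4,
}

def resulting_deficit(choices, baseline=900):
    """Apply a set of policy choices to a baseline deficit ($billions)."""
    return baseline + sum(POLICIES[c] for c in choices)

# A player takes a value stand (a better safety net) and discovers
# that the stand alone worsens the deficit...
print(resulting_deficit({"expand_safety_net"}))  # -> 1050

# ...so balancing the budget forces trade-offs the player might
# otherwise oppose, like the older players who raised the
# retirement age.
print(resulting_deficit({"expand_safety_net",
                         "raise_retirement_age",
                         "carbon_tax"}))  # -> 840
```

The interesting data is exactly what the panel describes: which trade-offs players of different demographics are willing to accept once the arithmetic confronts their value stand.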
It's also different from all the interest groups that don't engage in these negotiations; they're in their own silos, right? And so the interesting thing was that this alternative narrative actually emerged from the people that were playing the game. The other thing is not so startling, but it's a little scary. I'll tell you: I went to a party and met somebody from the Congressional Budget Office. I told him, we developed this game. And they said, we play it all the time. So I said, why do you guys play it? I mean, it's based on your models. He said, but we never see the whole picture.

Well, selfishly, I could continue to probe, but I think we should see if the people have questions. Let's start in the back.

What's that game called? Budget Hero. Budget Hero, yeah.

You probably heard that in Europe, we launched in 2000 a big program on the knowledge economy and society, to be the most vibrant one in the world. And today, we don't talk about it anymore. We come to America to learn something. My question is the following. I'll give you a real story from this summer in Switzerland. You know about autism; there will be a march this coming Sunday. Until a recent study, everybody thought that autism was due 80% to genetics, and that engaged a lot of resources. And now a later study has found it is less than 50%. And this is almost a paradigm shift, according to the experts. Now, we have so much information that mobilizes resources, financial resources, political resources, as in the case of autism, and we don't know how to handle it. How would you address this issue? We try to address it by conducting more experiments, getting people more involved, and so on. But how could we correct for this? Because we multiply those mistakes now. Autism is a huge social cost, a human cost, at least in Switzerland. I have my grandson. So how would you address this issue? We hear it in many other cases, but everything related to the human condition is costly. How do we handle this?
So is the bigger question here: how do you use narrative to change the framings around complex issues that are rooted in bad ideas? Is that the basic idea?

Yeah. There are, again, ways of doing this; I'm blanking on a particular one. Morphological analysis, for instance: you can use that to explore very complex solution spaces for complex problems. And I'm sure that Europe does that. But with tools like Budget Hero, and the simulations in my story, you can let people literally play through the possibilities. You don't assume that the current studies on autism are correct, but you have to do a risk analysis and say, right now, right here, on the basis of what we know, we have to make an investment. It might turn out later that you were wrong, and that the science was wrong. But you have no other option than to do that. Sometimes you will just have gone down a blind alley. But if you went down that blind alley using all the correct steps, basically, you're blameless in doing that. Sometimes the real world cannot be anticipated. But there is a space, of course, for evidence-based policy, and that's a different conversation. My other hobby horse right now is getting away from ideology to evidence-based policy. And I won't fall down that rabbit hole right now, but it's a big one.

When you were doing your introduction, you mentioned synthetic biology and nanotechnology, and you put them in the context of technologies, futures, magic that lend more to centralization than decentralization. But those are very active hacker spaces, both of them. And in your story in particular, I get the feeling that there was a hacking of government going on. I mean, I'm not faulting the tools, but this was a serious attempt to hack the governmental body. And I'm curious: how do you provide a solution when people are out there making their own solutions at the same time?
There is no central solution to a lot of these problems, because they have many solutions that are being attacked simultaneously by small groups, or sometimes not-so-small groups. I'm not necessarily an advocate of the situation that I'm describing in the story, first of all, because what I'm describing is government becoming more of a marketplace, where there are literally alternative governments that you can choose to ally yourself with. And I am a believer in the necessity of an ongoing state apparatus, but what I was saying basically was that a negotiation and a conversation is going to be forced upon government by technology and technologists, and they're going to have to be ready for it.

We work with biohackers; these are people that have more and more capacity. In fact, I have the capacity in my office now to swab DNA, amplify it, and analyze it for $700. So one of the issues is, how do you actually allow these people to innovate? Because there could be a lot of benefit there. But how do you allow them to do that in a safe way, in a responsible way? So one of the programs we came up with is called Ask a Biosafety Officer. Basically, we went to the universities where they have biosafety officers, and we said, would you be willing to volunteer time to help these folks? And we set up an anonymized website so that the DIY bio folks can actually pose a question. And that's routed to the various universities, to the professionals. So it's an attempt to take this world of government, which is hierarchically controlled, where we know the rules, we know the regs, closed IP, and interface it with a world that's network-based, where there's open sharing, there's a gift economy; it's a completely different world. So the questions come in, they're farmed out, the answers come back, and there's a whole archive of answers.
The problem we had was that we tried for eight months to get this launched and nobody would insure it. It was an interesting question about how you operate in that space, right? We ended up having to go to Lloyd's of London, and they had to figure out how to insure the people at the universities, because the universities were not going to insure these folks if they were talking to garage biologists. It cost us five grand a year to insure the people at the universities. But that was kind of a wicked problem, because it wasn't obvious how we were actually going to get this whole system to work, since it was built outside of the system, right? I'm glad Lloyd's exists, for that reason. Let's take the next question. Yes, this question is for Dave. Since you raised the issue of future studies: science and technology are dominant forces in society, and research and innovation are inherently future-oriented, yet how many PhD programs do we have in future studies? I posit very few, because when I got interested in future studies, I could only find one graduate program, at the University of Houston, led by Peter Bishop and Andy Hines; a PhD program at the University of Hawaii; one in Australia, led by Richard Slaughter; and now one in Canada, in Toronto. Why don't we have PhD programs in future studies? Because of the National Academy of Sciences and similar organizations, including the universities themselves, that create a standard for what counts. And I don't know what kind of fight they had in Toronto. It's a design program in a design school; that was how it happened. I can tell you that we're fighting similar battles at a university that is very open to innovation. The problem is really among the faculty, not the administration. I have a hard time getting a committee together, because I want to pursue future studies as an inquiry and they don't want to touch it.
Yeah, this is a significant obstacle to taking this discussion to the next step, in terms of how you mainstream it and what we consider to be legitimate, valid ways of expressing ourselves intellectually in the academy. In 1977, the National Science Foundation had a program called RANN, Research Applied to National Needs. And the RANN program actually did a large report on trying to think about how you would create future studies as a discipline. Because that was the issue: it never had any scientific legitimacy. So there was an attempt in the '70s at NSF to see whether they could raise it up to the point where it became recognized as a discipline, which gets you over some of the resistance in universities, but not completely. And tell everyone what happened to RANN. It's gone. Well, no, I think it's still on the books. It might be in the law. Part of the problem with that era was that we all thought that future studies should be about prediction, and that was kind of the killer. Now what you have is schools and practitioners who are focusing on the issue of uncertainty itself. So rather than a futurist, I've been thinking of calling myself an ambiguist. Well, and this gets to Vandana's point about the difficulty of getting past the idea of a Newtonian, Cartesian world. It is not about the deterministic path to a future; it is about exploring the spaces. And thank goodness we have wonderful narrative writers to help us do it, which would be a great last thing to say, I suppose. Do we have any time left? So, do you have a quick question? The mention of Lloyd's of London triggered a memory: back in the 1980s, Lloyd's of London was doing the only long-term research into climate change, because of the insurance risk.
So I wondered: what other unusual organizations have you come across that are doing similar long-term planning and strategy, given that our government agencies were having such difficulty with it a few years ago? The insurance folks are great, especially the reinsurance folks, because they usually pick up the dregs that nobody else will insure. When nanotech started out, it was Swiss Re, Munich Re, Allianz, Lloyd's, because they couldn't monetize the risks, and that makes them very, very uncomfortable. And obviously they've done the same with climate change. So I've done a lot of work with the emerging risks unit at Lloyd's; that's their only job, and their job is to inform the entire insurance industry. But there's not a lot of that. One of the things, when I spent six years in the White House: I had this vision that I would walk down the hall and there would be an office, with a plaque on the wall, that said The U.S. Department of Unintended Consequences. We fail, so you don't have to. How many people would it take just to think? I'm talking about our government as a whole. Could we afford a dozen people just to think through the potential unintended consequences of our policies? No, we can't do that. But that's rare. So I want to indulge myself with one last question for the authors, which is: I had a sense in both stories of perhaps a bit of regret that the world is so complex. And I was particularly taken by the fact that you both had central characters who were indigenous, which obviously symbolizes, to a non-indigenous reader, all sorts of things about simplicity and harmony. So I'm curious whether you have each fully embraced, spiritually embraced, the notion of complexity and indeterminacy, or if there's really a romantic core in your ambition that wishes we could get back to a simpler world. No.
No, no, I really, as I say, I like being an ambiguist. I like the complexity of all this stuff. And complexity does not necessarily mean chaos. I mean, the human brain and the human body are incredibly complex, but they work. Lately I've been exploring cybernetics, which is something that people won't study anymore, but it's all about homeostasis in complex systems. And homeostasis, again, is a word that just doesn't get used or studied anymore, and it's a very important concept for us right now. And it's quite compatible with complexity. Vandana, would you like the last word here? Similarly, I think I actually rejoice in complexity, because complex systems are also systems of great hope. And I'm going to end with what a student of mine once told me. This was when I was first learning how to teach climate change, and when I started to do it reasonably well, I realized that all my students would get depressed. But then I added a study of complexity to my story about climate, to explain some of the whys and wherefores. And I remember one student looking unusually chipper after a series of lectures, and I asked her why. She said that she has hope because, after all, the global climate system is complex, but so are human societies, as I'd been saying. What she had realized was that just as complex physical systems have tipping points, so do sociological systems. So she realized that although she was just one little cog in a machine, if even a small shift in the right parameters can change human behavior, then why not be hopeful? Because she still had a role that she could usefully play in making the future that is to come. So I just want to end with that story. Thank you. We'll end with that. Thank you.