The fact that we don't see things eye to eye, the fact that you interpret the world differently than I do, is not a drawback; it's actually a feature of our world. And the only way we're going to solve our global problems together is if we can accept each other's frames and try to find, in good faith, a way to accommodate them, so that we can solve our problems and integrate them for better decision making.

This is Rob Johnson, President of the Institute for New Economic Thinking. I'm here today with Kenneth Cukier to discuss his new book, Framers: Human Advantage in an Age of Technology and Turmoil. Thanks for joining me today, Ken.

Absolutely, great to be here.

Ken, how would I say it? You're very familiar with what we might call the economics terrain. I know you're an editor at The Economist and involved with Chatham House and various other institutions. So I really look forward to exploring this relationship between AI and human beings in the context of economic challenges. I think our listeners are going to have a very nice experience here. Ken, I'm quite curious. You've got the new book. How does it relate to Big Data? And more importantly, at the essence, what inspired you to write this, on planet Earth, at this time?

That's such a good question. And you're right to point to the earlier book, Big Data. It wouldn't be so obvious, I think, for a lot of people to realize that here there's a book about cognitive psychology, a book about liberalism, a book about business and business strategy, and that it relates to an earlier book about data. But it certainly does.
And the reason why is that when Big Data came out in 2013, it hit the New York Times bestseller list, which was brilliant, and we were very honored that it did, and it made a nice contribution to how people thought about machine learning and artificial intelligence, because, of course, big data is sort of shorthand in the Valley for machine learning, and it certainly was at the time. We were branded the evangelists of AI and of big data, the cheerleaders of it. And I willingly accepted that title, because I felt there was so much about artificial intelligence to cheer. I still do think that way. But we were unfairly tarnished as well, because people said, well, these guys are just talking about correlations and giving up on causality; they have this view that you can just trust the data and that's enough. But of course, you need a model for data, and that model is really important; you can't just trust correlations and give up on causality. And we met them halfway, if you will. We do feel we're in a world, and machine learning in particular validates this, where the correlation often is good enough, although causality is still the gold standard. But be that as it may, we always presumed that the data would exist within a model. We never thought it wouldn't, and people slung arrows at us in that way. So, to fast forward: we were listening to the criticisms, but at the same time, we were watching artificial intelligence take off like a rocket, with improvements faster than anyone had thought possible, and adoption quicker and more common than even we, the optimists, had expected. And at the same time, we saw populism and creeping authoritarianism, and even a winnowing of the public sphere because of cancel culture.
And we were nervous about both of these trends melding together: AI being endowed with too much authority, too much power, and the human being diminished. And we said, Viktor and Francis and I, no, no, no. Before there's a model, there's a mental model. So we need to focus not on what humans do poorly, like the cognitive biases Daniel Kahneman made famous; we need to focus on what humans do really, really well, which is that they generate mental models, they apply mental models, they think of the world with a simulation in mind. And by doing so, if they're good at it, they can actually change the world so it bends toward their will. We should celebrate this, and we need to get better at it if we're going to tackle our biggest challenges. So from that seedling of Big Data and its success, we focused our minds on taking the story to its next increment, which isn't, if you will, AI. We take as given that AI will be as transformative and positive as we believe it to be. But we say, no, no, no, the point is not to focus on the AI. The point is to double down on human beings and get better at that.

Yeah, well, you know, I'll be a little bit technical here, but many of the philosophers I talk to about economics, the history of economic thought, and dynamics talk about epistemological versus ontological uncertainty. Ontological means there are unknown unknowns, both in the realm of outcomes and in the realm of probability assignment. Whereas for the epistemologist, to use a silly metaphor, God can see the model, but it's just too complex, and all we need is more computing power, and then it will come into focus. And my sense is there's some ontological uncertainty present. You know, we have gone through this over the years in macroeconomics; rational expectations somehow pretends that past is prologue. I have a lot of friends in the insurance industry.
And they say climate change is scary because we don't have actuarial tables for what this looks like. So we've got to evolve the model by learning.

That's exactly right. Let me pick up on that, because it's just so essential. People in the climate change debate look at the data and say, hey, climate change is happening. Look, the temperature is rising. But then there are other people, often statisticians who actually know something about data, the naysayers, the climate change deniers, who look at it and say the data just doesn't do what you say it does; just because it's going up doesn't mean it's not going to go down. Their model is different. So how do you square these two things? And the dirty little secret in the business is that the pure statistician's point isn't incorrect. It is true that if you simply look at rising temperatures as your data point, that does not actually tell you anything about causality, and it doesn't tell you that humans were responsible. What you need is a model. And what you need, in particular, is a counterfactual. So in our book, we open a chapter on counterfactuals by talking about two incredible women, and it's relevant, interestingly, that it should be women. Eunice Foote, who has been largely forgotten by history, was the scientist who, before John Tyndall of the Royal Society, made the causal link between carbonic acid gas, carbon dioxide, and rising atmospheric temperatures: when the sun's rays hit it, it gets hotter than ordinary air and takes a longer time to cool down.
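The counterfactual logic invoked here can be sketched as a toy calculation: run the same model twice, once with a human forcing term and once without, and read the difference as the human contribution. Every coefficient below is an invented placeholder for illustration; this is in no way a real climate model.

```python
# Toy counterfactual: temperature with and without a human forcing term.
# All numbers are illustrative placeholders, not climate parameters.

def temperature(year, human_forcing):
    baseline = 13.8                                   # illustrative mean, deg C
    natural_wobble = 0.1 * ((year % 60) / 60 - 0.5)   # toy natural variability
    anthropogenic = 0.008 * (year - 1850) if human_forcing else 0.0
    return baseline + natural_wobble + anthropogenic

year = 2020
factual = temperature(year, human_forcing=True)          # the world we observe
counterfactual = temperature(year, human_forcing=False)  # Earth without humans

# The gap between the factual and counterfactual runs is the estimated
# human contribution: the quantity the raw temperature series alone
# cannot give you, because we only ever observe one Earth.
human_contribution = factual - counterfactual
print(f"toy human contribution: {human_contribution:.2f} deg C")
```

Note that the natural-variability term cancels in the difference; that cancellation is the whole point of paired factual and counterfactual runs.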
So there's a great win there: you can actually see the mechanism, that if you have European factories belching out smoke from the dark satanic mills, then yes, if you increase the carbonic acid content of atmospheric air, the Earth is going to heat up at some point. I should give a date for that: the 1850s. Okay, so then the baton gets passed in the 1980s to Inez Fung at Berkeley, previously at MIT, who was working for NASA under Jim Hansen, the fellow who gave the famous testimony before Congress that basically said there is such a thing as climate change, and every scenario, from middling to dire, is bad; all of the scenarios are bad. What she did was apply the counterfactual: if you have an Earth without humans, and an Earth with humans, what is the difference in the rate of atmospheric temperature change? And when you do it that way, of course, you cannot run the experiment, because there's only one Earth; you need to run a counterfactual model, you need to create a simulation of it. And that idea of the counterfactual is what's so important. So one of the features of a mental model, or a frame, as we call it in the book, is not only causality; it's also counterfactuals, and it's also constraints. But here we can focus on just the role of counterfactuals. That's how you get it: you need the mental model to understand climate change and to identify humans as responsible for it.

Yeah, when I used to work as a financial investor, I often would teach or give lectures at schools. And I had a metaphor that I used: the only thing you can observe is the price. But behind it there is a pendulum that swings back and forth, a ball, and there's a fulcrum. The only thing you see is the ball.
And as an investor, you've got to figure out: is this mean reverting, meaning when the pendulum swings out, it will return to the balance point? Or is the fulcrum drifting from structural change? So when you see it out there, if you take what you might call the other side, betting on a return to balance, you're going to get rolled over. You had to get into the structural dynamics, the instability of models, in order to draw proper inference about what's taking place. I was just trying to make a simple metaphor that people's minds could latch onto for the dimensionalities in play. You've taken it to a much more sophisticated place than I would be able to, both as a seer and as a writer. But I thought that kind of window creates a bridge into the kind of things you're working to illuminate.

Without a doubt. I mean, that's of course behind INET in general, as an institution: it understands that the world is more complex than we think, more adaptive than we think, that there's an inherent instability we underestimate. And we make this point in the book as well. In fact, one of the more delightful vignettes we point to is the work of Andrew Lo of MIT, the economist who is trying to reframe economics away from the mental model of 19th-century physics, a world of fluid dynamics and equilibrium, toward a world of biology, which focuses on a mental model of evolution, growth, and change. And when you do that, suddenly what it means is that as economists or financiers, or even as business people, agents acting in the world, we're not looking at an entity with static rules that we're trying to divine and then apply; the actual environment we're living in is itself unstable, itself changing and dynamic.
And therefore everything we learn, and every action we take, changes the underlying environment and the state it's in, and we need to learn and change again. Now, if you take that mental model of not stock but flow, so to speak, when it comes to thinking about the economy, and then apply it to business, you can see one reason why some businesses do incredibly well and others less well: the successful ones understand, and have a mental model of, the inherent change of the environment they're operating in, where there are constant cycles of turnaround, and what they did yesterday has not zero, but almost zero, validity for today. A lot of companies that don't operate at that speed, on that clock, if you will, are going to fail; even if they try, they don't make the right decisions. Now, places like Amazon and others that can collect data and have a data advantage are ostensibly better positioned to learn faster, and can also, hopefully, build the right sort of business environment, so that when they make a mistake they can respond and rebound more quickly, course correct, shape-shift, meet the market where it is, and succeed. But it's not a given, because before it's a strategy, it has to be a mental model, and it's hard for that to be something you simply read about in a book and learn. It's got to be in your DNA. It's got to be baked in rather than sprinkled on top.

What I find fascinating, as I was reading through your book, and I want to come back to the bridge to the Big Data book, is a sense I experience that people sometimes think: everybody's anxious, the world's very uncertain, so big data is the thing that can make us all feel calmer. Or the economist who pretends to be able to see the future even though he can't, which we might call a demagogue, is reassuring until his false projections are unmasked.
What I'm getting at, and your book really gets into this, is that emotion is very present here. The yearning for certainty can, what you might call, make you susceptible to mirages. So I'm looking at the book and I remember, yeah, I read this: will.i.am, the musician and entrepreneur, gave you an endorsement. He said, "A great book filled with fresh perspective to help us during the rise of AI so we can usher in the age of humanity." So I'm back at the inspiration, out of big data and into this realm: the role of radical uncertainty and the role of emotion.

Exactly. So it is strange. What is he talking about? Why is will.i.am endorsing our book, and why is he talking about the age of humanity and the age of AI? The reason is that we start from AI and take as a given that AI will be as transformative and as positive and beneficial as we want it to be. So we're the optimists of AI, not the naysayers. However, we believe that's not where the focus needs to be, because AI, although it can do great things for us, and we hope it does, can't do something very fundamental that humans can do: frame. That is to say, generate mental models, apply mental models, and invent new mental models when the old ones don't work. And the reason why is that the very components that make a frame useful are the very things artificial intelligence cannot do. In particular, causality. AI has no understanding of it. Humans are extremely good at it. In fact, you might even say the flaw of humans is that we're so good at it, we even see causality where it doesn't really exist.

Yeah, we project it onto the scene.

But that's not such a bad thing in and of itself. And the reason why is that it presumes the world is an understandable place, a predictable place, and a repeatable place.
And with our intellect, we can understand some of these features, take a causal template of how the world works, and apply it to other circumstances. So we can make abstractions based on that causality. And of course, artificial intelligence cannot do that. In fact, make one small adjustment, to take an example: if you were to play chess and take away some squares so they just couldn't be played, the computer would fall down completely, where the human being would simply adapt. The reason is that we have a frame of the game of chess, and with that model we can easily adapt it, change it, make small side cuts to it, and still apply it. Another way of thinking about a causal frame: I see butter melting on a stove, and I can now tell you something about what might happen if I put zinc in a furnace. Artificial intelligence simply cannot do that, because it cannot make abstractions; it can't generalize, it can't take that representation and apply it to something new. It has to relearn it all, like an animal has to relearn everything from scratch.

The second thing is counterfactuals. A counterfactual is a what-if question. It's not the world that is; it's the world that could be. The whole point of artificial intelligence, with the best technique we have, deep learning, but also others like reinforcement learning, is that it learns from a large body of data. It needs gargantuan amounts of data to learn, to overcome the fact that it can't render abstractions. The point about human beings is that where we don't have the information, we invent it, because we can use our counterfactual thinking to imagine a world that isn't, to come up with data we don't have, or experiences we haven't observed, and make decisions based on them. So for example, how do you go to the moon? How do you relight an engine in the middle of space, in which there's no atmosphere?
There's no oxygen? Well, we did that not because we'd ever done it before. We could do experiments on Earth, but the point is that before we experiment on Earth, we come up with a mental model, and then we render it. And the third is constraints. We impose meaningful constraints, the right constraints for the time and the given circumstances. We're not great at it, but we're pretty good at it. And because of that, we can do things well. We as humanity sometimes flounder, but often we surpass ourselves and do great things.

Can a computer give you counterfactuals and constraints?

Oh, yes, it can. It can give you half a trillion of them in 30 seconds. But the point is it can't give you the meaningful ones in time. So for that reason, although we're optimistic about artificial intelligence, we are also elevating this human capability of framing. And this is coming at a time when, on one side, we have what we call the hyperrationalists, the people in Silicon Valley and elsewhere, who say human beings have such problems with their decision making because of their cognitive biases, and because the data is biased. Well, let me put data bias aside for a moment. Because human decision making is biased and has limitations, they say, what we need to do is hand off some of these decisions to the machine to make them fairer, to make them better: for example, loan applications that don't rely on a white loan officer judging a Black applicant, but simply look at the data. Now, there are ways in which that's the right answer, but in extremis it's the wrong answer, because you want human beings who bring mental models to it. On the other side are the emotionalists, as you said, and the emotionalists are the populists.
They're almost like Rousseauian men who don't want reflection, rationality, Cartesian facts and logic, because it's inauthentic. What they want is the soul's expression of itself in the face of the universe. And this is the world of Bolsonaro and Trump, and maybe Boris Johnson in Britain, if you're going to be ungenerous, as one should be: the world in which the instincts of one's humanity are the legitimacy one needs to make decisions. So you can shake people's hands at a COVID hospital, and lo and behold, you're on a ventilator three weeks later. So we wanted something that would sit in the middle of that and say: the hyperrationalists don't have the answer; it's not a world of ice-cold algorithms, nor should it be. The emotionalists don't have the answer; we shouldn't rely on populist simplifications of a complex world. Instead, what we need to do is understand our unique ability as human beings to become good framers, to get better at working within a frame, or at reframing when we need to, in order to solve our problems.

Yeah, you know, in my own life, after finance, I worked in finance and politics, and then I worked in the music and film realm for a while. And a lot of my friends from that realm who come and watch things at INET say to me, what are you doing with all these experts? They're emotionless. And I say, well, you know, both left and right brains are necessary, or whatever analogies you like. But they come back at me. And it's not that they think the right brain is superior because it's heartfelt. It's that they think experts are not doing analysis; they're doing marketing for power. And when they're doing marketing for power, they become cold-hearted and personally ambitious, and don't provide for the public good.
I hear this over and over from people who work particularly in film: that those I might call the heart-minded, what you call the emotionalists, are anchored to a more loving process, if you will, and that cold analysis is subject to corruption. I think that's too simplistic. And you've kind of created both the yin and the yang here, on both sides of this. But it is what I hear in criticism of economics.

I don't totally disagree with it. I think part of it is the message, part of it is the ground truth, part of it is the messaging. Let me take a step back and speak from personal experience. When TED came around, I was an early sort of Tedster, an early adherent of TED, but not in the way a lot of people would have liked, because I was angry at it; I found it repugnant. Now, there was a TED before Chris Anderson, in the 90s, a small sect of interested people doing extraordinary things, who paid a lot of money to attend. Then it got sold, of course, to Chris Anderson, and they put their videos online. And I really disliked it at the outset, because it was sort of pop academia. It was taking what I thought at the time were some of the most scintillating minds on planet Earth and forcing them to speak for 15 minutes about their expertise, in a way that I thought did violence to it, because it was such a simplification. The talks themselves were actually pretty good. But what I really disliked was the chatter afterwards, amid the coffees, by people who thought they knew all about particle physics because they had heard a 15-minute TED talk, and who felt they had license to challenge the world's foremost physicist about his ideas, because they clearly understood the alpha and omega of it, having sat for 15 minutes uninterrupted, not texting on their phones. But I've gone 180 degrees.
I actually think they've done a brilliant job. I think there is great value in the crystallization of ideas by great minds. Absolutely. But the second thing they've done, which is just as important, maybe more important, is that at their best they meld the analytical left hemisphere with the creative, emotional right hemisphere. And suddenly, I think, the ideas stick better. It's a great way of communicating. In fact, it's so important that one of the things I do at The Economist is run, in effect, the op-ed page, if you will, something called By Invitation. And I get our contributors not simply to write their ideas, though the ideas are big ideas. I say, tell me: why is it that you're writing this? What's your story in this? Give me your credibility. I want to hear, in the first person, what you have done. And I also want to find the little emotional levers, because The Economist is so analytical that we're accused of being dry. I don't think we're dry whatsoever; I think we're scintillating. However, we're accused of being dry because we are so rational and analytical, so I want to use this technique of bringing in emotion, finding a balance, and getting these ideas to stick. And if we can do that successfully, we can have more of an impact in the world. And it's about having an impact. So I agree with those people who criticize economics, and even some of the economists, for what Julien Benda in the 1920s called la trahison des clercs, the treason of the clerks, the treason of the scholars: those who have given up on their integrity. In his case, it was about playing a role in policy and leading the world of ideas.
But if Julien Benda were alive today, he would point to the scholars giving talks at Goldman Sachs for six-figure speaking fees, giving up on their integrity, on influencing the world while remaining unblemished by lucre.

The rise of Trump was in part because you could look at the governing class, the elites, for lack of a better term, in American politics, and he was able to say, hey, they're all on the take. And they didn't have a response to that.

I think that's too bad, because I've got nothing against giving talks to investment banks; in fact, I think the world's a better place if you have these porous boundaries where ideas go from academia to investors and vice versa. In fact, the point of the book, ultimately, is reinterpreting liberalism through the lens of cognitive science and arriving at pluralism, a cognitive pluralism. And pluralism does not mean that we all agree on something, or that we're all open-minded. It's that we allow differences and different ideas to clash, and in that tension, we can channel it productively to arrive at a better place.

Moving toward the latter part of your book: as I was reading the last chapter, I was reminded of a very human episode. Some people I know well were consulting with a Silicon Valley firm about its workforce: racial diversity, gender diversity, and what have you. At the time, this firm was creating AI algorithms to monitor people and detect who might become a criminal. So it was what you might call an early warning system, designed to protect society by catching, ahead of the curve, who should be watched. Like in the old Westerns: head them off at the pass, so the crime never gets committed, the injury is never incurred. But what happened was that this wasn't a human, improvisational thing. This was a database created by white people.
And in the algorithm were all kinds of probably unconscious triggers, not malicious intent, but unconscious. When the algorithm was tried, it encouraged law enforcement agencies to essentially hound and track Black people. And what was interesting was that my friends working on this could see the demoralization of the Black employees, who were considering leaving and were very vocal to these consultants. And the response was to have those people join the design team, to inject that broader sensitivity and humanity. So, and here I'm trying to relate this to your book, you're not giving up the value of AI, but you're humanizing the AI in a way that makes it serve mankind better. I thought it was a fantastic experience. And your last chapter, I think it's called "Vigilance"? Jump off from there and tell me a little bit about your recommendation to all of us, at the pinnacle of this book, in that realm.

Yeah. So that's a beautiful story you shared. And I think it also shows a lot of wise management, and a lot of patience and decency among the employees, because they could have left. They could have petitioned management.

Some did.

Yeah. To discontinue doing this. I think both of those answers would have been insufficient, although I can appreciate why people would embrace them. Far better is to engage and say: OK, this is a problem. How can we fix it? What would the solution be? What role can I play? And as you pointed out, bringing them onto the design team is exactly right. It brings their frame, their diverse way of looking at the world, into the product design and therefore into the outcome driven from it. Because of course the question is: what was wrong here? And we should be very precise in distilling where the problem is. Was the problem that we were trying to apply an algorithm to make a prediction about crime at all?
Maybe not, because of course we really do want to use algorithms to predict things like cancer, just as Amazon uses algorithms to predict what we're going to purchase. It's a lot easier for us to find new books or listen to music because we've got these predictive algorithms. So that's not the problem. Was it the algorithm itself? You might say, oh, there was a bias in the algorithm. But I think that was shorthand. If you thought about it and rewound, you'd say, well, that's not really the problem per se. The algorithm wasn't biased. The algorithm is just the algorithm; it's simply the mathematical representation of a formula, one that may itself be adjusted based on the data, as these algorithms are. But it's not the fault of the algorithm. The algorithm is fine. The flaw is in the model, and the model the algorithm generated was flawed because of something else: the underlying data. Data has an information quotient to it. Data is a representation of something; it's informational. It is always a mirror of the ground truth, not the thing itself. It is an abstraction of the thing, in the same way that the map is not the territory. So the information quotient of the data can be wrong, and therefore biased, because there's an implicit bias in society. For example, it matters whether you trained it on arrest records, or bookings, or, better yet, convictions; these would be completely, wildly different. A conviction doesn't just say that somebody did a crime; it says that somebody did a crime, got caught, went through the judicial system, and got a sentence. There are a lot of links in the chain of causality there.
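The mechanism described here, biased labels producing a biased model, can be made concrete with a deliberately crude sketch: two groups offend at the same true rate, but one group's offenses get recorded far more often, and even the simplest possible "model" inherits the skew. Every number and name below is invented for illustration; this is a sketch of the mechanism, not any firm's actual system.

```python
import random

random.seed(0)

# Toy population: the true offense rate is identical in groups A and B,
# but group B is policed more heavily, so B's offenses are recorded
# (labelled) far more often. The bias lives in the labels, not the people.
TRUE_RATE = 0.10
CATCH_RATE = {"A": 0.3, "B": 0.9}   # invented enforcement bias

def make_record(group):
    offended = random.random() < TRUE_RATE
    labelled = offended and random.random() < CATCH_RATE[group]
    return group, labelled

records = [make_record(g) for g in ("A", "B") for _ in range(10_000)]

# The simplest possible "model": learn the labelled-crime frequency per
# group. Even this faithfully reproduces the enforcement bias.
def learned_risk(group):
    labels = [lab for g, lab in records if g == group]
    return sum(labels) / len(labels)

risk_a, risk_b = learned_risk("A"), learned_risk("B")
print(f"learned risk, group A: {risk_a:.3f}  group B: {risk_b:.3f}")
# Group B comes out roughly three times "riskier" despite identical
# true behavior, purely because of how the data was generated.
```

Swapping in a sophisticated classifier would not help; it would learn the same skewed label frequencies, just with more machinery.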
Because, of course, if you have a really good defense lawyer, or you're the sort of person who, for whatever reason, can talk your way out of being brought to the station in the first instance, you're never going to get to that last stage. People who don't have a lot of resources, and who might have the wrong color skin, are going to get convicted, as we know from the data, at a much higher rate than other classes; if you're wealthy or if you're white, you often wouldn't get to that point. So the point is that the data itself was biased in some way, and therefore, when it generated a model, the model recreated the bias present in society. So what do we do about that? The idea of a predictive algorithm that could identify where crime is might be a really valuable tool when police forces are stretched and need to focus their resources where they're best put to keep our communities safe. That's something we could all sign up for; everyone has an interest in public safety, as long as it's done well, under the rule of law and the constraints placed on our guardians and our community. It's a never-ending battle. And the idea of bringing those people in is really important. So where we end the book Framers is with the idea of vigilance: that we need to be masters of this technology, not its servants; that we need to embrace artificial intelligence, but we also need to embrace our humanity, impose our frames on it, and direct where it goes. At the same time, we need to live together and work together, but it's not simply about cooperation. That's the Yuval Harari story, in which we all need to cooperate, all get on the same page, and see things the same way. And that is not what we are saying. We don't need that homogeneity and uniformity of thought.
We lock arms as brothers and sisters of our shared human experience and march into our future, but let us accept our differences. Let us accept the fact that you frame things differently than I frame things, and let us allow this flourishing of a multiplicity of frames, provided that your frame does not invalidate or try to deny the existence of my frame. That is the red line we invoke, like Karl Popper's paradox of tolerance, that we cannot breach. But barring that, the fact that we don't see things eye to eye, the fact that you interpret the world differently than I do, is not a drawback. It's actually a feature of our world. And the only way we're going to solve our global problems together is if we can accept each other's frames and try to find, in good faith, a way to accommodate them, so that we can solve our problems and integrate them for better decision making.