Thank you, James, that's more than enough of an introduction. The science fiction writer William Gibson observed that the future is already here, it's just not evenly distributed, and much of what I want to describe today is already with us.

I will follow the argument that when we try to understand decisions, and when we try to provide decision support, our growing awareness of our cognitive limitations shows us how humans will increasingly be out of the loop, and that human-machine teaming might not be the answer to many of our problems. There are examples where psychology clearly shows that we would be better off with people off the team. Then I'll talk a little bit about how the application of AI, particularly in wargaming and simulation, is pointing the way to making recommendations that are otherworldly, that we don't understand, and yet that we can empirically show to be better than those decisions made by a human. I will ask the question: what do we do in a world where we don't have explainable AI, where we have recommendations that we know we should follow if we wish to be as effective as possible, but that concern us because we can't explain why we should make them? We are moving beyond narrative in our decision making. Finally, I will bring it back down to Earth and talk about the way I think the combination of psychology, big data and AI is already changing how we understand ourselves and how we make decisions.

First premise: my attempt to convince you that you are just data, that your biology and behaviour are reducible to maths. The Israeli historian Yuval Noah Harari has argued that we are all biochemical algorithms. We take an input, we process it computationally in the brain and nervous system, and it leads to an output. When you begin to gather that understanding, you can see how you could be manipulated by manipulating those inputs, or how the outputs could be recorded to draw inferences about human behaviour. It's a point made even more clearly by Peter Watson in his brilliant 2017 book Convergence. He talks about how the sciences are increasingly merging. We always understood, or almost always, that we couldn't understand physics without reference to maths. We came to understand that we couldn't understand chemistry without reference to physics and ultimately maths.
Eventually we got to a point where we understood that you couldn't understand biology without reference to chemistry, physics and ultimately maths. Today that's true in my field of psychology: we understand that you can't understand psychology without reference to, here we go again, biology, chemistry, physics and ultimately maths. That's true across economics, across medicine, across the human and life sciences. The sciences are slowly converging; indeed, the social sciences were invented specifically to bring quantitative methods, mathematical or statistical, principally statistical, to understanding human behaviour in fields where maths was typically alien, like history and geography and other fields.

You can begin to see how maths is at the centre of all of this, and in my own field of cognitive psychology that's precisely what we do. I would put a group of people like you in a lab; I might run half of the room through an experiment and the other half through the same experiment with a very slight difference in the input that I give you, in what you're asked to do in the task. Then I'll measure the outputs, I'll record everything that you've done, and then I'll use statistics to draw inferences about what went on computationally in the centre of Harari's biochemical algorithm, and use that to make forecasts about what people like you would do in similar situations. I'm reducing your biology, your behaviour, your decision making to maths.

Now that matters because we live today in a virtual panopticon: a world in which we are watched more closely than we ever have been before, a world in which there is more data available on us than there has ever been, a world which is like a continuous psychology experiment, where data scientists, psychologists and others are able to measure your behaviours in real time, to make predictions from the behaviours you've exhibited in the past to infer what you might do in the future, or to change the inputs offered to your decision making to see how that changes the outputs.

To get an idea of the volume of that data: it was said a few years ago that by 2020 there would be 5,200 gigabytes of data per person for everybody on the planet. So every individual in here, it was predicted, would have 5,200 gigabytes of data available on them by 2020. That equates very roughly to something like 18.5 million books. If you gave an intelligence officer like me or my colleagues 18.5 million books of data on a person and they couldn't draw some reasonable conclusions, some inferences, some insight and foresight about what you might do in the future, if they couldn't make useful predictions, they shouldn't be in the job. It's for this reason that the Harvard scholar Shoshana Zuboff says that soon technology will know us better than we know ourselves. That's particularly true because we are appalling guides to our own behaviour. The sociobiologist Robert Trivers argues in his book Deceit and Self-Deception that the biggest lies we tell in life are the ones we tell ourselves. That's one of the reasons why, in psychology, we're not always interested in what you say you'll do in the future, but in drawing inferences from things you didn't even know about yourself to make predictions about what you might do later on.
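As a quick aside, that books comparison is easy to sanity-check; a minimal sketch, assuming a plain-text book runs to a few hundred kilobytes (the 280 KB figure is my assumption, not one from the talk):

```python
# Rough sanity check of the figures above. The ~280 KB per book is an
# assumption (a typical plain-text novel), not a number from the talk.
GB = 10**9
data_per_person = 5_200 * GB          # bytes of data per person by 2020
avg_book = 280 * 10**3                # assumed bytes of text in one book

books = data_per_person / avg_book
print(f"about {books / 1e6:.1f} million books")   # about 18.6 million
```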
On the slide I have a cut from the Twitter feed of Dylan Curran, an activist, in which he describes all the things that Google knows about you. But of course it's not just Twitter, it's not just Google's data; it's vastly more than that. It's your credit card records, albeit anonymised, though we've shown how quickly individuals can be re-identified from anonymised credit card data. It's your insurance data, it's your supermarket loyalty card, it's where you go, it's what you do, it's who you talk to. It's said that if you have your smartphone with you, you'll have been geolocated some 23 times in the last minute, some 1,400 times in the next hour. That means we can begin to infer where you go, what you do, who you talk to, where you sleep, perhaps even who you sleep with. You can see the kind of unprecedented insight and foresight that offers, and what it might allow us to infer about your future behaviour from other people like you and what they have done.

Now, the first time this combination of psychology and big data really broke into the public consciousness was with the Cambridge Analytica scandal. I will rehearse briefly what that was, but what I won't do is get into the detail of whether it worked, or its political implications. There are alleged illegalities and all sorts in that campaign, and hence I don't want to get drawn into it. But I think we can draw some really interesting insights from this case study about how we make decisions and who we are as people. Fundamentally, what they did was apply science to understanding how we made decisions, what our preferences were, what we might think about a particular issue, and then personally tailor political marketing, or perhaps you might prefer propaganda, to micro-targeted audiences, having built a profile on roughly every single American in the United States based, in their case, on Facebook data. Of course it could have been done on vast arrays of other data, and even more if you relied on some of those data brokers I talked about.

Specifically, what Alexander Nix, whose picture is on the slide, did was draw on the work of Michal Kosinski, a psychologist who showed in one of his studies that with 10 Facebook likes you could derive a better personality assessment, built on psychology's OCEAN model, than if a co-worker filled out an abbreviated version of the survey about the individual. So you've got a more accurate impression of who a person is from 10 Facebook likes than from a co-worker completing that personality survey. If you had 150 likes, that assessment was more accurate than if you had given the survey to a friend or cohabitant to fill out on the individual. And if you had 300 likes, that OCEAN personality assessment, the most reliable measure we have in psychology for sketching out people's personalities, was more accurate than if you gave the survey to a spouse or partner to fill out. That's important because the OCEAN model has been shown to have high external validity, that is, high accuracy in predicting life outcomes, higher than if we were to ask individuals themselves; on Trivers' line, the biggest lies we tell in life are the ones we tell ourselves. The OCEAN model has been shown to be able to predict your propensity to fall into substance abuse, your political attitudes, and indeed your physical health.
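To give a flavour of how that kind of likes-to-personality model works, here is a minimal sketch in the spirit of those studies, with entirely invented stand-in data; it is a regularised linear regression over a like-matrix, not Kosinski's actual pipeline:

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Invented stand-in data: one row per person, one column per page they might like.
n_people, n_pages = 2_000, 500
likes = (rng.random((n_people, n_pages)) < 0.05).astype(float)   # sparse 0/1 likes

# Invented "ground truth": a survey-derived openness score to train against.
weights = rng.normal(0, 1, n_pages) * (rng.random(n_pages) < 0.1)
openness = likes @ weights + rng.normal(0, 1, n_people)

X_tr, X_te, y_tr, y_te = train_test_split(likes, openness, random_state=0)

# One regularised linear model per trait, fitted on the like-vectors.
model = RidgeCV(alphas=[0.1, 1.0, 10.0]).fit(X_tr, y_tr)
r = np.corrcoef(model.predict(X_te), y_te)[0, 1]
print(f"correlation with held-out survey scores: r = {r:.2f}")
```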
Indeed, Kosinski's work has sometimes shown that a personality assessment built from big data was more accurate than one the individual filled out themselves. And that was used at scale: the Trump campaign, for example, boasted of having rolled out, I think, some six million personally tailored, individually targeted ads in just the last few days of the campaign.

To get an idea of why that was so effective, I'm going to move away from the controversial political domain and into something that Netflix have published on their own blog. You all know that when you log in, Netflix, like Amazon, will say: hey, you watched these shows, therefore you might like these shows, right? You had these preferences; other people like you liked these shows, and so you might too. But it also does something much more sophisticated. It doesn't just suggest shows that people like you might watch. It personally tailors the image that you see to maximise the likelihood of you clicking on it and ultimately watching the show. Take the science fiction show Stranger Things. Netflix might work out that I'm the kind of person who likes, let's say, horror movies, and so it might present me with that picture in the bottom right-hand corner as you look at the screen. But perhaps it identifies you as somebody who's more interested in the social side, more likely to like buddy movies, and so it picks the centre image and presents that to you. Now, I don't know whether Netflix do psychometric modelling, whether they build personality models in the way that Cambridge Analytica did. I'd be very surprised if they don't, because ultimately this is the science of influence: building pictures of people to maximise the effectiveness of your product.

The other thing Netflix do which is illustrative is run very rapid A/B testing on this. For all the people like me who watch these shows and perhaps have this personality or psychometric profile, they say: okay, there's the image we're going to show Keith. But if I don't click on it, if I ignore it, and if lots of people like me aren't clicking on it, they change it in real time. And of course, with millions of users clicking all over the world all of the time, they're rapidly A/B testing, innovating, adapting, adjusting those images to maximise the effectiveness of, in their case, the marketing they're showing you. That is what Nix and others were aspiring to do with Cambridge Analytica: to run personally tailored, individual marketing campaigns, and to refine them continuously through A/B testing to maximise their effectiveness.

'So what?' you might say. Well, first of all, before we get too caught up in the illegalities and the concerns this might raise, we need to recognise that this is just the science of influence. If you wanted to deliver a marketing campaign, you always tried to understand your audience, to segment it into groups with particular preferences, to tailor your marketing to maximise its effectiveness. What we're seeing here is really just the scientific approach to that, drawing on modern, currently available tools and cutting-edge research. So the question is: what should we do about it? Should we regulate it? Should we ban it? Should we protect the data? There are huge ethical questions when we know people's intentions better, perhaps, than they know their own.
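Stepping back to the mechanics for a moment: the rapid A/B loop described above is essentially a multi-armed bandit problem. A minimal epsilon-greedy sketch, with invented artwork names and click rates; Netflix's published work describes more sophisticated, contextual versions of this idea:

```python
import random

# One show, several candidate artworks; track [clicks, impressions] for each.
arms = {"horror_still": [0, 0], "buddy_still": [0, 0], "title_card": [0, 0]}
EPSILON = 0.1   # fraction of impressions spent exploring alternatives

def choose_artwork():
    """Epsilon-greedy: usually show the best-performing image, sometimes explore."""
    if random.random() < EPSILON:
        return random.choice(list(arms))
    return max(arms, key=lambda a: arms[a][0] / max(arms[a][1], 1))

def record(arm, clicked):
    arms[arm][0] += int(clicked)
    arms[arm][1] += 1

# Simulated traffic: pretend the "buddy" image secretly converts best.
true_rates = {"horror_still": 0.05, "buddy_still": 0.12, "title_card": 0.08}
for _ in range(10_000):
    arm = choose_artwork()
    record(arm, random.random() < true_rates[arm])

print({a: f"{c}/{n}" for a, (c, n) in arms.items()})   # buddy_still should dominate
```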
How far should we go in using this? Particularly when we know that there are states out there doing very similar things to our populations, across NATO countries and other democracies, to try to influence the outcome of elections. It raises a second and related question that the US think tank RAND have been discussing over the last few years: how do we protect our citizens' cognitive security? How do we go about making sure that your decisions are as independent as they possibly can be, accepting that no decision is made free of external influence? And how do we do that whilst protecting individuals' freedom of speech and right to self-expression? There are no easy answers to any of these questions, but they are ones that those of us in the defence and military sector are having to ask ourselves, that people in politics are having to ask themselves, and that people in business increasingly are having to ask.

The second implication I want to draw is that this changes how we think about the science of influence, the science of persuasion. Dominic Cummings was the architect of the Vote Leave campaign in the UK. I spoke at a conference alongside him a few years ago to a similar audience, in that case about 200 marketing professionals. Most of them had unquestionably voted Remain, so Dominic Cummings was like the Antichrist on stage to them, and he did everything he could to play up to that image. He talked about how he hadn't won the Vote Leave campaign by employing, as he put it, 'bullshitting charlatans like you in the audience, with bad degrees in gender studies and English literature. We employed mathematicians and physicists to model human behaviour. We did it at unprecedented scale, at unprecedented speed and with unprecedented effectiveness, and the consequence is that all of you', he said, talking to the marketing professionals, 'will be out of a job.' Now, I guess for this conference that's good news; all of you might be in a job, perhaps working for individuals like this. For those of us involved in this kind of thing, though, it goes to show that understanding that humans are just data matters, and it begins to suggest that this is not going to be a sport for amateur enthusiasts. It's going to need people with deep technical qualifications, as well as people with the ethical understanding to continuously set boundaries around a science that we are building even as it is employed all around the world.

Now, I put up this deliberately overwhelming slide not to overwhelm you, but to illustrate a point. This is a whole raft of mostly psychological research showing what we can do with that combination of big data and psychology, perhaps enabled by AI. It's this kind of research that has led Air Marshal Turner and me to begin working on a pre-concept idea of cognitive manoeuvre: taking a small sample from the science of prediction and asking, what could we learn about people's behaviours? What could we do with this science? Let me give a couple of examples. In the 1980s there was a study done on leaders' language in the Middle East, and it showed that the integrative complexity of a leader's language, a really fancy way of saying how black and white the words they used were, was a good predictor of whether that country was likely to invade a neighbour. That research just sat on the shelf as a kind of interesting curiosity for years.
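Integrative complexity is traditionally scored by trained human coders, but a crude automated proxy gives the flavour of how such verbal tells might be picked up at scale. A toy sketch, with word lists and scoring invented purely for illustration, and far simpler than the real coding schemes:

```python
# Toy proxy for integrative complexity: differentiating language scores high,
# absolutist ("black and white") language scores low. Illustrative only.
ABSOLUTIST = {"always", "never", "must", "certainly", "undoubtedly", "total"}
DIFFERENTIATING = {"however", "although", "alternatively", "perhaps", "partly", "whereas"}

def complexity_proxy(speech: str) -> float:
    words = [w.strip(".,;") for w in speech.lower().split()]
    lo = sum(w in ABSOLUTIST for w in words)
    hi = sum(w in DIFFERENTIATING for w in words)
    return (hi - lo) / max(hi + lo, 1)   # -1 (black and white) to +1 (nuanced)

print(complexity_proxy("We must act. They will never negotiate. Victory is certain."))
print(complexity_proxy("Although talks have stalled, perhaps partly due to timing..."))
```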
But what about now, when we live in a virtual panopticon, when the volume of data available on us is unprecedented? We've got data from public broadcasts, from CCTV, perhaps even from people's personal conversations. What if we can begin to pick up verbal tells, maybe behavioural tells as well, that are predictive of a leader's propensity to invade a neighbour, in the way that that research showed? It might enable us to act much, much earlier to deter. At the moment we rely on indicators like: how many tanks are moving towards the border? Has the country announced conscription? Tells like these are a whole raft of other indicators that are much more sophisticated and might enable something totally different.

If that still sounds like science fiction, let me give you an example from research conducted by a friend of mine in the Department of Experimental Psychology at the University of Oxford. John Gallagher is a reservist military officer and a brilliant psychologist; he's only a young guy, and his research is yielding fascinating insights. He was able to show that if you monitored dialogue between right- and left-wing groups on Facebook, and used contact theory, a psychological theory pioneered at Oxford, you could make predictions about offline violence based on those interactions in publicly available Facebook groups. He showed you could monitor those groups to predict offline violence such as that at Charlottesville. That's where we get into what James was talking about earlier, where we're really knocking on the door of some quite science-fiction-y stuff. What if we knew there was likely to be violence, just from those interactions, before the individuals themselves had even committed to conducting it? It might indicate the levels of policing we need, and the way in which we prepare for and respond to that kind of thing.

Again, perhaps you're still thinking: this is science fiction, Keith, it's nonsense. So let me give you a third example. In 2016, Andy Haldane, who was then the Chief Economist of the Bank of England, made a public speech which I'd recommend everybody download. He talked about the way they were getting after the application of AI to big data, to draw unprecedented early insights to inform the Bank of England's monetary policy. He explained how, for decades, the Bank of England had used the University of Michigan's consumer sentiment survey in the US. That is, researchers would phone people up and ask a whole load of questions, a standard survey; so again, self-report data. Remember Trivers' line: the biggest lies we tell in life are the ones we tell ourselves; that's why we're not always reliant on survey data, but it was the best they had. They would build out pictures of public sentiment, consumer sentiment, in the US, and then they would say: because consumer sentiment is high or low, Bank of England monetary policy should do this going into tomorrow, when the stock market opens, and they refined that as often as they could. He said: that was useful, it was the best we had, all we could use, but we tried something new over the last 12 months; we bought Spotify data. They did it, I'm almost certain, through one of those data brokers I put up on the slide earlier: TowerData, Nielsen, eXelate, these kinds of companies that aggregate all that data and then sell it on.
And from that Spotify data they were able to run an algorithm that looked for correlations between changes in what people in the US were listening to and movements in the Nasdaq and the S&P 500. They were inferring consumer sentiment; I think he talks about mapping the economy in real time. And they showed that Spotify data, the music Americans were listening to, was a better predictor of movements in the Nasdaq and S&P 500 than the self-report data from those surveys the University of Michigan had run for years.

Fascinating, Keith, but why do you care? You're a military intelligence officer. Well, I care because, if you think about it, we've got music that's listening to us as we listen to it. We've got Kindles that read us as we read them. We've got Netflix watching us as we watch it. We've got unprecedented insight into people's mood and behaviour and preferences. Perhaps we could use those kinds of data streams to do something similar to what Andy Haldane did, and map consumer sentiment, to stick with that term for a moment, in countries that are hostile to us. Wouldn't it be fascinating to know? Take consumer sentiment as a proxy for national mood; stick with me on that and say, okay, maybe national mood is a proxy for what the military calls morale; and understand that morale is at the centre of military effectiveness and political influence. If we could map the way a country was responding to particular actions, military or otherwise, we would have unprecedented insight into that country's psyche, into its national mood, into the way it was responding to what we were doing. And we'd get nuanced data: we might know that one part of a city was supportive of something NATO had done while another part was opposed. We might begin to get some really fascinating and important insights that enable us to refine what we do. Now, I could talk through all of the studies on that slide to show this kind of thing, this unprecedented insight and foresight, this real-time mapping of people's intentions to enable forecasting of future actions, but I'll leave it; probably some of you have photographed it, and I'm sure you can get access. But you can begin to see the kind of things we might do.

The next thing I want to talk about draws on an article I wrote for a blog called War on the Rocks, very well known in military circles, perhaps not here; if you're interested, you can download it for more detail. I want to start from that insight I put up earlier: it was estimated that by 2020 there would be 5,200 gigabytes of data on each and every one of us in this room, and actually we already know that estimate is an underestimate. With the advent of the Internet of Things and wearable tech, I think the latest estimate was something like 55,000 books' worth of data on each and every one of us every day. Now, if I'm sat at my desk with my team of 10, 20, 100, whatever, analysts, and each of them is getting 55,000 books of data on everybody in the world just from open sources, they're never going to be able to analyse it. It either sits on the shelf and goes unanalysed, and therefore in the aftermath of something like a terrorist attack people ask: did you know about it? We did, but it was buried in this huge stack of data and we couldn't do anything with it. You're going to have to automate that, or you're going to leave data unanalysed.
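To make the Haldane example concrete: the core of that exercise is correlating changes in one time series with changes in another. A minimal sketch with invented series standing in for the Spotify-derived mood index and the stock index; this is not the Bank of England's actual method:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented daily series: a "musical sentiment" index and an equity index level.
days = 250
sentiment = np.cumsum(rng.normal(0, 1, days))                  # stand-in mood index
index_level = 100 + 0.5 * sentiment + rng.normal(0, 2, days)   # stand-in S&P 500

# Correlate day-to-day *changes*, not raw levels, to avoid spurious trend correlation.
d_sent = np.diff(sentiment)
d_index = np.diff(index_level)
r = np.corrcoef(d_sent, d_index)[0, 1]
print(f"correlation of daily changes: {r:.2f}")
```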
All of that, remember, is unclassified, open-source, publicly available information, albeit you might have to buy some of it in. That's before we consider the vast amounts of military data held on the people and areas we're interested in. It's what led Raytheon's vice president a couple of years ago to talk about how the military today is swimming in sensors and starving for insight; and most military and security organisations don't use the publicly available information I've talked about, they just use the military data. When you combine the two, we're going to be overwhelmed, and consequently we're going to have to automate some of our analytics.

I would argue that that's necessary, desirable and important whether you work in business, politics or the military, because, as a psychologist, I know about humans' cognitive limitations. I could describe some of them to you: information bias, a whole raft of biases that change the way we process information, particularly when it moves at speed. I give a very brief example there of a civilian casualty incident, an incident in Uruzgan in Afghanistan where civilians were killed when all the information needed to prevent it was available. It wasn't prevented because, and this is specifically referenced in the publicly available report afterwards, people were overwhelmed by the data deluge.

That also matters because we say that we will always have a human in the loop. We say that for political reasons, we say it for legal reasons, we say it because we worry what it means when a military organisation delegates authority for lethal force to a machine. We worry about it, and we should; it has really big ethical implications. But we should worry about the inverse too. It's perfectly possible that we might be able to show, empirically, that a machine is better at recognising an enemy combatant, a threat, with a lower false positive and false negative rate than a human in the same position. Particularly when that human is perhaps being shot at themselves. Particularly when there are vast volumes of data available and he or she doesn't know what to look at. Particularly under the pressure of time that is a constant of any military contest. What that means is that the military might find itself in court not because we delegated authority to a machine and pushed ethical boundaries, but because we didn't. Because we face the mother of a soldier who was killed because he or she didn't take the decision to fire when there was a threat an AI could have detected more accurately and reliably. Or the mother, the father, whoever, of a civilian who was killed because a soldier under pressure failed to recognise him or her as a civilian and took the decision to shoot. Humans might well have a higher false positive and false negative rate than a machine; if we can show that empirically, then the mothers of those soldiers and civilians would both be right to be equally outraged that we didn't delegate that authority. In NATO nations, where we're bound by the rule of law and where these kinds of things deeply concern us, we need to think really hard now about this mantra that we will always have a human in the loop.

The last thing on this slide I want to touch on is human-machine teaming. I don't know how familiar that concept will be to this audience, so I'll explain it briefly.
After Garry Kasparov was beaten at chess by IBM's Deep Blue in 1997, and remember that chess was originally a war game, the military looked at that and thought: that's really interesting. Perhaps we're going to get to a stage where computers are able to make better decisions than humans. What should we do about that? But it turned out to be okay, for the next decade or so, if you teamed a human and a machine to play against a computer at chess: the human-machine team always won. That was really good news. Okay, so human-machine teaming is going to be the answer. And we still repeat that mantra; you'll find it in our doctrine, you'll find our senior officers repeating it frequently.

I'm not sure it's true anymore. In chess, for example, when time pressure is on, it's long been the case that the machine alone would beat the human and machine working together. Harari again, writing in Science magazine I think two years ago now, summarised data from the world of human-machine teaming, or centaur chess, and was able to show that, increasingly, even with no time pressure, computers were now so sophisticated that the computer alone would consistently beat the human and machine playing together. Our wetware, you might argue, had become the limiting factor on the software. And that's really interesting. I think that in a growing number of fields we'll find the machine making better decisions than the human, and there will be cases where we will have to rely on it. Consequently, I argue that, increasingly, in decision making and particularly in decision support, humans are going to be out of the loop and off the team, and we need to think really carefully about the technical, professional and, most importantly, the ethical implications of that.

And it's not just in intelligence analysis that we need to start thinking hard about how psychology can illuminate the science of decision making and the application of AI and big data. This slide shows Ke Jie losing to AlphaGo. To an audience like this I hardly need to rehearse it, but again I would remind you that Go, the Chinese game, was originally a war game, designed to test generals and see how good they were. What we saw when Ke Jie lost to AlphaGo, and if you've watched the documentary you'll have seen this happen live, was a point where the machine makes a move that, by its own calculation, no human player would have made across millions of iterations of the game. Yet it had calculated across such a huge mathematical game space, and across a depth of recursive reasoning so deep, that it had got beyond the human reliance on narrative. Any psychologist will tell you that the way humans understand information is through telling stories; it's through narrative. Even a science paper has a beginning, a middle and an end, and tells you a story to make sense of the data.
What was happening here was that the artificial intelligence Google's DeepMind had built was getting beyond narrative. That's why Michael Redmond, the leading Western player of Go, said: this is causing me massive problems. Redmond makes his living as a commentator on the game, and he said there's something inhuman about the way AlphaGo plays: I can't attach a story to what it's doing. He said it's like watching a creature from the future, a human with a superior intellect, perhaps even an alien, playing the game; it's making these otherworldly moves. Really, really fascinating stuff. And it asks us: okay, what about when we apply an AI in a war game, and it has calculated probabilities across a mathematical space so broad and so deep that it's got well beyond the human ability to keep up, and it makes a recommendation that we don't understand?

Perhaps some of you think this is total science fiction. Well, consider the next grand challenge that DeepMind have just taken on. DeepMind applied AlphaStar to StarCraft II, and to any student of war, StarCraft II should sound familiar. It's based on game theory, so there's no one optimal decision that always wins, rather like paper-scissors-rock: what I do depends on what you do, and my optimal decision depends on your decision. It's based on imperfect information: like any military campaign, you have to conduct reconnaissance to find out what's going on. It's based on long-term planning: a decision I make at the start of the game might lead to me losing later on, with nothing I can do about it. And it takes place in real time, in a really large action space. Now that AlphaStar has achieved Grandmaster status in StarCraft II, in fact I think it's on the front cover of Science just this week, it points to the way in which AI might revolutionise warfare.

Perhaps you still think this is science fiction. Well, the Chinese now talk openly in their doctrine about moving to intelligentised warfare: the application of AI at every level to make decisions, because they believe it will ultimately outperform humans, as happened in Go, as happened long ago in chess, and now in StarCraft II. The Russians, in a media report last year, talked about how they're now applying AI to advise red and blue force commanders in massive war games, to help them make decisions. And so we need to think much more carefully about what this means. What happens when an AI makes a decision like the one Michael Redmond couldn't explain? What happens when it calculates that recursive possibility, I think that you think that I think, therefore we should do this, and I just can't attach a story to it? We are probably going to have to accept recommendations we don't understand. Suppose it recommends that in response to a Russian incursion in Moldova we should put on an opera in Baku, and I can't explain to my commander why it says that, but I know, because we've played the game a million times, because we've seen it in the past, that it outperforms us. We're not always going to have explainable AI, but we may have to trust what it recommends.

And finally, to my last point as I see the clock tick down, and the point I most want to drive home. Even if you think all of that is science fiction, consider this. I think the application of automation, applied in its broadest sense, from simple macros and scripts through expert systems to machine learning, is going to revolutionise decision making even if we choose never to apply it. The reason for that is this:
if you ran an AI algorithm of some sort, however complex, in the kind of simulation I'm talking about, it might tell you that you need to go, as we would say in the military, left flanking up the beach, or right flanking, or up the middle. And it would say: we should go left rather than up the centre, with 90% confidence. It would have done that based on the weight of fire, on the armour, on the level of training, on the weather, a whole load of things, and it would give you a mathematical forecast of the likely outcome: it will say x probability in y timeframe. Here's the thing: AI might be a black box, but humans aren't a crystal box either. It's often very hard to disentangle why we made the decisions we did. That matters because we need to compare whether automated analytics, whether they're supporting a decision or making it, outperform the human. And what that means is we're going to have to be much, much more careful in describing how it is we make forecasts and decisions.

To illustrate that: as I said, I'm an intelligence officer, and I have been for the last 18 years, and I can't tell you how good I am, which is astounding really. I don't know whether forecasts I make with 80% probability come to pass 80% of the time, 0% of the time or 100% of the time, and that means I don't know how good I am. That matters because psychology shows us that a forecaster's confidence is inversely correlated with a forecaster's likely accuracy: the more confident I am in a forecast, the more likely I am to be wrong. Okay, you might say, that's interesting. Well, it matters even more because we also know that the more confident I am in making a forecast, the more likely you and my commanders are to believe me: forecasting confidence is positively correlated with your chances of being believed and trusted. So we have a situation in which the people who are most trusted are the people most likely to be wrong. The first thing we're going to have to start doing, then, is really carefully picking apart how we make forecasts. That might be crucial, life-saving even, in my domain of military and security analysis, but it's just as important for all of you, whether you're in business or politics or other fields. We don't know how good we are, because we say we're too busy to baseline our performance.

And it matters if you're making decisions too, because we don't know how good we are at making decisions. When we say, okay, I'm going to make this investment decision, this business decision, or this operational decision in the military, we don't track it: okay, sir, ma'am, whoever is making that decision, why did you make it? Break out the premises, apply probabilities, apply some maths, even if only heuristically, then compare that performance to the performance of an algorithm that is either making or supporting that decision, and see if we get better. We just don't do it, because we say we're too busy.

So the last point I want to make is this: even if you think most of what I said before, precognition, that application of AI to offering otherworldly advice we don't understand, is too far-fetched, at the very least I hope I've shown how the combination of psychology, big data and AI will change how we understand ourselves, how we make decisions and how we campaign, whether in the military, in politics, or in the domain of business and marketing, in ways that are unprecedented and revolutionary.
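One closing aside: the baselining just described, checking whether your 80% forecasts come to pass 80% of the time, is straightforward once you keep a log. A minimal sketch with invented forecasts, using a Brier score and a calibration table:

```python
from collections import defaultdict

# Invented forecast log: (stated probability, whether the event happened).
forecasts = [(0.8, True), (0.8, True), (0.8, False), (0.6, True),
             (0.6, False), (0.9, True), (0.9, True), (0.3, False)]

# Brier score: mean squared error of probabilistic forecasts (lower is better).
brier = sum((p - float(o)) ** 2 for p, o in forecasts) / len(forecasts)
print(f"Brier score: {brier:.3f}")

# Calibration: group forecasts by stated probability and compare to the hit rate.
buckets = defaultdict(list)
for p, o in forecasts:
    buckets[p].append(o)
for p in sorted(buckets):
    hits = buckets[p]
    print(f"said {p:.0%}: happened {sum(hits)/len(hits):.0%} of the time ({len(hits)} forecasts)")
```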
Many thanks.