Welcome back, everyone. We will continue with our next keynote speech. Let me introduce Jón Daníelsson, Director of the Systemic Risk Centre and Professor of Finance at the London School of Economics. He will address the topic of artificial intelligence and systemic risk. Professor Daníelsson, the floor is yours.

Thank you so much for that. Artificial intelligence is growing very rapidly in use in the private sector, and private financial institutions are increasingly using AI for a range of decisions. But the question remains: what impact does that have on the financial system, in particular on financial stability and the chance of crises, as manifested by systemic risk? So much of the discussion of AI is dominated by technology and technological issues. My interest today is to transcend that and discuss how AI might affect the financial system itself. Next slide, please.

Nobody does this kind of work on their own, of course. This work is joint with Andreas Uthemann at the Bank of Canada. We have a number of papers out, and all our work on AI is indicated on the website below. Next slide.

This, of course, begs the question: what do we mean by AI, or, to be more technically correct, what type of AI are we discussing? At the end of the day, there are many different AIs. In the context of finance and the financial system, what we mean is a data-driven machine learning algorithm that uses reinforcement learning to achieve objectives. Explicitly, what we are not discussing is the so-called singularity, an all-powerful AI that can solve all the problems humanity faces. That creates its own financial stability issues, but it is not my topic today. So, generally speaking, it is a computer algorithm that makes decisions human beings would normally make. It finds the best outcomes given the objectives it is given and how it understands the world. Along the way, it advises human decision makers and probably makes some decisions independently, using data, the rulebook, prices, and human decisions to learn from. The key issues are, first, that AI needs objectives: it needs to know what to do much more explicitly than human beings do. And second, compute, meaning the cost of computation for AI, is extremely high, running into many billions of euros. AI is an increasing-returns-to-scale business, which of course has consequences, as we will see. Next slide, please.

To summarize what I plan to discuss today: generally speaking, with some exceptions, private sector AI use is positive. There is plenty of data, the rules are known, and the cost of mistakes is low. That means we get faster, more accurate decisions, and we need less staff than before. The supervisors working for the ECB, the risk managers in the private sector, and the central bankers are all, in effect, training their artificial intelligence successors. But as we move up the problem scale, AI can also undermine macroprudential objectives. It can cause systemic risk, perhaps through collusion, through the way it amplifies stress, the way it leads to booms and busts, and the way it can support criminality, terrorism, and even nation-state attacks. Ultimately, AI will be absolutely essential for the resolution of financial crises, but that is also the area where AI perhaps poses the largest danger to society. And if we think we are only going to use AI for high-level advice, well, AI might choose to present that advice in a way that leaves no alternatives; it can end up becoming the decision maker in effect. And as we start using AI, it can lead to difficult human capital decisions for the financial authorities. Next slide.
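To make this definition concrete, here is a minimal sketch of the kind of reinforcement-learning decision loop I have in mind. It is purely illustrative: the states, actions, rewards, and parameters are toy stand-ins of my own choosing, not anything a real institution runs.

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning loop, purely illustrative of a
# "data-driven algorithm with an explicit objective". The states,
# actions, and rewards are hypothetical stand-ins, not a real market.

STATES = ["calm", "stressed"]           # toy market conditions
ACTIONS = ["lend", "hoard_liquidity"]   # toy decisions

def step(state, action):
    """Toy environment: returns (reward, next_state). The reward
    function IS the objective; the algorithm optimizes exactly what
    is written here and nothing else."""
    if state == "calm":
        reward = 1.0 if action == "lend" else 0.2
        next_state = "stressed" if random.random() < 0.05 else "calm"
    else:  # stressed
        reward = -5.0 if action == "lend" else 0.5
        next_state = "calm" if random.random() < 0.30 else "stressed"
    return reward, next_state

q = defaultdict(float)            # Q[(state, action)] value estimates
alpha, gamma, eps = 0.1, 0.95, 0.1

state = "calm"
for _ in range(50_000):
    # epsilon-greedy: mostly exploit the best known action, sometimes explore
    if random.random() < eps:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q[(state, a)])
    reward, nxt = step(state, action)
    best_next = max(q[(nxt, a)] for a in ACTIONS)
    q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
    state = nxt

for s in STATES:
    print(s, {a: round(q[(s, a)], 2) for a in ACTIONS})
```

Note that everything the algorithm "wants" lives in the reward function. That is what I mean when I say AI needs objectives much more explicitly than humans do: it optimizes exactly what we write there, and nothing else.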
So, to get a handle on this topic, we have come up with six criteria for evaluating the use of AI within a central bank. At the end of the talk, I will connect these to a heat map of particular regulatory actions. The first question is: does the AI engine have enough data? Sometimes it does, sometimes it does not. Second, are the rules the AI has to follow static, or, in the language of computer science, immutable? Third, can we give the AI clear objectives to follow, or do the objectives emerge along the way? Fourth, can the authority the AI is working for, the central bank perhaps, make decisions on its own, or does it need to consult other authorities or even the political leadership? Fifth, when the AI makes mistakes, and of course it will make mistakes, just like humans, can we attribute responsibility for misbehavior and mistakes? And finally, when it makes mistakes, are those mistakes catastrophic or easily rectified? Next slide.

To come to grips with that, we have identified four conceptual challenges for AI use in financial policy. The first is data. Now, this might sound counterintuitive: the financial system generates an enormous amount of data every day, petabytes of it, which should leave plenty for AI to learn from. But there are problems with the data. The first is that financial data is often poorly measured: there are inconsistent identifiers, databases do not talk to each other, and so on. These are problems that can probably be solved along the way, and AI will be very good for that. The key problems are, first, that data is confined to silos. Different parts of the same regulatory authority might not be allowed to share data between them; sharing between regulatory agencies is quite often limited, and certainly across national borders. So data, even though it is ample, is often not of the right type to be useful for financial policy. And secondly, every financial crisis is unique. Of course, every crisis has the same three fundamentals: excessive leverage; self-preservation, or trying to stay alive in times of stress; all amplified by the complexity and information asymmetry so inherent in the financial system. In practical detail, however, every crisis is unique. It could not really be any other way, because if crises were not unique, financial authorities like the ECB would easily have been able to prevent them. It is almost self-evident that crises happen when nobody is looking. That means that, conceptually, crises are unknown unknowns, or they are uncertain in the sense of Frank Knight. Taken together, the data limitations and the uniqueness of crises of course frustrate the existing human-centered macroprudential authorities. But because AI needs data much more strongly than the current setup does, both of these get in the way of the artificial intelligence learning what to do and what it is supposed to do. Next slide.
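To preview how these six criteria will connect to the heat map at the end of the talk, here is a stylized sketch. Every task, score, and threshold below is a hypothetical illustration of the structure, not the numbers behind our actual heat map.

```python
# Hypothetical sketch of the six evaluation criteria as a checklist.
# All tasks, scores (0 = favorable for AI, 2 = badly violated), and
# the thresholds are illustrative assumptions, not the talk's numbers.

CRITERIA = [
    "enough_data",            # does the AI have enough data?
    "immutable_rules",        # are the rules static?
    "clear_objectives",       # can objectives be stated up front?
    "independent_authority",  # can the authority decide alone?
    "attributable_mistakes",  # can we assign responsibility?
    "recoverable_mistakes",   # are mistakes easily rectified?
]

TASKS = {
    "micro-prudential supervision": [0, 0, 0, 0, 1, 0],
    "fraud / consumer protection":  [0, 0, 0, 0, 1, 0],
    "routine risk forecasting":     [1, 1, 0, 0, 1, 1],
    "crisis resolution":            [2, 2, 2, 2, 2, 2],
}

def heat(scores):
    total = sum(scores)
    return "green" if total <= 2 else "amber" if total <= 7 else "red"

for task, scores in TASKS.items():
    print(f"{task:30s} {heat(scores)} (total={sum(scores)})")
```

The point is only the mechanism: score each regulatory task against the six criteria and aggregate. A task like crisis resolution fails on nearly every criterion at once, which is why it ends up deep red in the heat map later.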
We then have the problem of how the system strategically reacts to control. The financial system changes in response to regulation; this is just a manifestation of Goodhart's law and the Lucas critique. Of course, this is a problem for all data-driven analysis. But if you depend primarily on data, as AI does, then it is a real problem that what you measure now, and the stochastic processes you model now, will not be the stochastic processes in effect once a future stress event happens. For microprudential regulation this is not a big issue, because by and large we can consider the financial system to be static, at least on the time scale the micro authority operates on. The macro authority and the macro supervisors, however, always have to consider, and certainly do consider, how the private sector reacts to them. The problem for all macroprudential policy is that the reaction function of the private sector is in effect hidden until the moment we encounter stress. We simply do not know how the private sector will react. Many years ago, together with Hyun Song Shin, now at the BIS, I used this to classify risk as exogenous or endogenous. Exogenous risk is risk as measured, the risk that feeds into risk dashboards. Endogenous risk captures how the system reacts to control and to the events arriving at it. AI focuses almost exclusively on exogenous risk, which is fine for micro, but endogenous risk, which takes the system's reaction into account, is of paramount importance for the macro authorities. That is the third reason why AI can be seriously misled when it comes to macro policy. Next slide.

And the final conceptual reason is that the objectives facing the macro authority are mutable, non-static. In micro, the rulebook is known and mostly static on the time scale on which the authority makes decisions. In macro, the more serious an event is and the longer the time scale, the less we know about what the objectives are. When it comes to the most serious crises, we only know the objectives of macro policy at a very high level of abstraction: we do not want serious damage arising from the financial system. These are not useful operational criteria. One reason, as we certainly learned this year with both Silicon Valley Bank and Credit Suisse, is that we will do whatever it takes to resolve a crisis. We have seen many cases in the past where we change the law, suspend the law, or perhaps use an emergency session of parliament to change the law in response to a financial crisis. Such crises also use significant public and private resources in their resolution. Both of these mean that for the most serious events, the political leadership inevitably has to take charge. And good luck predicting what some future political leadership might do in some future hypothetical stress scenario; we only know what happens when it happens. Ultimately, the resolution processes for the most serious events depend critically on information and private interests that only emerge endogenously, and on intuition. All of this causes problems for AI. AI needs to know the objectives it is supposed to meet much more explicitly than human beings do, and it will find that very difficult here. In particular, resolution is very intuitive, and AI, at least in its current forms, cannot handle intuition. Those four conceptual problems lead to particular channels for instability. Next slide.
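A toy simulation, entirely of my own construction with hypothetical parameters, illustrates the distinction between the two kinds of risk: fundamental shocks are the exogenous risk a dashboard measures, while the feedback from everyone de-levering on the same signal is the endogenous part the dashboard never sees.

```python
import random
import statistics

# Toy illustration of exogenous vs endogenous risk (all numbers
# hypothetical). Fundamental shocks are i.i.d.: that is the exogenous
# risk a dashboard would report. But when trailing volatility breaches
# a threshold, every institution de-levers on the same signal, and the
# joint selling adds volatility the model never saw: endogenous risk.

random.seed(1)
FUND_VOL = 1.0      # std. dev. of fundamental (exogenous) shocks
THRESHOLD = 1.2     # trailing-vol level that triggers de-leveraging
IMPACT = 2.0        # extra selling pressure while the signal is red

returns, window, stress_days = [], [], 0
for t in range(2000):
    shock = random.gauss(0.0, FUND_VOL)
    trailing_vol = statistics.pstdev(window) if len(window) > 20 else FUND_VOL
    # Endogenous feedback: a shared risk model flashing red makes
    # everyone sell together, pushing returns further down.
    if trailing_vol > THRESHOLD:
        stress_days += 1
        shock -= IMPACT
    returns.append(shock)
    window = (window + [shock])[-100:]

print("exogenous (model) vol:", FUND_VOL)
print("realized vol         :", round(statistics.pstdev(returns), 2))
print("stress days          :", stress_days)
```

The model reports a volatility of 1.0; the system experiences more, and all of the excess comes from the reaction to control. An AI trained only on the dashboard data learns only the exogenous part.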
The first channel of instability from AI use is procyclicality, or how it can cause booms and busts. Risk control systems already have very high fixed costs; they have in effect become an increasing-returns-to-scale business. We see this currently in the very rapidly growing trend of risk management as a service, BlackRock's Aladdin being a very good example. Just as with the cloud, because risk management is an increasing-returns-to-scale business, we end up with concentration among a handful of vendors. The reason, of course, is that risk analytics, and AI in particular, is so expensive that running it in house might only be possible for the very largest entities in the public and private sectors; most will end up, and already are, outsourcing to this very small set of private vendors. Layer on top of that the fact that AI is much better than human beings at finding best practices, state-of-the-art solutions, and the best models. Even though this might be positive most of the time, it also creates a problem: it leads to homogeneity in beliefs and actions. It makes financial institutions see the world and react to the world in the same way, both of which amplify the financial cycle and lead to more booms and busts. This means AI will be much more procyclical than the existing human decision-making process. Next slide.

The second problem relates to the incentives of financial institutions. At the risk of oversimplification, we can say that on 999 days out of 1,000 a bank is trying to maximize profits; on the one day out of 1,000 when a crisis happens, it is maximizing survival. It wants to stay alive. That self-preservation instinct is what drives the worst aspects of stress events and crises. Self-preservation is why we get the flight to safety, the investor strikes, the hoarding of liquidity, and the credit crunches: you do not want to supply liquidity to a stressed market, because you want to keep it in case things become seriously bad. It also leads to bank runs and to fire sales. All of these things are caused by the self-preservation instinct. Here AI again works against the system, because it is so fast and so accurate that it will jump on solutions. And those solutions, if you want to survive, ultimately mean behavior that is stress-amplifying: flights to safety, bank runs, and fire sales. Next slide.
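A small numerical example, with payoffs I have made up purely for illustration, shows how the switch in objective flips the optimal action:

```python
# Hypothetical numbers illustrating the 999-days-in-1,000 point: the
# same bank, the same two actions, but the objective switches from
# expected profit (normal times) to survival probability (crisis).

ACTIONS = {
    #                   (expected profit, probability the bank survives)
    "supply liquidity": (5.0, 0.90),   # profitable, but risky in stress
    "hoard liquidity":  (1.0, 0.99),   # safe, but amplifies the stress
}

def best(objective):
    # objective selects which element of the payoff tuple to maximize
    return max(ACTIONS, key=lambda a: objective(ACTIONS[a]))

normal = best(lambda payoff: payoff[0])   # 999 days: maximize profit
crisis = best(lambda payoff: payoff[1])   # 1 day: maximize survival

print("normal times:", normal)   # -> supply liquidity
print("crisis      :", crisis)   # -> hoard liquidity
```

Each institution's choice is individually rational. But when every optimizer flips to hoarding at the same instant, faster than any human committee could, we get precisely the liquidity hoarding, bank runs, and fire sales just described.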
The third channel for how AI can be destabilizing relates to how artificial intelligence engines interact with other artificial intelligence engines. If you have an AI operating for some bank in the private sector, that AI learns from other AIs: the AI in one part of the system is observing, interacting with, and changing in response to AI elsewhere. The aggregate decision-making processes, which interact even when humans run the system, and particularly so once you layer AI on top, are all learning from each other and all changing the system in response to what they find to be the best way forward. This can manifest itself in several ways. AI might choose to attack competing AI in a way that is destabilizing or causes significant private damage. The many AIs across the system may choose to manipulate markets, perhaps corner markets, or engineer booms and busts, which might be individually profitable to the participating institutions but socially damaging. They might even, in times of stress, choose to collaborate in attacking the authorities for some private gain, which might include liquidity injections, regulatory forbearance, or other actions.

Now, even though such things are by and large illegal, it is much easier for an AI to do them than for a human being. These schemes are complex and need coordination, and AI is very good at coordination and very good at complexity. And even though such behavior is illegal, the AI might not know it is illegal: you have to tell it what it can and cannot do. A human being has a much broader, more intuitive understanding of what is legally acceptable. Because AI lacks that, it can find it very easy to engage in such behavior. And because the financial system is in effect infinitely complex, we cannot tell the AI everything it must not do, so it might find that such illegal, manipulative behavior is the best way forward. Ultimately, that gives someone who is perhaps slightly less ethical a chance to manipulate the system, because if you are running an AI, you can excuse yourself and say: the AI did it, I did not know. The AI cannot be held to account, and it will not care if you try, which gives its operators yet another layer of deniability. Next slide.

This takes us directly to the problem all authorities face: they are patrolling what is in effect an infinitely complex financial system. As the system becomes more complex, not least because of AI, it gives an opening to those seeking an advantage, be they criminals, terrorists, or someone else. The authorities have to monitor the entire system, while the person intent on damage only has to find one loophole. Because the system is in effect infinitely complex, this is what in operational research is called an NP-hard problem, a problem that is in effect technologically impossible to handle regardless of the progression of AI. Next slide.

Moving into slightly more existential domains: if one is intent on damage as a nation-state, then as we start using AI for increasingly important decisions and taking human beings out of the loop, which seems to be inevitable, that gives an advantage to those nation-states intent on damage. These entities seek to use the system to attack, or to cause some undesirable behavior in times of stress. That can be very difficult to identify at the best of times, because if you can program an AI to exhibit damaging behavior only in certain states of the world, any kind of stress testing might miss it. You might have no idea which human decision makers will be running the system in five years' time, but you do know that the AI into which you planted some logic bomb can lead to an undesirable outcome, and that allows you to keep these attack vectors in place for a very long time. Finally, nation-states can solve the problem of double coincidence. If you attack the system today, it might not be damaging. But if you had attacked the system on March 16, 2020, or October 1, 2008, when the system was already under stress, if you can find times of existing liquidity crisis and layer an attack on top, the two work together. This is what I call the double coincidence. The increased use of AI for decision making therefore facilitates the job of those nation-states that are intent on damage. Next slide.
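The defender-attacker asymmetry underlying both the criminal and the nation-state threat can be put in a stylized, back-of-the-envelope form; the numbers are purely illustrative, not from the talk:

```latex
% Stylized defender-attacker asymmetry (illustrative numbers).
% Suppose the system has $n$ potential loopholes and the authority
% closes each one independently with probability $p$. Then
\[
  \Pr(\text{system safe}) = p^{\,n},
  \qquad
  \Pr(\text{attacker finds an opening}) = 1 - p^{\,n}.
\]
% Even a very effective defender loses as the system grows complex:
% with $p = 0.999$ and $n = 10^4$,
\[
  \Pr(\text{system safe}) = 0.999^{10^4} \approx e^{-10} \approx 0.00005 .
\]
```

The attacker needs one success; the defender needs all n. Complexity multiplies n, and AI makes the system more complex.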
And ultimately, as we start using AI within the financial authorities, it can cause fairly difficult human capital problems: cycles in skill sets. We are already seeing how this works. As AI use increases, it creates particularly strong demand for individuals with particular knowledge and abilities, but at some future point we no longer need those people and instead need people with a different skill set. I suspect that as time passes we will increasingly demand senior agency staff with both AI knowledge and domain knowledge of financial stability and supervision. One thing to keep in mind is that such people are extremely valuable, and there are not many of them who can be experts in both AI and the domain. Does the authority have the ability to retain such people? If not, supervision and regulation may end up increasingly outsourced to the private sector, which will then perhaps be the only entity able to run these very valuable, very complex AI systems, because of their increasing-returns-to-scale nature. Next slide.

So, returning to the slide I had earlier in my talk, the criteria for evaluating AI: this is why we had the six criteria. AI needs data more strongly than a human being. It performs best when the rules are known and do not change in response to future events. It needs to be given clear objectives, much more so than a human. It performs best when the authority it works for can make decisions on its own. We need to be able to attribute responsibility for mistakes to somebody, which is difficult with AI. And finally, if it makes mistakes, things can go catastrophically wrong, much worse than with human beings. All of these issues already apply to the existing setup; AI in a way amplifies them. Next slide.

I am not going to go through all of the items in this heat map, but we can take individual regulatory actions, what the authority does, and map them onto this list of six items to see where AI is likely to be beneficial and where the risks arising from it are highest. The risks are lowest in day-to-day microprudential regulation, fraud, consumer protection, and routine risk forecasting. As we get to systemic crises, or the resolution of crises generally, that is where AI is most needed, but also most damaging. Thank you so much for listening, and I am happy to take any comments in subsequent email exchanges if you so desire.

Thank you very much, Professor Daníelsson. A very insightful lecture, and I have to admit a little bit scary, of course, especially when you see all the red fields and all the things that can happen. But obviously, as you mentioned, with a lot of vigilance and more work in the field, hopefully the catastrophic events will not happen.