Good afternoon, ladies and gentlemen, and welcome to this press conference from the 47th annual meeting of the World Economic Forum in snowy Davos. Welcome to all of you here in the room and to the folks watching us online and on Facebook. And of course a special welcome to the panel of this press conference today, which I'll introduce to you in a second.

The Forum has been talking about the Fourth Industrial Revolution for a while now. It was the theme of last year's annual meeting and it's still going strong. If you flip through the program, you see it's still an issue that's right at the heart and center of what we're talking about. We've given this press conference the title "Risk Transformation in the Fourth Industrial Revolution". There you have it. You could also say, if you leave out the usual Forum jargon: how dangerous is technological change? How dangerous is that Fourth Industrial Revolution? What are the dark sides? So we have a wonderful panel to talk about this here today. We have a little bit of an optimistic outlook, and I think we have a little bit of a pessimistic outlook, but it is risk that we want to talk about.

Without further ado, let me introduce to you who's joining me on the panel here. To my immediate left we're joined by David Kenny, who's the Senior Vice President of IBM Watson at the IBM Corporation. Everybody knows IBM Watson, and we've actually been talking about it earlier this week here: we launched an exciting City Cancer Challenge with the UICC, and they've been talking about the powerful diagnostic capabilities that IBM Watson has. But I'll leave it to the expert to talk about that in a second. Right at the center of the panel we're joined by Inga Beale, who's the Chief Executive Officer of Lloyd's, based in the United Kingdom. So welcome to both of you.

Without further ado, David: technology is bringing fantastic new innovations.
So I'll let you share some sunshine and talk about the exciting new things that technology does. But I'd also like you to talk about the risks, the dangers that you see.

Yeah, so, well, first of all, we are optimistic about the technology because it's helping people in terrific and wonderful ways and in many ways taking some risk out of their lives. So in the case of health, to better understand what's going on with a human, to better predict a diabetic or an epileptic event, to better predict cancer and find the best treatment for it, is keeping people safer. People who are looking for oil and gas are actually using Watson to look at all the prior projects, understand where the prior risks were, and avoid them the next time. In the financial markets, being able to mark a portfolio to market every day and predict the future of a commercial mortgage portfolio fundamentally helps a bank be more stable and better assess its risk. So we see wonderful things that the Fourth Industrial Revolution and AI can do, mostly to more accurately predict the future.

Of course we have to be concerned about risk. I would say that the things that we care about in our principles are, first of all, being very careful about the purpose, to make sure the technology is being used for a good purpose, for the benefit of humanity. And I think people have to declare their purpose for this technology more transparently. Secondly, we care about increasing the amount of transparency and trust in the system, so that when we're giving one of those answers about cancer or about oil rigs or about a mortgage portfolio, we're clear what evidence we used to make that prediction, so that the human can understand the underlying data behind it. In the past, so many decisions made by humans were made by gut feel, intuition and personal relationships, which don't actually make for better decisions.
But when we're making the decisions we make, which are fact-based, I think we're obligated to show the facts and the confidence we have in that answer.

Thank you. Inga, over to you. You represent the insurance side, the insurance view of this. So you're up all night about risks and what could go wrong, right? So what can go wrong in this case?

Yeah, that's right. But actually, I am also very optimistic about the Fourth Industrial Revolution, because I honestly believe it's making the world better for everybody. However, we look at it from the risk perspective, and we've been a leading insurer at the forefront of risk. Whenever a new risk came along, Lloyd's was always out there insuring it, and we go back hundreds of years, so we've seen much change in the world over those years. But the digitalization of everything, the use of technology, the use of artificial intelligence, is fundamentally changing business models. And it's hard to believe that there will be a business out there in the future that isn't going to be a technology company of some sort. So this is happening, and I do believe it's for the good, so we don't want to discourage any of that whatsoever.

But for us, we're now trying to manage the risks associated with this. And I believe the change is happening so rapidly that we're perhaps not keeping pace with it. There's a huge area of unknowns, such as the legal structures, the legal liability. If something goes wrong, which of course is when the insurance industry is most valuable, we'll be there to help put people and businesses back on their feet. But we don't know now, with some of these new advancements, who's legally liable for what. You've only got to look at some of the autonomous vehicle developments. Who is going to be liable for what in the future? That is one of the trickiest things for us as insurers to work out. So what liability will fall on businesses?
What liability will fall on individuals? It's all very unclear, and while there are existing laws out there, they might not necessarily be moving and adapting to this new environment.

Everything is also becoming so interconnected. When we have traditionally looked at big mega-risks, they've tended to be natural catastrophes: a huge earthquake somewhere, some dreadful hurricane or typhoon, some natural disaster with lots of tragic loss of human life. But generally they were confined to some sort of geographic boundary. We could make an estimate of aggregate exposures within a physical area. The advancement of technology crosses those boundaries; it knows no geographic boundaries. So how are we going to get our arms around how all of these risks and exposures get aggregated together? That's one of the toughest things we're facing at the moment: understanding the systemic risk that's out there in this interconnected world.

Thank you. And if David and IBM Watson came to your office next Monday and said, can you insure us, what would your answer be?

Well, as I said, we're always looking for new, innovative products that we can come up with. If we can unpick the exposures, we are very happy to then price in a fair premium for those exposures. One of the things that came out of the globalization that's been going on: if I just think of the tragic events in Japan, the tsunami and the nuclear disaster that followed, and how that then disrupted businesses across the world, people had no idea of the repercussions something like that could have across the world. Now we have a much better idea about supply chains and how they interconnect, but fundamentally they're still supply chains in a largely physical environment, shipping goods around the world.
This interconnectivity on a digital scale we can't really get our arms around; we can't understand what trigger, what event in one country or one part of the world could lead to something else in another part of the world. We'd be very happy to try and use artificial intelligence to help us solve the conundrums that artificial intelligence is producing. So it's a bit of a circular thing, but we're very happy to see how artificial intelligence can help our insurance industry understand risk better.

So David, you put Inga and her industry in a very tough spot. Has the technology sector in general failed, maybe a little bit, to anticipate what the sheer speed of the progress could bring in terms of risk?

Well, actually, the insurance industry is one of the biggest users of Watson. So I think these are complicated questions, and it's part of where machine learning can help. Back to being transparent: if you can understand all the factors that drove an outcome, you can then draw threads to the interconnectivity and you can build models. These are beyond what a human could comprehend, because they're pretty sophisticated and they take petabytes and petabytes of data and lines and lines of code, but it is happening. So I would actually say it's not a difficult spot. I think this is a problem we can work on together. Yes, there are new risks, but there are also new ways of understanding those risks.

And I just wanted to pick up on that, working together. I think that's so important. The insurers can't work this out; we can't work it out on our own. We're going to have to work together in collaboration. We need a lot of data to be able to do our analytics and pricing. Some of the data that we need we can't get hold of; people don't necessarily want to share it.
So we want to work with bodies and other partners who could perhaps help us with some of the data provision, but importantly we've got to work with the technology companies, the people developing this stuff, so that we can understand what they're designing and work out what the risks are. So collaboration is definitely going to be an important part of this.

Thank you. This is a press conference, and the gentleman all the way in the back is perfectly right to raise his arm. Let's open the floor for Q&A. We have a microphone coming, and if I could ask you to state your name and organization, please, for the sake of our online audience. Thank you.

Jitendra Joshi with AFP, Agence France-Presse. Two questions, if I may, for Ms. Beale. Firstly, one of the forecasts that we're hearing here and elsewhere about AI and the future of work is that millions of white-collar jobs, including in insurance and other financial services, could be at risk if a lot of back-office work gets automated. Is that a future that you can envisage yourself? Secondly, could I just get your view on Theresa May's speech yesterday committing Britain to exiting the single market, and whether that has any implications for Lloyd's business? Could Lloyd's of London become Lloyd's of Frankfurt?

Okay, thank you. All right, so maybe a little bit off topic, but I'll cover the first one. A little bit, yes.

So, jobs are potentially at risk, but I think we should think of it more as jobs changing. When I started working in insurance, 35 years ago this year actually, what I did as an underwriter was quite different from what an underwriter does today. So jobs are evolving and changing all the time as we get more sophisticated, as we've been embracing technology more. So for sure, jobs are at risk, and the underwriter's job in insurance is very much at risk.
I'm now the CEO, and I understand the CEO's job is pretty low on the at-risk list, so I think CEOs feel reasonably confident that we're not going to be replaced by artificial intelligence, but I'm sure there will come a time. Yes, of course, but there will be new jobs, the creation of new jobs. Jobs that we cannot imagine today will exist in the future. And of course we've also got to think about people living longer. We've got many, many people now expected to live beyond 100 years. What's that going to mean for education, for skilling people, for jobs? The idea that people go to school, learn a trade, get a job, and that's what they do for life is going to become a thing of the past. The world isn't going to be like that, perhaps not in my working lifetime, but certainly for the next generations that come along. Jobs are going to be completely different. AI is just one of the factors that is going to dramatically change the way we have to look at jobs and keeping our workforce busy.

Now, Brexit, I'll just quickly mention that. Lloyd's has an important part to play in insurance within the EU. The EU business that comes into the Lloyd's market, excluding the UK, is 11% of the global revenue, so it's an important part. However, Lloyd's has big trading relationships with many other markets around the world, so we will not be moving the headquarters from London; it will be staying there. But in order to access the European business, and importantly for our customers and policyholders to be able to buy the specialist insurance that we offer, we are going to have to set up a subsidiary in one of the EU countries. That's a piece of work in progress at the moment, and we're hoping to make a decision on the right city within the EU for Lloyd's by the end of this first quarter.

Thank you very much. Can I get a sense from the audience again, are there any more questions?
Or are you still wondering whether IBM Watson could have predicted Brexit? Yes, we have a question over there on the left, from the lady, please.

Hi, Sky News Arabia, coming from the Middle East. You're talking about liability and the Fourth Industrial Revolution in the future. I was wondering about autonomous cars: how would you deal with that, and if something happens, who would you blame?

David, I think that's one for you.

Well, we spend a lot of time on all sorts of autonomous things, including cars. Today the liability is with the driver, but of course the software is then doing the driving. So I think it needs to be sorted out, but ultimately I believe in a world where cars will be truly autonomous. Drivers actually create more risks. The software will drive the cars and will interact with other cars that are driving, and the manufacturers of those vehicles will construct that software. So I think that's the way we're going to have to look at it: the same way there's liability in anything else that's automated for you today, the software has to bear some accountability. And a lot of the car manufacturers have already said, yes, we understand, it's going to be our product, and the whole liability and who needs to buy the insurance could switch to them. But all the evidence is that an autonomous car is safer than a human-driven car. I think the trick is going to be the middle ground, when both are on the road. If everything were autonomous, I don't think we'd have any accidents at all.

Yeah, and that actually reminds me of something else, which is cyber, cyber attacks and breaches and all of that, which is a huge area of concern, but also of opportunity, particularly for insurers. But surveys suggest the majority of breaches, some say close to 90%, actually happen because of humans: human error, or, say, an employee doing something malicious.
So that's a very interesting factor, because we're always thinking, well, it's just these machines and technology doing all these bad things, when in fact most breaches are triggered by a human. So I think it's very important that we think about the humans, how they interact with technology, and how we get that right in the future.

Thank you very much. Let me chip in with a question. You are two of about 1,800 senior business leaders here in Davos. If you talk to your fellow private-sector representatives here, do you get the sense that they understand the risk implications of technology, or are we still in the early phase of the Fourth Industrial Revolution, where everybody is just excited about it?

Well, listen, I think most of our colleagues, smart CEOs, are absolutely thinking about protecting and growing their asset base. So they do think about risks and they understand them. I think the interesting dialogue this year versus last year is exactly the point just made, which is that the real risk in the whole thing is humans. So how do we train people so that the humans work better with the machines, and how do we safeguard that the humans don't screw up the machines? This dialogue between man and machine is a newer dialogue this year, which I'm excited about. I actually think it's closer to finding solutions than treating them as two separate issues.

And we need to get the balance right. We must absolutely encourage innovation and progress; we don't want to frighten people and put them off doing this stuff. People obviously get excited about innovation, and they should, but they mustn't forget the risks associated with it. What we're trying to do is work together to really understand those exposures and those risks much better.

Thank you very much. Any more questions from the floor? No, I think we've answered it all. Yes, looking at the time, I think then.
No reason to keep you here any longer. Thank you very much for being here. Thank you very much for watching and thank you for answering all our questions. It's been a pleasure. Thank you.