Perfect, perfect. And maybe let's talk a little bit about what exactly a risk practitioner does. What are the day-to-day things that you get up to?

I think there isn't a single day-to-day; being a risk practitioner is really a very wide canvas. You can have three risk practitioners who do very different things on a day-to-day basis, and they could, by the way, be at the same company and still do very different things. At one end of the spectrum, you could have an actuary working on monitoring activities around asset exposures. At the other end of the spectrum, you could have someone overseeing how health and safety risk is being managed in the same business. Those two people would still be reporting to the same CRO, but doing very different things, and I'm sure they would be having very different days in terms of the interactions and the stakeholders with whom they are engaging.

Okay, perfect.

I suppose I would only add to what I've said that, for me, that variety is really what makes working in a risk function interesting. I've been blessed with a number of roles that have exposed me to very different activities, ranging from impact analysis, to lobbying and engaging with external stakeholders, to working with regulators to explain why something is being done in a particular way, to having conversations internally about how to address specific regulatory issues.

Okay, great. And what is the difference between enterprise risk management and risk management?

That's a very good question. I'm not sure, to be perfectly honest, if there is a textbook answer, but I'll give you my answer. Risk management is exactly as it says: it's about managing risks. I think it is regarded as the more traditional term, and it carries two specific connotations. Number one, it has the connotation of being about managing a specific risk, which could be, for example, health and safety or anything to do with asset management, and the key point is that this is done in isolation from the other risks that might exist in the business. The second connotation of risk management that comes across in that distinction is really about its purpose. ERM brings, first, that holistic perspective of all risks, which is challenging but nevertheless something that has to be tackled and addressed as effectively as possible. But enterprise risk management should also bring a different perspective on the purpose of risk management, which is about how risk management can add value to the business. Protecting the business from bad things happening is obviously very important, and perhaps the bread and butter of any risk function, but one should not stop there; the purpose of risk management should be much wider and should include helping the business make the right decisions to ensure that it achieves its objectives.

Okay, because for a long time the risk manager has kind of been seen as the fun police, almost as someone who blocks business decisions, saying, oh, we can't do that because it's too risky, we can't do that because it's dangerous. But in business, in order to create value, you need to take those risks. So I think ERM tries to also look at the upside instead of just focusing on the downside.
But the question comes in: how does technology now play a role in risk management? You're a risk practitioner, but you've got a big interest in artificial intelligence, and to some people that might seem quite a strange match. So where do the two come in? Is technology there to help risk management, or is risk management there to help the implementation of technology?

That's a good question again. When I think about technology and risk management, I'm not thinking specifically about technology for the risk management function; I'm thinking about technology for the business. One of the questions the business should ask when it's thinking about technology is: what is going to be the impact on the risk profile of the business? And that's a question the business and the risk management function should consider jointly. So that's my angle, if you want.

Perhaps the other point I would add is the flip side of that argument, which is about the adoption of technology. I refer to AI perhaps as a convenience; I'm really thinking here about the adoption of technologies more generally. The question that many businesses have is: how on earth do we go about bringing those technologies that we can see outside into our business? My proposition is that if you start thinking about the impact the technology might have on your risk profile, and about governance and how your business operates, that's a way of starting the conversation. I emphasise starting the conversation, as opposed to ending it, because how you end the conversation will depend on many things, for example what type of technology you're thinking about. But the challenge is sometimes how you start. To give you a simple analogy, the challenge sometimes is not how to climb Everest; the challenge is how you get from the bottom of the mountain to the base camp from which you then launch the final assault on the summit. There is a similar challenge here. I feel that people sometimes look at the fintech that exists outside their own businesses a little bit as Everest, or as climbing Everest. My argument is that thinking about risk management, governance and operating models is a way of pushing you towards the ascent. It may take you as far as the base camp from which you can then push for the summit, but it's not the full answer. I'm very clear about that, because one cannot answer in isolation the question of what's the right technology for a business, or whether AI in this process or that process is the right thing. But one can start those conversations by thinking about risk management, governance, the impacts on the risk profile that you want to achieve or want to avoid, and similar things. I also like to think about infrastructure: risk management is, at one level, a little bit of infrastructure for the business, and if you're bringing in something very new, or relatively new, like artificial intelligence or some blockchain technology, you should be thinking about your infrastructure and whether it's the right one.

Okay. Because this is the thing: we were saying, as actuaries, that models are useful but they're always wrong, and that we shouldn't use them as the final decision maker. There should always be that human element that assists with it.
But seeing the rise of AI and data science, and this new thing, decision science, do you think we're going to get to a stage where the models or the technology are allowed to make certain decisions, such as increasing premiums or reducing excesses on certain products, without the need for human oversight? Or do you think that type of transformation is a little bit too far-fetched at the given time?

To be honest, I don't think it can be ruled out, and I think it might already be happening in other areas. For example, I think it might be happening in credit card businesses: when you apply, I don't think that in every case there will be an individual looking at the decision; it may be fully automated. So the key point about your question, for me, is the target operating model in which you want that technology to operate, and it's not necessarily a binary question, nor necessarily a stable answer either. At the moment, for example, operating models tend to have a significant human component that in many cases could be regarded as not the best use of anyone's time. At the other end of the spectrum, you've got operating models where decisions are fully automated. Now, as you clearly hint, and I agree with you, one needs a fair amount of confidence in models to rely on them. There are also operating models in the middle, where models make decisions within certain parameters, and outside those parameters they call in, or alert, a human to consider the case and take the decision. And obviously there are other scenarios where the AI tools are really acting as advisors, giving a much more sophisticated recommendation to the analyst. You see that very often in the medical field, in particular in radiography, where you need a very keen and sharp eye and the ability to recall similar shadowy images of an ailment. My understanding is that this is an area where AI tools are used to give the analyst, the radiographer, a very sophisticated recommendation about what it is they are looking at. Is it, for example, a cancer, or is it just a bit of fluid that is not a matter of concern? So yes, there might be scenarios in the future, perhaps much more prevalent than today, where decisions are automated, but I'm comfortable that there are many operating models sitting between that and where we are today that could be equally beneficial for businesses and for individuals.

Yeah, I remember reading in your article, you spoke about breast cancer and the accuracy of the practitioner picking it up versus the accuracy of the AI picking it up. They were both around 92 to 95%, but when you combined them you got something much higher, I think up to 99%. The interesting thing with that example is the economics of having a practitioner: all these analysts are expensive, especially in the medical profession, whereas we need medical care around the world, specifically in poorer regions. This is a big opportunity: if AI can almost learn that next 5% and increase that accuracy, we can then help people all over the world pick up on these symptoms before they become a problem.
So I 100% agree that it's great to have the two together, but sometimes it would be good to have the AI making all the decisions. Is that maybe going to happen in the future? Or do you think there is just a bit of an upper bound on the technology? Or, like I said, is it too difficult to say at this time?

I think it's both: there is an upper bound, and the answer will vary across the many different interactions that we have. For example, it may be that it happens earlier, within a five or ten year horizon, for premiums, but for medical issues it may take much longer, if it happens at all, because the downsides of getting it wrong are very different.
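To make the operating-model spectrum discussed above a little more concrete, here is a minimal, purely illustrative sketch in Python of the "middle" model: automated decisions inside agreed parameters, with escalation to a human reviewer outside them. The function names and thresholds are hypothetical, not taken from any system mentioned in the conversation, and the combined-accuracy comment assumes the human's and the AI's errors are roughly independent, which is only an approximation.

```python
# Hypothetical sketch of a human-in-the-loop operating model: the model decides
# automatically inside agreed parameters and escalates grey-zone cases to a person.
# Names and thresholds are illustrative only.

from dataclasses import dataclass


@dataclass
class Decision:
    action: str          # "auto_approve", "auto_decline" or "refer_to_human"
    model_score: float   # model's estimated probability of the event of interest


def route_case(model_score: float,
               auto_approve_below: float = 0.05,
               auto_decline_above: float = 0.80) -> Decision:
    """Automate the clear-cut cases; refer everything in between to a human."""
    if model_score < auto_approve_below:
        return Decision("auto_approve", model_score)
    if model_score > auto_decline_above:
        return Decision("auto_decline", model_score)
    return Decision("refer_to_human", model_score)


# Rough arithmetic behind the radiography example: if a clinician and an AI tool
# each miss roughly 5-8% of cases and their errors are largely independent, the
# chance that both miss the same case is about 0.05 * 0.08 = 0.4%, i.e. a
# combined detection rate near 99%. (Independence is an assumption, not a fact.)
if __name__ == "__main__":
    for score in (0.02, 0.40, 0.95):
        print(route_case(score))
```

The design point is simply that the escalation thresholds, not the model itself, encode how much decision-making the business is willing to automate, and they can be tightened or relaxed as confidence in the model grows.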