Good morning, and to all our online viewers, good afternoon, good evening, from wherever you may be joining us. My name is Saadia Zahidi, I'm a managing director at the World Economic Forum. AI has been top of mind here in the conversation so far in Davos, and I think top of mind for many of us who've been using the low-cost or no-cost tools that have become available over the last year or so. There are of course many more things that are impacting the future of jobs, but artificial intelligence, and specifically large language models, are one big piece of that. And at the World Economic Forum we tried to look at what some of the impact might be. We broke down about 800 jobs into 19,000 tasks and tried to understand what the impact of LLMs might be. Overall, 40% of all of those tasks could be impacted. Now that exposure could be either automation or augmentation, but 60% of tasks across those 800 jobs, across those 19,000 tasks, are not impacted. And maybe a second point to get our conversation started: the jobs with the highest potential for augmentation. Here, in the light blue, you can see the tasks within those jobs that are exposed to automation. In the medium blue, you've got the tasks that are exposed to augmentation, which can be a very good thing. And in dark blue are the tasks that are unaffected. Now of course we picked just the jobs that had the highest potential for augmentation of certain tasks, and I'm sure our panel will tell us a lot more about their views on this. This is simply one analysis and there are many others as well. And to guide us through that conversation, I'm gonna turn it over to Francine Lacqua. Francine, how are you?

Thank you, Saadia. I mean, I'm really excited about this panel because we have industry leaders who will be able to give us an insight on exactly how you use some of the AI tools that are available and how you see it changing, and we also have some great surveys that give us a glimpse into the future of the workforce. So I could not be more delighted to introduce Nicolas Hieronimus, Chief Executive Officer of L'Oréal; Paul Hudson, Chief Executive Officer of Sanofi; Christy Hoffman, General Secretary of UNI Global Union; Azeem Azhar, Chief Executive Officer of Exponential View; and Joe Ucuzoglu, the Global Chief Executive Officer of Deloitte. So we're gonna have a good conversation to first of all try and figure out what exactly we're talking about, because we're talking about transformation. Azeem, you understand this, right? Frankly, more than most. You put it in simple words when you talk about augmentation AI. What kind of wave, transformational wave, do you see in the next 12 months, two years, and then 10 years?

Thank you very much, Francine. I think we've all been surprised by some of the tools that Saadia referred to earlier on. One of the incoming assumptions a few years ago was that AI tools would tackle the routine cognitive work first of all, the everyday tasks. And what we've discovered through these large language models is that they're also applicable to what we might have thought of as creative, discretionary, strategic thinking. So in some of the research that's come out in the past year, we've seen that quite high-salary jobs, quite discretionary jobs in consulting and strategy, lend themselves to augmentation productivity improvements when you pair a talented human with these very early-days AI models.
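Saadia's opening figures, roughly 800 jobs broken into about 19,000 tasks with around 40% of tasks exposed to automation or augmentation, boil down to a simple task-level aggregation. A toy sketch of that arithmetic follows, using made-up jobs and exposure labels rather than the Forum's actual data or methodology:

```python
from collections import Counter

# Hypothetical task-level exposure labels for two made-up jobs
# (illustrative only; not the World Economic Forum's dataset).
jobs = {
    "credit analyst": ["automation", "augmentation", "augmentation", "unaffected"],
    "hairdresser": ["unaffected", "unaffected", "augmentation"],
}

for job, tasks in jobs.items():
    counts = Counter(tasks)
    total = len(tasks)
    shares = {c: counts.get(c, 0) / total for c in ("automation", "augmentation", "unaffected")}
    # "Exposure" here means any task touched by either automation or augmentation.
    exposed = shares["automation"] + shares["augmentation"]
    detail = ", ".join(f"{c} {s:.0%}" for c, s in shares.items())
    print(f"{job}: exposed {exposed:.0%} ({detail})")
```

Aggregating these per-job shares across a whole occupation taxonomy is what produces headline numbers like "40% of tasks exposed, 60% unaffected."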
A talented human. Joe, I mean, this is what you kind of do, right? You go around the world consulting, but you've also done surveys on actually how transformational this could be.

Well, we recently conducted a survey of 2,800 C-suite level executives, and I would summarize their current feeling about all this as equal parts excited and overwhelmed. They're excited because the use cases are transformative, whether it be in the context of drug discovery, or transforming manufacturing by creating digital twins, or much more effective and efficient call centers, and you can just go on and on. But there's a lot of hard work to do. There are too many shiny objects right now, too many buzzwords. There's a tremendous amount of IT modernization that needs to take place to actually get the data into a state that's usable. There are legitimate issues around privacy; companies don't want their data leaking out to train public LLMs. There are issues around intellectual property. So I like the way you framed the question in terms of 12 months, two years, 10 years. We generally overstate the impact over a very short horizon and understate the impact over a longer horizon. This is going to have a huge impact. It's not going to be overnight.

Paul, are you excited or overwhelmed?

I'm both, but in particular I'm excited. I mean, it's interesting. The genie's out of the bottle, right? It's up, it's out, everybody's using it. We've got 11,000 people using AI on a daily basis. I talk to a lot of my peers; there's a mixture of fear around security and data privacy. But it's up, and it's running, and it's doing things that are incredible. And I was struck by the comments and the surveys. You know, for me, for the rock, paper, scissors of it: AI beats human, but AI plus human beats AI. And I think we have to get to that point now where we realize that we have a huge culture change to go through, because people are very defensive. People are protecting data. People don't want to share. People don't understand federated learning. People don't understand that my LLM and my algorithm will be so much more powerful if I share more. And people are not good at sharing inside organizations, and indeed across sectors and within sectors. And this is just the tip of the iceberg. For me, we can drug the undruggable diseases. What a way to start. And it's going to be enabled by large language models.

Yeah, and of course it will change depending on the industry. So it's different at Sanofi than if you run L'Oréal. How are you looking at this, and in what timeframe?

Well, we are very excited today. I mean, we've been using AI, like everybody, for a long time now, to boost our formulation processes, to augment our researchers. But what we see now are the possibilities of gen AI as it relates to creativity, for example, and its capacity to augment our teams. We are a very creative company. We invent products, we create images, and the capacity to use these models to boost our creativity is phenomenal. Already we are working on some advertising packshots. We have made the decision not to use fake humans in our advertising, but as it relates to products, to images, we do fantastic things. So short term, or rather midterm, because we need time to train everybody, we've already trained 6,000 people but we have 90,000, it will be a fantastic augmentation, I think the title of the panel is very well chosen, of our employees. So there are lots of things to pay attention to, and I guess we'll talk again about it, but it's very exciting.
Christy, you focus, of course, on jobs, on humans, on workers. So how worried are you about not putting the workers and the humans first with this technology?

So I represent workers across service industries, ranging from professional athletes on one side to caregivers and cleaners on the other, but also including finance, IT, call centers and telecom workers. So everybody's afraid: what does this mean for me? The reality is some sectors are gonna be hit way more heavily. As you pointed out, call centers, that's the big one, where LLMs have already been in use for a few years, so we have some data to look at. And then on the other side, in the media industry, we've seen two strikes this year over getting the right to negotiate guardrails around the use of their images and their writing. So I think among the creatives there's a lot of fear, and they're taking steps to address that through bargaining. But in the other industries, I think there's a mix of: this could be great. Some of the evidence coming out of call centers, for example, is, yeah, this makes our job easier and better. On the other hand, we need the opportunity to sit down and negotiate with the employer to make sure it's fairly implemented, to make sure we get our fair benefit from it, that not all the gains flow to the business, that it's fair, and that some of the risks are mitigated, including job security. On a two-year or ten-year horizon, I think it will be more gradual than what people are fearing right now. We've seen some of these big transitions for bank workers, for example, where many jobs have been eliminated over decades, and it's been done in a way, at least where there are unions, which is respectful, managing through attrition and early retirement and so on. So I don't think it necessarily means a huge displacement, just an adjustment.

Nicolas?

Yeah, I wanted to react on the employee point of view, because what we're seeing, first of all, right now, short term, is that it's a job creator. I mean, half of the hires we've been making over the last three years have been related either to data or to AI. So right now, it's creating jobs. And midterm, I see my teams, they're all working too much, and they're desperately hoping to have some sort of solution that helps them crunch the data, come up with better PowerPoints and not waste hours doing them. So of course there may be some industries or some types of jobs where it's gonna be a bit more radical, but I see this as a real way to free up time and probably get our employees a bit more work-life balance. I mean, I think Azeem stressed me out on Monday when he told me, in a room full of people including all of my bosses, that I need to be 20% more productive thanks to AI in a couple of years.

Well, I mean, you may feel that that's pressure, right? That's certainly what some of the data is showing: right across job tasks and categories, you get 20, 30, 40% productivity gains. I think one thing we have to be very wary of is what has happened historically. When automation appeared in manufacturing in the 19th century in the UK, there was a 60-year period where wages fell relative to economic growth. It was called "Engels' pause" by economists, and we need to learn from history and ask what we can do to avoid a multi-decade situation like that, which did lead to some political unrest, before we eventually got to a better social contract in the 20th century.
But we can go into that with our eyes open. And I think Christy's point about collective bargaining, and how well workers' rights are respected, becomes really, really important at this moment. Once we sail through the exuberance of what the technology can do, we have to keep an eye on the workers' ability to participate in that decision-making.

Yeah, Paul?

Yeah, I think it's interesting. The newspaper headlines are about job losses. But the reality is the nature of work has changed; we all know that. And we also know that, as Nicolas said, we're recruiting more and more people. They may not be the same people doing the same things, but we try our best to make sure that we can retrain and reskill. But I think there is this journey to more meaningful work. People don't want to do PowerPoint. They want to be amplified, do they not? I mean, me personally, to be honest, I understand it, but it's not my favorite subject to get involved in. But we don't spend enough time talking about what this will enable that is not possible for human beings. I'll try and give a small example. You know, on my phone I have all the company's data, encoded and protected, of course: 130 terabytes of data every day, analyzed in real time, the equivalent of 14 million Excel spreadsheets, which would take 60,000 people working daily to give me an insight. And I get Instagram-style reels of where to look in my business, for opportunity, for risk, in real time, with no human involved in it at all. And our people move from doing analytics to working with insights to doing something that has an impact. And the productivity gains that come from that, the speed that comes from that. I worry less, in fact. We have no objective, of course, to reduce the number of people because AI can do that. We have a big objective to increase productivity, and we have an even bigger objective to generate more insights that can lead to a more valuable delivery of healthcare for patients. And that's so exciting. Most of these things can't be done by a human being, not even one good at Excel or PowerPoint. That's been the main skill for the last 15 years; finally we're getting rid of it.

Joe.

I think part of the challenge here is that we're taking what we can see and what is available today, and we're trying to project the impact over a long period of time. The environment's not static. In fact, we're on a sharply rising curve in terms of how quickly these models are gaining sophistication. So it's a little dangerous to predict too much based on the state of play today. All of that said, there's a big debate out there about how this ultimately plays out through a macroeconomic lens of productivity, and job creation versus destruction. This is going to make work more meaningful. This is going to make people more productive. There's no doubt. This is also gonna take some elements that are currently done by people and allow those tasks to be performed by the AI. Now, in every prior wave of technological innovation, there have been far more new roles for humans created than old ones destroyed. That's still the consensus base case here: that ultimately you see more net new job creation.
Now, whether it all happens on the same timeline, we certainly don't want a 60-year gap, generations spent replacing those jobs, given the social consequences. But there are some who have expressed a concern that this time is different, because the technology is moving so quickly up the curve of human capability that it will not replicate the pattern of past waves of innovation, and that has a whole host of new consequences associated with it. But the smart-money best estimate is that over a foreseeable horizon, this is a net job creator. This makes people's lives better.

I assume the concern, of course, is that you're no longer pairing the human with the technology, that the technology could actually go past, I guess, human capability. And there are also a lot of faults. If you look at some of the LLMs, we were talking about it in the green room, there are still, I don't know whether you need better chips, it will just get better, but there are a lot of faults in what we're looking at now in the LLMs.

There are. We should also look at what happens with augmentation. When the first chess computers beat humans, for a few years the best way to play chess was what was called centaur chess: you took a human augmented by the computer, and they would beat computers working on their own. That period lasted less than a decade before you were just better off letting the computer play on its own. And that's certainly been the case in other areas. For example, GPS: when the first GPS units came out, we were better with the GPS. Now we just trust Waze. One of the challenges we have to contend with, a scenario we have to play out, is that this augmentation period that we're all excited about around here only lasts for a few years, as the capabilities of these systems, and the process change around them, eliminate the space for the human. And so we then need to think about how quickly we can create the new jobs, right across our economy, that will no doubt emerge. And I think that's going to be one of the challenges.

So I guess the concern is that it just goes so fast, and that a lot of chief executives who are not sitting here don't think about this retraining, don't think about what the future actually looks like longer term.

Yes, I think they have to think both about the augmentation over the next few years and about the possibility that it turns into wide-scale task replacement, and how they're going to create the new roles.

Christy?

Yeah, let me just come in on the point that you made earlier about a 20% increase in productivity. There was a study published about a month ago that said that by 2030 the average white-collar worker, or 80% of white-collar workers, could do in four days the same job they now do in five. So I think when we talk about jobs, we also have to think about whether we should move towards shorter work weeks. And, as you pointed out, work-life balance. You know, some of the staff at UNI use ChatGPT, and they're happy to have more time, to get the same amount of work potentially done in less time. So there's not anybody who doesn't think, wow, I could do my bullet points more quickly with some support, or some other basic simple tasks. But going back to the question of the future, I do think we have to be more open-minded about reconstructing the work week as well. I mean, it doesn't necessarily mean that 20% of people lose their jobs. It could mean that we're working four-day weeks.
We don't wanna repeat that experience from the 19th century, which gave rise, of course, to the Luddites and a whole series of uprisings around the use of technology. We don't wanna repeat that. We want this to be a win-win, and that's really our ambition: to make sure workers are embedded in the process and bargaining has a role.

Nicolas, a four-day week?

Well, you know, it's early days, to be honest, and we're looking at the future, because right now we're in a period where people started working from home with COVID. So we brought our employees back to the office, and they have the possibility to work from home two days a week. And what's very, very important today, rather than talking about four-day weeks, is to have people work together. Again, we're a creative industry, and I know so many employees, at L'Oréal and at so many other companies, who have been working from home for months and have absolutely no attachment, no passion, no creativity. So right now my topic, even more than getting this fantastic benefit from the LLMs and from gen AI, is to have people continue to work together. Yet I know that whether you work three or five days a week today, you have very long hours at L'Oréal, and as far as I'm concerned, if we can get people to work even 10% less and have more time just to think, to discuss, to spend time at the coffee machine and brainstorm, that would be a fantastic achievement, and I truly believe in that.

And just to move to another topic, something we may discuss, and Paul alluded to it: I think this AI, this gen AI, will also bring fantastic benefits to consumers, in terms of helping them make decisions, helping them manage their health, helping them choose the right beauty products amongst the jungle of products out there in the market. I think that's also a very positive effect of gen AI.

Yeah, and I wanna come back to both you and Paul to see exactly how you're using it now, because you're dealing with specifics; sometimes it's all a bit theoretical. Joe, can you give us an insight into what sectors are using AI or augmentation the most right now, and whether that will change over time?

So you're seeing ubiquitous use cases. There are some that have leaped out in front; I'm sure Paul can comment on life sciences, which has some companies that are at the forefront here given the use cases. But this point about the consumer is really important: ultimately we need to demonstrate that this is gonna make real people's lives better. And when you look at the potential of this technology to solve some of humanity's greatest challenges, this is gonna be a big part of the equation for climate science. This is gonna be a big part of the equation for new treatments that extend the length and quality of human life. This is gonna be a big part of the equation for food security. And yet, what are we spending our time talking about? It's gonna take the jobs away. It's gonna result in privacy concerns. We're sitting here riling people up over the risks, and my fear is that we're in a bit of a race here.
We have to win the hearts and minds of society, to show that this is ultimately gonna be a huge benefit to people, so that we don't let the concerns influence the political and regulatory process so much that too many guardrails are put in place before we ever get the chance to demonstrate the benefit.

Yeah, but we have a whole bunch of elections this year, and Azeem, if you don't take all the workers with you, as we know, it polarizes, right? And so you get extreme politics.

Yeah, I think it's a balancing act. So Joe's observation, that we need to find pragmatic, positive stories about the technology, about the potential of the technology, is really, really important. It's time for a reset in how we talk about AI and augmentation: less about unalloyed fears and much, much more about pragmatic, real steps that improve things for workers, improve the work day and improve things for consumers. Christy's vision of the four-day work week, we can build on that. 40% of Gen Z workers have side hustles. So how do we start to rethink the nature of the employment contract to enable these young, energised, creative people to work while keeping their side hustles going, selling second-hand sneakers on Instagram or whatever it happens to be? So we have an opportunity, as part of this technological shift, to stick to some nice positive stories that are practical, and also to start to rethink that relationship between the worker and the firm.

Azeem, who gets it right? In terms of politicians, again, we need a big thinker, a big regulator. Who do you turn to?

I mean, I struggle to find the visionary politician from the major economies who articulates this well. I think some of the smaller economies, Estonia, for example, can talk about the digital transition in a very, very articulate way, but we have experts from other countries who may have better perspectives.

Paul, give me a sense, maybe concretely: what does it mean for Sanofi? In life sciences there's so much that you can do, if not protein folding, just the advancement of medicine and technology.

Yeah, I mean, just before that: I think there's a lot of discussion about cloud sovereignty and about rules and regulations. Of course there should be, but I think it's overshadowing the incredible opportunity that we have. In healthcare, we believe we're the first healthcare company to use AI at scale. We take two approaches. I see Alex from Insilico in the audience, who may be the first person to bring a drug through phase two that was born out of AI. It's never been done before. I think he's got a shot at it; we may work together on it. We simply can't imagine enough chemical structures and biological structures to be able to find a solution. And, you know, only about 10% of diseases have medicines, which means there's so much out there that we just can't find a mechanism to treat, or we don't understand the pathway. We're changing that. At Sanofi, we have two types of AI. Expert AI, which is working on structural biology, trying to improve understanding of inflammatory processes to bring relief for people. That's about 6,000 people doing highly bespoke work, supercomputing, lots of power, lots of GPU work, to do things that have never been done before. Then the other 85,000 people in the company are using what we call snackable AI, which is what I referred to earlier: getting nudged to make them more effective.
And this is a real challenge for people, because most of my peers in healthcare just want to do pilots. I'm not even sure they have use cases at this point; they want to do pilots. They're worried about cyber threats. They don't ever want their data to leave the house. They don't understand federated learning and training algorithms while leaving data in safe places, particularly patient data.

Which is kind of fair, Paul, isn't it? I mean, I don't want my data to be everywhere.

I think large language models work because they're large. And so one of the interesting things is, I've got all of Sanofi's data, right? I've got 50 years of toxicology data, and it means that I have a great understanding of what that looks like, and I can use AI to curate it. Isn't it better in healthcare if all the companies put that data together, so we can make better medicines that are safer, faster? The reality is people go, oh, but it's my data. But the truth is, I don't want to see your data. I just want to train my algorithm on your data, so my algorithm is more effective, so I've got more chance. And by the way, you can train yours on my data, because what matters is that more medicines get created for patients. So we have two levels of stress. One is: do companies really adopt AI? I say no, but hopefully they will. And do sectors understand what working together in a pre-competitive space can do to lift all boats, to do something incredible, certainly in healthcare?

Does it definitely help with innovation, or, if there's not this competition element in life sciences, does it actually stifle innovation? Because you're looking at the same data.

Yeah, but I think, as with any major industry, a lot of people do things that are very similar; it's who does more with it that normally wins. I think in this era of "my data is my thing", you find that your data won't be enough to train at a large enough scale to get the insights that others who share will get. And I think most people haven't made that leap yet. We will, because we see it as a competitive advantage.
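Paul's "I don't want to see your data, I just want to train my algorithm on your data" is, in essence, federated learning: each party trains locally and only model updates are pooled. A minimal sketch of the idea on a toy linear model, with synthetic data and no connection to any company's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

def make_local_data(n):
    # Synthetic "private" dataset held by one participant.
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

# Three hypothetical companies, each keeping its raw data on-site.
companies = [make_local_data(200) for _ in range(3)]

def local_update(w, X, y, lr=0.1, steps=20):
    # Gradient descent computed only on local data; only the updated weights leave the site.
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

w_global = np.zeros(2)
for _ in range(10):
    local_weights = [local_update(w_global.copy(), X, y) for X, y in companies]
    w_global = np.mean(local_weights, axis=0)  # the coordinator averages weights, never sees raw data

print("recovered weights:", np.round(w_global, 2))  # close to [2.0, -1.0]
```

In practice, techniques such as secure aggregation and differential privacy are usually layered on top so that even the shared weight updates reveal as little as possible about any one party's data.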
Nicolas, can you give me a sense? Again, you have a very large company, and you've touched on this. It goes from marketing to training to, of course, how you sell; you can match color foundations more easily, which I guess is good for the consumer and good for your sales. But can you give us some concrete examples of how you're using some of this technology now, and how that will change?

Yeah, well, I could give you maybe two or three examples. Quickly, first, as for research, AI is an obvious booster. We are, like many companies, in the middle of a regulation frenzy that forces us to reformulate a lot of our products, and if we were to do it the old way, it would take us a century and we'd never meet the deadlines. So we have AI-powered formulation tools that go many times faster, and that invent, to be honest, structures and formulas that our scientists would not have come up with. So it's not only an accelerator but also a door opener. That's internally. And then there's what we do for the consumer. I've just come back from CES in Vegas, where I had the honor of doing the opening keynote, and first of all I was fascinated by the amount of innovation powered by AI, particularly in the healthcare domain.

But as far as we're concerned, for example, we introduced something called Beauty Genius, which is a conversational assistant for women who want a beauty routine recommended to them. It analyzes their face and their hair, and they can have an exchange with it. So it's like a human-to-human discussion, but human-to-AI, and it's extremely powerful in terms of solving a big pain point for consumers. So you have, on the one hand, the research it powers, and on the other hand, how do we make the consumer happier, how do we answer their queries faster? Today it takes us 11 hours on average to answer one of the 80 million queries we get every year; soon we'll do it in less than one hour, and it will be more accurate. So all these things are powered by AI, and we've just opened our L'Oréal GPT, to have our own safe space for discussion.

So you're not sharing? You're not sharing cross-industry?

We're not sharing. We're not sharing so much. We have to disagree on a few things as well, as far as the panel goes.

I mean, again, when you speak to chief executives and do surveys, do they want to share? Sharing is better, but if you're competitors, is it necessarily a better outcome overall, or is it?

So this is new. We're in, to use your baseball analogy, the first inning, and the initial reaction is to look inward and protect. So the predominant conversation with clients is: we want to leverage the technology, we certainly want to take advantage of everyone else's data that's been used to develop the public large language models, but any use of our own proprietary data should stay within the company's own private LLM. So essentially, take your proprietary data and layer it onto the available public LLM to customize it. Now, over time, I suspect we will agree that there is societal value in sharing for certain use cases. What mechanism exists, whether industry consortia, regulators, or government stepping in, so that if we can demonstrate that we're actually gonna create a better healthcare outcome, a greater likelihood of solving some disease, by taking patient data across a whole host of different organizations and aggregating it, we don't just fall to the lowest common denominator? What happens if eight agree to do it and two don't? So that brings up the question of the role of governments, the role of regulation, for those use cases where there's such a compelling societal value in sharing.

But Joe, I spend a lot of time also thinking about climate change and how you get a uniform set of data. I mean, we don't really have a uniform set of data right now, in how you collect data so you can analyze it; you could be analyzing different things.

We don't even have a uniform set of data within individual companies. So this is part of the problem: it's easy to talk at 30,000 feet about all of the potential and all of the use cases. If you look at the state of corporate IT right now, you have a lot of legacy systems, mainframes from decades ago, things patched together with manual processes. That's not a recipe for feeding data at scale into a large language model. So we've got to do the hard work to actually modernize the IT environments, and then we can have the conversation about whether we can aggregate across multiple companies under common data standards.
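Joe's "take your proprietary data and layer it onto the available public LLM" is typically done by fine-tuning or, more often, by retrieval augmentation: the internal documents stay in-house and are only pulled into the prompt at query time. A minimal sketch under those assumptions; the documents here are invented and `call_public_llm` is a placeholder, not any vendor's real API:

```python
# Toy retrieval-augmented generation over private documents (illustrative only).

private_docs = [
    "Toxicology study 2019: compound A showed no liver signal at 10 mg/kg.",
    "Manufacturing note: line 4 digital twin flagged a yield drop in week 32.",
    "Call-center log: most repeat queries concern delivery times.",
]

def retrieve(query: str, docs: list, k: int = 2) -> list:
    # Naive keyword-overlap scoring; real systems would use embeddings and a vector store.
    q_words = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return scored[:k]

def call_public_llm(prompt: str) -> str:
    # Placeholder for a hosted model call; the proprietary text is sent only as prompt
    # context, not used to train the public model.
    return f"[model answer based on a prompt of {len(prompt)} characters]"

question = "What did the toxicology data say about compound A?"
context = "\n".join(retrieve(question, private_docs))
prompt = f"Answer using only this internal context:\n{context}\n\nQuestion: {question}"
print(call_public_llm(prompt))
```

The design point is the one Joe raises: the public model supplies general capability, while the company's data remains behind its own boundary and is injected per query.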
Christy, where do you see the most excitement around some of these LLMs and the changes they will bring, either regionally, country by country, or by sector? And again, it's different if you're in life sciences than if you're working in something else.

Yeah, I mean, we represent workers in services. So in science, it's clear there's a whole other set of potential applications that are very exciting. In services, right now, there's been very, very little application of gen AI at scale in large service settings apart from call centers, and that's been studied. We see a little bit in finance and banking. And of course, for the actors and writers in the media sector, I'd put that apart, because there it's a question of using their image and their voice and so on. That's a different use case, more about ownership and control over image, and I think that's a different issue that has to be dealt with in the context of copyright law and, of course, mixed with collective bargaining.

But finance and banking: we were just having a conversation with the ECB president. She was saying that how they actually look at and go through data now is really helped by AI, because it's just easier to go through social media to understand how the consumer is feeling. And so it gives them, they think, better visibility into the data they're looking at.

I think that in the finance sector, the workers have been going through such a long period of impacts from technology, not generative AI so much as algorithmic management and all kinds of other tools, that have led to a kind of surveillance that workers really do not like. And that's sort of the old, I don't want to use the word old, generation of technology, because it's very much in effect across a lot of industries, ranging from warehouses to bank workers. Algorithmic management has been deeply unpopular, and it has not necessarily resulted in more productivity; it's been about control rather than, I would say, enhancement. It's not augmentation, it's really just "we're keeping an eye on you and you're gonna meet certain quotas." So that's been very unpopular. And there has been a gradual job loss in the banking industry, because of ATMs, for example, leading to fewer bank branches in the developed world, and there's more of that elsewhere. So in finance, there's not necessarily a view from workers of "great, we're welcoming this technology." I do think there is a need, and there's a lot of discussion in Europe, where there are plenty of unions and works councils and so on, to really engage in this process, but it hasn't been a case of "isn't this great?" It has not been that kind of experience for them.

I'm gonna go to Ukraine in a second, but Azeem, can you tell us what you're worried about? I know you're optimistic about AI and some of the solutions, and we're very optimistic on this panel, but there are still pitfalls that chief executives, world leaders and the C-suite need to look out for.

I think one of the hardest areas is the unknowability of all of this. We can't really plan for what will happen. We thought that AI would be extremely precise, like a cold Vulcan, and it turns out to be a little bit fictitious, a little bit hallucinatory, and we now have to deal with that fallout. We thought that AI would not help people be empathetic, and it turns out from recent studies that doctors who use LLMs deliver their news with a higher degree of empathy. We're not even certain whether the general models will always outperform highly specialist models. Historically, the reverse has always been true:
the specialist beats the generalist in a particular use case. With LLMs, that might not be the case. So I think one of the challenges for any leader is to go into this seeing the opportunity, but also recognizing that nobody, not even the scientists, really understands the technology and how it might play out. And that could lead us into regulating too early. It could lead us into making decisions that we then want to reverse. So we have to figure out the architecture that allows us to explore, deploy, and do all of that safely, while continually revising our fundamental assumptions.

But Azeem, do you have any insights? How do you know a system works? How do you know the LLMs work? I mean, at Bloomberg we sometimes play around with them; we prod them a lot and hope they work. But it's so risky, right?

It is risky.

They could get it wrong. There's no real way to check that what you're looking at is real.

There's no real way to check, but we are making very, very slow progress. And I think part of the challenge of "there's no real way to check" is that, at a mathematical level, there's no real way to check. It's not just that it's hard. And so one of the questions over the next few years will be: do we develop new technical safety protocols, or do we develop new architectures that are themselves more reliable and deliver the reliability that we need?

Paul's shaking his head. He's saying, absolutely not, they are reliable.

I look at our, well, I referenced the 130 terabytes. When I joined the company, the budget process was something like 3,000 slides and it lasted forever. Last year, it was 30 slides. It was an AI-based case, 99.3% to 99.9% accurate for the following year's performance. And we get to make investment decisions for upside. We get to talk about opportunity instead of the politics of presenting slides about a budget where everybody's hedging; it can be highly accurate, certainly financially. And in the work itself, it takes some effort to avoid hallucination, but particularly in areas where you have good context, you can very quickly work that out with better prompting. I don't think there's any real drama about that.

I mean, I Google a lot of stuff that comes up really wrong, so I don't even know... You know, ChatGPT is a whole other problem. Joe?

Well, you're homing in on what may be one of the fundamental thresholds for adoption, this question of how you know it's right. Well, how do you know humans are right? So let's take a use case: driving a car. Every year, I think the statistic globally is that half a million people are killed in accidents caused by human beings. If I told you that we had a technology driven by generative AI that would only kill 50,000 people around the world, what would you say? Logically, rationally, one might say, wow, you're gonna cut deaths by 90%, that sounds great. That is probably not human nature. If you went forward and said, we're gonna use this technology that's going to kill 50,000 people a year, people generally hold the technology to a higher standard than they hold the imperfections of humans. And so we have a societal issue to navigate in terms of where we set the standard. Are we comparing it to perfection, or are we comparing it to the current alternative, which is a very imperfect human decision-making process?

Yes, I mean, I would turn to trusted news sources, right?
How do you know it's right? If you're a news organization, you have a number of steps that you need to check, and it's checked by humans. With some of these LLMs, you just don't know. If I ask, what's the best makeup brand in the world?

L'Oréal.

Yeah, but what if it doesn't say L'Oréal, right?

It means that I have to understand why. I have to understand why, but it's true. I mean, we've all read about the hallucinations: the capital of Canada is Toronto, because that's what's most commonly found on the internet. The systems will have to learn, but probably one of the questions is: should we at some point tag very clearly what comes from gen AI and what does not? If I take images, I think it's important. If we had fake interviews with any of us, if the next panel were a fake Francine speaking and saying horrible things, how do we make sure this is the real Francine? There are still a lot of questions, and frankly, I'm not capable of answering them right now.

Especially in elections.

Especially in the year of elections. So there are worries. And I think the big question around regulation is: should we regulate the science, which seems hard, or, as some panelists said earlier, should we regulate the use cases, encourage some and try to forbid some? I think that's a big question for the future.

Azeem was actually really good when I asked him about hallucinations, like when you Google something versus when you put it into ChatGPT, and you had a great explanation, of which I understood 80%. But it's basically the way the model is built, right?

Well, it's just that it learns the underlying concept. So it learns that one top university is like another top university. So, Francine, if it gets it wrong, it will likely put you at another great university; it won't suggest that you did athletics training when you left high school, because that's too far a cognitive leap. So I think the scientists are making a lot of progress in dealing with factuality, but I don't think we can be certain as to when those results will be delivered and put into the systems that people are using.

So, yeah, different in the science world, I guess.

It's different. Look, I don't think we can get to the next golden generation of science without artificial intelligence. I think it's impossible to imagine what's going to happen. It may take time, we all know that, but I think it's extraordinary. Just back to use cases, I'm interested in the driverless car thing, just to make a quick comment. My daughter lives in San Francisco, and between 10pm and 7am she takes a driverless taxi, because it's safer. She would rather be in a fully autonomous vehicle than on her own; that's an emotional choice and a fear choice. And I think there are still risks of accidents, but we have to remember that the use cases are very different depending on where we are. I listened to a defence CEO talking recently about AI. And, as perhaps Nicolas said, we've been using AI for a long time in a lot of industries. But she made this really important comment about the role of the human. She said, look, if a missile is launched against the United States, human beings do not respond; AI responds. You can't intercept a missile flying at 5,000 miles an hour with a human being. AI takes over, and AI will defend the United States.
If you launch a missile at another country, a human decides, because AI does not have the moral compass to make the right decision. She said, so we bifurcate: we support two completely different approaches on a very important subject, because one a human can't do, and one a human must do. And I just think the world is going to end up a little bit like that. For us in science, humans can't imagine on their own what could be done to treat cancers, and we just have to accept that we're going to break new ground by doing both things.

But do you remember the balloon that was floating over the US? I mean, again, I'm sure before launching anything, it was a human that decided to hold on and see what it was.

And the Americans didn't see it, because humans couldn't process the volume of data that their sensors produce, right? So they now have to use machines to do a little bit more than that.

Christy, I mean, when you look... We're almost out of time, so I'm going to ask you, all of you, in 30 seconds: what are you watching out for in developments in AI and augmentation over the next 12 months? Is there something that you're excited about?

I think "excited", from a worker point of view, would not be the right word to use. I mean, as part of humanity it's all exciting, but the people I represent are anxious to know: what does this mean for me? And I understand we don't want the anxiety to preclude the progress, if that's the message, of course. But in order for that anxiety not to preclude the progress, they need some guarantees, some commitments that they're on that journey together with their employers, that they understand what's happening, that there's transparency, that they have a role, they have a right, and that they'll be part of the process, both of avoiding risks, such as they may be, which could be safety, or training that they need, and also of supporting deployment. The OECD study on AI came out and said that people are much happier where they've been consulted and brought into the process, and they're much more eager to use it. So I think this is where we're at: we want people to be excited, but we also have to address the anxiety and the fear. You see these headlines, 40% of jobs, what does that mean for me?

Joe, in 20 seconds. The next 12 months. I'm gonna give 20 seconds each to finish this off.

Well, at Deloitte, we're all in. I certainly understand the fears; we've seen this movie. We can't stop technological progress, you mentioned the Luddites smashing the machines. We've got to embrace it and do it in an ethical, responsible way, but recognize that this is moving, and it's moving quickly.

Paul?

Look, most people are using AI already. I used to go into the street, I can't believe I did this, I used to walk into the street and put my hand in the air to get a taxi. How random a chance of getting a taxi is that? I can now use Uber because it's more predictable. It's here. I think it's leadership, it's training, it's support, it's recognition that we have to find the right rules and regulations. But it's such a privilege to lead a company like Sanofi at a time when AI is the most disruptive it can possibly be.
Well, if last year was the year of extreme fears about extremely unlikely outcomes, I think this year we have a chance to have generative, positive, practical conversations about the technology right across society. So that's what I'm looking for in 2024.

Nicolas?

Well, I think the key question is trust. In the end, we've been talking a lot about it: how do we ensure that there is trust at every byte, as in terabyte? And that's data privacy, it's the ethics of algorithms. In my domain you're talking about skin color, race; there are lots of biases that we have to work around and make sure do not happen. So we have to work together to make sure that this great progress, this revolution, doesn't have more downsides. I would add sustainability: the computing power of these gen AI systems is incredible, and everybody's talking about future technical solutions that are gonna make the sustainability problem go away, but right now it's there. So I'm very optimistic and excited, but still worried about that. And if I may add one last thing, as we're talking about workers: this is the first industrial revolution for white-collar workers, and a lot of blue-collar workers are actually happy. I'm talking about, you know, representing hairdressers here: hair won't be cut by AI, but they will use AI to make sure you've got the right hair color. So that's what I'm talking about. Happy, happy world.

Thank you so much, everyone, for a wonderful conversation. Thank you.