Just out of curiosity, who was here yesterday? OK, a few people, because we're going to have some overlap with the things I spoke about yesterday. Yesterday I gave a framework for understanding how to think about AI, and also some of the macro impacts these technologies are going to have on society. Today, what I'll try to do is bring that framework to life with examples, particularly for this industry. Apologies if some of that is repetition, and it's always embarrassing when people tell you about me.

If you remember, yesterday I talked about automation. I talked about building systems that make a decision today, and tomorrow they make the same decision. The definition of stupidity is doing the same thing over and over again and expecting a different answer, so by definition, automation is stupid. Not that it isn't valuable: these technologies are incredibly valuable, and they drive a huge amount of value in business.

There are many definitions of AI, and I won't test you, but the most popular definition, and perhaps the weakest, is getting computers to do things that humans can do. Yesterday I tried to argue why that isn't a very sensible definition. Humans can find patterns in about four dimensions and solve problems with up to about seven moving parts. Computers can find patterns in thousands of dimensions and solve problems with thousands of moving parts. So benchmarking machines against humans is a very silly thing to do.

The definition of AI I wanted to get across yesterday comes from a definition of intelligence: goal-directed adaptive systems. Systems that try to achieve a goal, move towards that goal, and ultimately learn from the decisions they're making. And we know that, for the most part, systems in production don't adapt themselves. Almost everything we currently do in industry with AI looks like this.
Building safe adaptive systems is extremely hard. If you remember, a few years ago Microsoft launched a bot on Twitter. Lots of teenagers decided to tease that bot, and it became a sexist, racist bot very quickly. That's what happens when you put systems in production that can adapt themselves: they can adapt in ways you can't predict. The true paradigm of AI is systems that can safely adapt themselves in production.

Then I gave a little bit of a history lesson. I think this recap is important because it distinguishes the different types of AI that are out there. This is AI in the 60s and 70s: we would write down lots of rules and try to infer new knowledge from those rules. Take Socrates. If I tell you that Socrates is a man, and that all men are mortal, I can infer that Socrates is mortal. And we know that approach didn't really scale; it didn't really work.

In the 80s and 90s, a new type of AI came along that modeled how brains work. I mentioned that 20 years ago my PhD was trying to build the brains of bumblebees. Bumblebees have a million brain cells and can do very, very smart things, but 20 years ago it was impossible to build brains of a million neurons. Then we had agent-based systems, where different knowledge sources all argue and try to come up with a better solution. And then we got brains again. Now these brains have billions of neurons, and we can teach them to do things that humans can do. This is what we're currently calling large language models, or generative AI. These brains are really good at knowing things about the world and telling you what they know about the world. They are not good at reasoning, and they're not good at decision making. So AI has actually been quite cyclic over the past 70 years.
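The rules-and-inference style of AI from that history lesson can be sketched in a few lines. This is an illustrative toy, not any historical system: the facts and rules are hand-written, and a forward-chaining loop derives new knowledge until nothing new can be inferred.

```python
# A minimal sketch of 1960s/70s-style symbolic AI: hand-written facts
# and rules, with new facts inferred by forward chaining.
# The knowledge base here is illustrative only.

facts = {("man", "socrates")}
# Each rule maps a premise predicate to a conclusion predicate:
# "all men are mortal" becomes ("man" -> "mortal").
rules = [("man", "mortal")]

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for pred, subject in list(derived):
                if pred == premise and (conclusion, subject) not in derived:
                    derived.add((conclusion, subject))
                    changed = True
    return derived

all_facts = forward_chain(facts, rules)
print(("mortal", "socrates") in all_facts)  # the inferred fact
```

The scaling problem is visible even here: every new piece of knowledge needs another hand-written rule, which is exactly why the approach didn't survive contact with the real world.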
First we had agent-based reasoning, agents with different knowledge about the world all arguing with each other. Then we had brains, then agent-based reasoning again, and now we have brains. And the next iteration is actually going to be agent-based reasoning again: all of these different brains, with different perspectives on the world, different biases, different experiences in terms of the data they've been trained on, arguing to achieve a better goal. So the next iteration of AI is large language models all trying to coordinate towards a single answer.

But as I said, looking at AI through technology, looking at AI through definitions, isn't very useful. I argued that over the past 10 years, new algorithms, new data, and new computers have allowed us to do interesting things. I talked about six categories of interesting things, and what I want to do today is bring those categories to life with respect to this industry.

The first category, if you remember, was task automation. In our organizations we have lots and lots of repetitive, mundane tasks, and what we want to do is use relatively straightforward, simple algorithms, macros, to free people up from those tasks. For example, checking whether a piece of content aligns with your brand identity: it's repetitive, it's mundane, and we can use relatively simple algorithms to solve that problem. To give you an example beyond what's inside the organization, one of the things we use AI for, or relatively simple algorithms, is identifying the right imagery to match the text. Again, these are not necessarily sophisticated algorithms, but it's a relatively simple thing we can do to make sure we've got the right imagery matching the right text. So that's automation.
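That brand-identity check really can be a handful of rules. Here's a minimal sketch, with an invented palette and banned-word list standing in for a real brand book:

```python
# A toy version of the "does this content align with our brand?" check.
# The guideline values are invented for illustration; a real checker
# would encode an actual brand book (palette, tone, banned claims).

BRAND_PALETTE = {"#000000", "#ffffff", "#c0a062"}   # hypothetical brand colours
BANNED_WORDS = {"cheap", "discount"}                # hypothetical tone rules

def check_asset(colors, copy_text):
    """Return a list of guideline violations for one piece of content."""
    issues = []
    for c in colors:
        if c.lower() not in BRAND_PALETTE:
            issues.append(f"off-brand colour: {c}")
    for word in BANNED_WORDS:
        if word in copy_text.lower():
            issues.append(f"off-brand wording: '{word}'")
    return issues

print(check_asset(["#C0A062", "#FF0000"], "Luxury pens, now cheap!"))
```

No machine learning required: a lookup and a word list already automate a task that would otherwise eat a reviewer's afternoon.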
The second category is content generation, and content generation has now become quite important. Everybody's excited about large language models and the ability to create imagery and sound and, eventually, video. One idea I want you to take away today is that it's not good enough just to be able to generate generic content. If you go to Midjourney or DALL·E or whatever and you say, create an image of a cat holding a pen in space, it will do a very good job of creating a generic image of a cat holding a pen in space. But if you're a creative working on the Montblanc account, for example, you want that to be a Montblanc pen, you want it to be a Montblanc cat, probably a black and white cat. You want that content to be styled according to the brand. So what we can do is train models to understand the identity of brands and then create content that is brand specific. That's really quite exciting. As an example of something we've done externally, one of the campaigns we've created here is the Milkmaid ad. We used generative AI to take a historic painting, and, as we can all now do in Adobe Photoshop, extend that image to tell a much more interesting, richer story. So this is an example of using generative AI to create campaigns.

The third category was human representation. A few years ago I would have talked about using AI to replace people in call centres, or salespeople, with things that look and behave exactly like a human. But one of the ideas I want you to take away today is the concept of understanding audiences. Historically, we would try to correlate pixels, what happens within a piece of content, what happens within an ad, with success: with clicks and activation and likes.
But there's a whole load of information missing there, which is what goes on in people's heads when they look at a piece of content. You create a very complex narrative in your mind that has history, context, nostalgia, excitement. Until about a year ago, it was very difficult to understand what happens in people's heads. With large language models, we can essentially build brains that represent how audiences think and perceive, and we can use those signals to better predict activation, which I think is very, very exciting.

What's also exciting, as I mentioned yesterday, is that we can build brains that represent cultures, minority groups, political parties, newspapers, and we can show those brains content to see how they react. Are we offending anybody? Are we violating legal constraints? For example, in the UK you're not allowed to run advertisements in which people who look successful are consuming alcohol; that breaks the advertising rules. So you'd want some of your brains to be able to spot that and say, that is inappropriate content.

But again, what I wanted to do is show you an ad, which I'm sure you've seen before. This is where we used a very famous Bollywood actor. We trained a model on his character and then used him as a deepfake to help corner shops, very small brands, sell their goods. So this is an example of taking a human being, replacing them with something that looks exactly like a human being, and using it in one of our campaigns.

The fourth category is insight extraction. If you remember, yesterday I talked about how, over the past 10 years, we've been using machine learning and data science to extract insights from data, hoping that those insights ultimately lead to better decisions.
And if you remember, I argued that giving human beings better insights doesn't necessarily lead to better decisions. I'm a big advocate of solving the decision problem first and then working backwards. But what is really powerful about these technologies is that they can not only predict things about the world, they can explain why the world looks the way it does. And it's that explanation that's very, very powerful. Explainability matters not only because it helps us understand the world better, but because regulation will put more and more pressure on us to explain how our algorithms make decisions. Really, the only difference between software and AI is that some AIs are opaque in how they make decisions. Sometimes that doesn't matter: if an algorithm doesn't have a material impact on people's lives, it may not matter that you can't see how it makes its decisions. But if it does have a material impact on people's lives, you need to make sure the algorithm is explainable, so that it's transparent, auditable, and governable. So I'm a big advocate of making sure that all of the algorithms we develop are explainable, not just because we learn more about the world, but because regulation is going to demand it.

In our world, we can use machine learning to understand the activation of campaigns, and there are already examples of companies doing that. But again, what's really important is the explainability. You can explain that if you change the background from red to blue, you're going to get more activation. Now, it might be that blue doesn't align with your brand guidelines, so you'd want your brand checker, your task automation, to verify that. Or it might offend somebody, so you'd want your council of responsible AIs to make sure the content isn't offending anybody.
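The red-versus-blue point can be made concrete with toy data. This is not a production model, just the simplest possible "explanation": compare mean activation across one creative attribute. The numbers are invented; a real system would use an interpretable model, such as regression coefficients, over campaign logs.

```python
# Toy illustration of explainability: report WHICH creative attribute
# drives activation, not just a black-box score. All data is invented.

ads = [
    {"background": "red",  "clicks": 120, "views": 10000},
    {"background": "red",  "clicks": 110, "views": 10000},
    {"background": "blue", "clicks": 180, "views": 10000},
    {"background": "blue", "clicks": 170, "views": 10000},
]

def activation_by(attr, value, ads):
    """Mean click-through rate among ads with the given attribute value."""
    rates = [a["clicks"] / a["views"] for a in ads if a[attr] == value]
    return sum(rates) / len(rates)

red = activation_by("background", "red", ads)
blue = activation_by("background", "blue", ads)
print(f"blue vs red uplift: {blue - red:+.4f}")  # positive => blue activates more
```

The output is a statement a human can act on ("blue backgrounds lift activation by 0.6 points"), which is exactly what makes it auditable in a way an opaque score is not.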
So this is how these things start interacting together. One of the campaigns we did a few years ago used machine learning to spot hate speech. When somebody posted hate speech on Twitter and somebody else was about to retweet it, we would say: if you retweet this, we will donate money to the very thing you're hating against. So if you hate them and you retweet it, we're actually going to give them money. This is where we used machine learning, which is much more sophisticated than traditional natural language processing, to identify hate speech and nudge people's behavior in a more positive direction.

I already talked yesterday about complex decision making. Remember the maths: if I've got five people to allocate to five jobs, there are five factorial, five times four times three times two times one, 120 possible ways to allocate them. If I've got 15 people, I have over a trillion ways to allocate people to jobs. If I've got 60 people, I have more possible combinations than there are atoms in the universe. These problems exist in many different disguises across our organizations. In our world, the classic one is cross-channel optimization. You've used your predictive models to predict whether your content is going to get the activation you want, and now you want to push that content across your channels to maximize your return given budget and time constraints. That's one of these large-scale optimization problems, and if you're old enough, you'll remember it used to be called operations research.

And finally, human augmentation. This is where, a few years ago, I would have talked about exoskeletons and cybernetics, things that make us stronger, faster, and better.
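Going back to the allocation arithmetic for a moment: the factorial claims are easy to verify, and a brute-force solver shows why they matter. Enumerating all 120 assignments is fine for five people and hopeless for sixty, which is exactly the gap operations-research methods exist to close. The cost matrix below is invented for illustration.

```python
import math
from itertools import permutations

# Verify the combinatorics from the talk.
assert math.factorial(5) == 120         # 5 people, 5 jobs
assert math.factorial(15) > 10**12      # over a trillion
assert math.factorial(60) > 10**80      # more than atoms in the universe

# Brute force only works at toy scale: score every one of the 120
# assignments of 5 people to 5 jobs and keep the cheapest.
cost = [  # cost[person][job], invented numbers
    [9, 2, 7, 8, 6],
    [6, 4, 3, 7, 5],
    [5, 8, 1, 8, 3],
    [7, 6, 9, 4, 2],
    [3, 5, 8, 6, 9],
]
best = min(permutations(range(5)),
           key=lambda jobs: sum(cost[p][j] for p, j in enumerate(jobs)))
print(best, sum(cost[p][j] for p, j in enumerate(best)))  # optimal cost is 15
```

Real solvers (the Hungarian algorithm, integer programming) find the same optimum in polynomial time, which is what makes cross-channel optimization tractable at business scale.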
One of the things we're doing is going to sound really creepy, and I don't know if I mentioned it yesterday. With one of the biggest brands in the world, we take a large language model and train it on the data of a marketing employee, and then we ask that digital twin questions: if I put you on this project, will you work well? If I put you on this team, will you thrive? But we know we can use AI to augment ourselves in many other ways. This is a project we did with Microsoft where we enhance visually impaired people's ability to know the ingredients and contents of goods: not just taking the words on a product and repeating them, but using generative AI to enrich that text, to describe the product in a way that resonates with the person who is visually impaired. So this is where we can use AI to extend and expand our own capabilities.

Those are just some examples of how we use these categories internally to operate more effectively, and how they can be used in our campaigns. And as I said, this is a really nice framework because it helps you navigate this complex world of safety and ethics and all sorts of things.

I won't talk about ethics at length, but there is a confusion here. Lots of people call themselves AI ethicists, and I would argue, controversially, that there's no such thing as AI ethics. Bear with me. One of the differences between AI and human beings is that AIs don't create the intent; human beings create the intent. Your intention is to, I don't know, push marketing content down multiple channels to maximize reach, or to route your vehicles to maximize the number of deliveries, or to allocate your workforce to maximize its well-being.
You have an intent, and then you build a system to try to achieve that intent. Where that system goes wrong, maybe it's biased, for example, I would argue that is a safety problem, not an ethics problem. Ethics is the study of right and wrong, and it's the intent that needs to be scrutinized from an ethical perspective.

Let me give you a thought experiment. Imagine you're on the board of a ride-hailing company like Uber, and you deploy an AI and say: your goal is to maximize revenue, and here's all the data you can access to make that decision. It turns out the AI has access to the battery data on your phone, and that when your battery is very low, you will spend more money on your ride. Essentially, you're vulnerable, and it's exploiting that vulnerability. Now you, as a board member, a decision maker, need to decide: is that something you're comfortable with? That is the ethical question. Do you want to prioritize those rides because people are vulnerable? Do you want to identify vehicles that have chargers in them? Or do you want to remove the data altogether so the system doesn't have the ability to exploit people? Which is why it's really important to have algorithms that are explainable. So ethics is the study of right and wrong, and it's the intent that needs to be scrutinized. Sometimes AIs don't achieve their goal and cause harm; sometimes they overachieve their goal and also cause harm. We need to make sure we build systems that are safe.

Now I'm going to talk a little bit about innovation. This is a hierarchy, and hierarchies breed certain types of interesting relationships. This is not a picture of my company, although most people do sell drugs to each other; that's quite common at Satalia. But hierarchies, I would argue, are not conducive to innovation. Remember: the faster you can adapt to a changing world, the more intelligent you are.
And adaptation means you need to be able to innovate. So over the past several years, organizations have been embracing new organizational paradigms: agile, scrum, design thinking. The principle behind these structures is to enable organizations to adapt more quickly to a changing world. The reason I talk about this is that I know we can use AI to sell more stuff, and I know we can use AI to optimize and improve our operations; we've given you some examples. But if you are not using AI to unlock the creative capacity of your workforce, you are dying from the inside. This is something I'm really passionate about. In fact, I would argue that the technology stack I've just been through is going to be a commodity within the next five years. We already have access to very cheap compute and more data than we need, and all of the tools to build AI are essentially free; they're open source. The battleground for companies is not technology. The battleground is talent: how do you motivate and enable talent to innovate?

Steve Jobs had a very good definition of innovation: innovation is creativity that ships. And the most important word in that definition, for me, is "ships". Innovation is generating ideas and getting to the point where somebody's willing to pay for them, and that process is long and hard and painful. The faster you can innovate, the more adaptive you are, and the more intelligent you are.

There's a very good book by a guy called Dan Pink on motivation. There are three things that motivate people: autonomy, mastery, and purpose. Autonomy is giving people the freedom to do what they want; mastery is the ability to become really good at what they do; and purpose is giving them something higher to align themselves with. Now, the first challenge organizations have is attracting talent in the first place. I work with lots of brands, lots of companies, who are trying to build out their own AI teams.
And I ask them: where are you on this matrix? How sexy is your brand or your industry, and how interesting or challenging are your problems? If you're not sexy and you don't have interesting problems, you're not going to attract talent. If you're sexy and you've got interesting problems, you'll attract talent and they'll stick. And if you're in the other two quadrants, you'll attract talent and they'll churn; they'll leave. You'll hire loads of AI experts, they'll probably build some really good models for you, and then they'll be expected to support and maintain those models. They don't want to do that; it bores them, so they end up leaving. One of the biggest challenges organizations have is people leaving because they're not doing interesting things.

Once you have engaged that talent, you need to create an organizational structure that enables it to thrive. Over the past decade or so, we've seen a flattening of organizations; new tools and new organizational paradigms allow us to create these flat structures. I'm a big fan of flat structures. My company was only about 150 people when we were acquired by WPP, and we never had any fixed managers or hierarchies; people were always free to work where they want, how they want, on whatever they want. What I want to do is essentially build a decentralized platform like that and scale it to the planet. I think everybody should have the ability to work where they want and how they want, but that's a different conversation. The point is that the same types of technologies we use to profile our customers and understand their behaviors, we can use inside our own organizations. So this is a digital representation of my company.
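One concrete thing such an organizational graph makes possible is spotting structural risk: people whose departure would split the network into disconnected silos. A minimal sketch, using an invented six-person collaboration graph (in graph terms, these people are articulation points):

```python
from collections import deque

# Invented collaboration graph: who works closely with whom.
graph = {
    "ana":  {"ben", "cara"},
    "ben":  {"ana", "cara"},
    "cara": {"ana", "ben", "dev"},   # bridge between the two clusters
    "dev":  {"cara", "eve", "finn"},
    "eve":  {"dev", "finn"},
    "finn": {"dev", "eve"},
}

def is_connected(graph, removed=None):
    """BFS reachability check, optionally pretending one person has left."""
    nodes = [n for n in graph if n != removed]
    seen, queue = {nodes[0]}, deque([nodes[0]])
    while queue:
        for nb in graph[queue.popleft()]:
            if nb != removed and nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return len(seen) == len(nodes)

# People whose departure would leave disconnected silos.
silo_risks = [p for p in graph if not is_connected(graph, removed=p)]
print(silo_risks)  # -> ['cara', 'dev']
```

At six people this is a toy; at a few thousand, the same analysis (done efficiently with a single-pass articulation-point algorithm) is what lets you see a silo forming before it happens.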
We can see that if one of these two people down here leaves, I get a silo. I can identify the people you go to for feedback and for inspiration. I can see people's relationships and their skills. I can identify secret lovers in my company. I can identify people who are going to leave the company before they know they're going to leave. So this does raise lots of interesting ethical questions. If my employees thought I was using these insights to squeeze more utilization out of them, they wouldn't let me use this data. But it's the intent: the intent is that by understanding these insights, we can align people to work in a way that aligns with their values and the values of the company. I can't emphasize enough that talent is your battleground; it is not technology.

In fact, what organizations are ultimately trying to do is create a digital twin of themselves; you might have heard this term. At the moment, organizations are building data lakes, putting Tableau or some analytics layer on top, and thinking they have AI. I would argue that's not necessarily a sensible thing to do. I think Gartner said that 80% of data lakes are going to fail because they are not value driven. I'm a big advocate of identifying the problems, the frictions, that exist across your organization or your client's organization, and then applying the right technologies to address those frictions whilst building your digital transformation over time. But ultimately, organizations are trying to build a digital twin of themselves. At the moment, if you or one of your clients runs a marketing campaign that you think is going to increase demand by 10%, do you have the ability to project what will happen across your supply chain? Will your suppliers default on their supply? Do you have enough space in your warehouse?
Do you have enough delivery drivers to move the goods? Do you have enough people in your stores to fulfil that promise to the customer? The answer is that you probably can't answer those questions right now; most supply chains are disconnected. But the promise of a digital twin is to connect all of that together so you can run simulations: what happens if I do this, and what do I then need to do to optimize my supply chain to fulfil my customer? What's really exciting for me about this, particularly in the world of marketing, is that we might be able to find spare capacity in our supply chain. Maybe one of our suppliers is giving us a deal, or we've got some overstock in a warehouse, and we can use that to create tailored micro marketing campaigns that drive footfall to particular stores.

Actually, I think there are three digital twins. The first is of your operating model, which, for those of us in media and marketing, is creativity, production, dissemination, and then learning whether it's working and adapting very, very quickly. And of course AI is going to completely optimize that process. It's going to allow us to create content extremely quickly, get it out there, and see how it's being received. But it's also going to unlock the creative capacity of our workforce, enabling them to come up with much more interesting and dynamic campaigns. So we have an opportunity in this industry to apply AI to completely optimize our operating model. The second digital twin is our workforce. Again, I can't emphasize this enough: you need to allocate that liquid layer of resources across your operating model so that you can adapt more quickly to a changing world. And finally, processes. We all have processes: onboarding and offboarding, hiring and firing, all the back-office processes.
But again, AI is starting to be used to reinvent how those work. I don't know about you, but in some companies, if you want to ask for some money, you ask your manager, and that goes up a chain of emails to some CFO who says yes or no without knowing what they're saying yes or no to, which is a bad way of making decisions. What W. L. Gore, the makers of Gore-Tex, do is make all of their expenses publicly available, and people then self-police each other. And you can use AI to nudge people and steer their decision making. So AI will help us create these three digital twins, and I think that if you can converge them over the next decade, you're going to have a very successful organization.

I won't bore you with this, but yesterday we talked about this framework for identifying frictions: from turning data into information, to extracting insights from that information, to making decisions. We can use these six categories of AI applications to figure out the right applications to solve those frictions, and then use different prioritization criteria to start knocking off those problems and building value across the organization. I'm very happy to talk about this; connect with me on LinkedIn, and I'm happy to share these slides as well.

So, just to close: yesterday I talked about the future impact of these technologies, and we went through the PESTLE of singularities. I just want to call out two that I think are really important for this particular industry. The first is the political singularity, which is where we no longer know what is true. I think it is this industry's responsibility to make sure we create the right infrastructure to authenticate content. If we can authenticate content, then we will mitigate the risk of misinformation bots causing political ramifications and challenging the fabric of our reality.
The second one I want to call out is the legal singularity, which is when surveillance becomes ubiquitous. We know these technologies are now phenomenal at understanding humans and their behaviors; they're even very good at manipulating those behaviors to someone else's benefit. That's an incredibly powerful position to be in, and I believe this industry needs to make sure we have the right guardrails and structures in place to mitigate the accumulation of wealth and control in the hands of a small number of people.

So on that note, I'm going to close. I'll just remind you all: it's not good enough to have a strong, profitable business. You need to have a purpose. If you don't have a strong purpose, you're not going to attract customers, clients, or talent. So thank you very much. I'm going to stop there. Thank you.