Hello and welcome to the AI for Good Global Summit. Our guest today is Will Jackson; he's CEO and founder of Engineered Arts. Welcome.

Thank you.

So Will, you told me earlier you built your first serious robot aged 13.

Yeah, that's right. I was always very hands-on with materials. I think my parents gave me a workbench when I was four years old, and so I spent my childhood making things. As soon as microcomputers became available, I taught myself to code, and I wanted to bring that code into the world. So it was about embodying that code, and I made a simple robot that was just two wheels.

Tell me what you do today. What does your work involve?

Today I run a company that's now around 50 people. We have three locations and we build humanoid robots. They're primarily designed for interaction with people, and of course humanoid robots that interact with people are an embodiment for AI. So that's the connection with AI for Good.

Give me some examples of what you do.

We make robots for public spaces: museums, science museums, theme parks, visitor attractions. You can give an AI a character, a personality. It can talk to people, it can entertain people, it can be very informative. It's a lot of fun. We also make robots that are used for exploration and scientific research, so we supply a number of universities. There is a lot of activity in the AI area, but if you're interested in the human interface, the embodiment of that AI, then there are not many choices, so we provide our robots for research labs. The other area we work in is with corporates, generally communications within corporations and to their customers, using AI-enabled robots to present information.

Okay, thank you. You've spent the last couple of days here at the Global Summit. What's the takeaway for you?

It's interesting to see so much activity and so much attention in this space. We all know it's received a lot of media attention, and there are very rapid developments.
One sense I do get is that there's quite a bit of confusion. I think we could have some more transparency and some more honesty. There are things that AI can do fantastically well, there are things that AI cannot do yet, and there are things that may not be possible for 20 years. The conversation can get a little over-optimistic, and I think that could be harmful. We need to temper expectations. We need to look inside the black box. It would be good if we were really talking about how language models work, how computer vision works, how predictive models work. The difficulty is that it's a very complex subject, but I think if people could understand a little bit more, they would probably have less fear and make better use of the technology.

Okay. Now, you said to me earlier, "I wouldn't be in AI if it wasn't for good." What are some of the benefits, then?

I think being able to interact with people. What we're doing is building the interface to AI, the embodiment of AI. We're all used to using a screen and a keyboard. The keyboard's heritage is the typewriter, from 200 years ago; screens go back to the 1940s and 50s. So we're using these very archaic interfaces. What if we could interface with technology the way you and I are talking now? What if an AI could understand that nod of the head you just made? What we're interested in is using AI to improve the interface between us and our technology, to empower us to use our technology in a much simpler, more creative way. You wouldn't have to be a technical guru or a genius to use an AI-empowered robot; you could just talk to it the way you would talk to another person.

So I see enormous opportunities for good, then. And in particular, you're interested in how it can promote quality education?

Education is the foundation of everything. Education is not just knowledge; education is discovery.
It's about finding out about the world, discovering physical phenomena within the world, exploring the world, seeing it for what it really is. If AI can contribute to that, I think that's a huge positive, and I really think it can.

Can you give me a concrete example?

AI is fantastically good at retrieving knowledge. If you want an encyclopedic knowledge of the world, this is what AI does very, very well at the moment; it can hold more information than any person can. So as a way of retrieving information, it's really great. Does that form a quality education in its own right? No, it doesn't. But it's a start: it can give you background, it can give you information.

Are there problems?

Yes. Not everything that a language model produces is true. I'll give you a very simple and funny example. Our robot is named Ameca, and the name is actually a corruption of the Latin amica, "friend." That's the origin. We asked the robot itself, "Where did your name come from?" We'd never trained it on this, and it made up its own version: it said it stands for "autonomous, mechanical, emotional, cybernetic assistant," which sounded tremendously convincing. And we thought, well, that's wonderful, but completely untrue. This is called hallucination in the world of language models. So there are dangers, and we have to look at how we can make an AI that's truthful, an AI that's transparent. These are some of the challenges.

Sure. And if I can continue with that theme of the challenges of AI: how do you ensure that the technology you and others are developing is ethically safe?

That is a huge challenge. To give another concrete example: what do you do with the memory of an AI? Suppose we've been having a conversational interaction with an AI, and I've told it some personal information, or maybe nothing very personal at all; it has simply recognized who I am and where I was at a particular time.
Now, I could be sensitive about that information; it's a GDPR problem as well. Where is that data stored? And how do we explicitly get permission? Does an AI agent have to ask with every sentence, "Do you mind if I remember that?" That could be pretty tedious, so we're going to have to come up with some mechanisms, some transparency. What we've decided, in the short term, is that it's best that all data from interactions remains on the local robot itself. It never goes beyond the robot: it doesn't go into the cloud, it doesn't get stored on a remote server. So, in a very simplistic way, you could switch off the robot, take out its hard disk, destroy it, and your data would be gone. At least you have that simple mechanical recourse to solve the problem. That's not going to work as a scalable solution, though. How do we protect data? How do we have good governance over that data? These are big problems, and they need to be solved.

We could talk all day, but we have to leave it there. Thank you so much, Will Jackson, CEO and founder of Engineered Arts. Thank you for your time. And more to come from the AI for Good Global Summit, coming up.