Welcome to the AI for Good Global Summit. My guest is Maja Matarić. Good afternoon. Hello. Maja is a professor of computer science at the University of Southern California. Maja, thanks for joining us. It's my pleasure. Tell us, first of all, what is your work all about?

Well, I'm trying to do AI for good, and in particular, personalized robotic companions for good, which may seem like a pie-in-the-sky prospect. But in fact, for the last 20 years, we and others have been developing a field called socially assistive robotics. The idea is that we can create very inexpensive little robots, these days for as little as $300 US, that help you with whatever your specific challenge is. Your challenge might be that you have a child on the autism spectrum who needs help developing social skills. Or you might be a survivor of a stroke who needs to relearn how to use your arm. Or you might be an elderly person with Alzheimer's disease who is losing contact with the world and just needs something to keep you going. So no matter what the individual challenge may be, we're interested in developing these personalized, socially supportive agents that are actual physical robots. That's what we do.

OK, interesting. But what sets you apart from your competitors?

Well, there are a lot of technologies available today in the form of apps or digital interfaces. But we as humans are wired to be very social in the physical world, so we are most motivated and rewarded when we interact with other humans. As much as most communication today happens through, say, messaging, that is unfortunately making us sadder and more depressed. What makes us generally happier is face-to-face interaction with other humans. But sometimes that other human isn't there, because they're working, sick, or unavailable. Or, in many cases, people are simply lonely.
And so, when that human is not available, that's when we bring in the socially assistive robot.

You've spent a couple of days here at the summit. How important is it to have a platform like this, where people can talk about AI and its advancements?

I think it's incredibly important to talk about what's happening in AI right now, because this is an amazing, pivotal, existential time. And I don't say that lightly. I've been in the field for about 30 years, and I've never thought that before. People have often said, oh, this is it, and I never believed it, and it wasn't. But this time it really is, because we have a technology that is immensely powerful, that we don't really understand, and that has major implications for the future of work and the future of well-being. So we really need to take it seriously.

And you talk about that. In fact, AI has gathered such pace, particularly in the last year. Where do you see it going in the mid to long term?

I'm concerned about the pace of AI development. It is already happening, so it's not a matter of stopping it or slowing it down. But it is a matter of understanding that we have developed something we don't know how to control and don't fully understand. That is the first time this has ever happened with a machine of this complexity. So what we need to do is bring together the technologists who are developing this, along with policymakers and the general public, and we need to put regulations in place sooner rather than later, and then adapt and refine that regulation. I'm actually very concerned about what's happening in the United States, where I come from, which is home to most of the companies generating these amazing technologies. And the technologies are amazing, and they have amazing potential for good. But they also have tremendous potential for not so good.
And because the companies are not incentivized to regulate, and because no one has really figured out how to regulate this, we are very slow in the United States on this front. And we need to move faster. We can make mistakes and fix them along the way, but if we do nothing, that is the biggest mistake.

What are your thoughts about the guidelines you've just spoken about? Obviously more needs to be done, as we've been hearing from many people at the summit.

Well, beyond just talking, I'm all about doing. That's why I create physical machines; we have to act in this world. So I think, first and foremost, we should not be releasing things to the general public anymore until we understand better how they work, because we have had too many examples of strange, unexpected behaviors. That's one problem. Even in some of my own work, collaborating with Google, we are working very hard to understand what these behaviors might be in order to corral them. But we should not keep putting more powerful things out in public. I was listening to Harari just yesterday here at the summit, saying something I completely agree with: there is a difference between development and deployment. It's hard to stop development, but that is different from deployment out in the world, in the open, where there is no control. So that's one thing that needs to happen. The other is much more understanding of what is being publicly deployed. If you're going to release something, you need to understand how it works, and if you don't understand how it works, you need to pull it back. For example, something quite scary to contemplate: in the United States, we have already had a case of a large language model, like ChatGPT, being used to place an advertisement on the web to hire people.
And those people were hired to do something in the real world without knowing they had been hired by an AI. That's really concerning, and we should worry about these implications. I will also add that, in the long run, we should worry about the implications for democracy and truth, for knowing which information is real and which is fake, and about the implications for work. I'm very concerned about the creative arts: music, film, digital arts.

In what way?

Well, I'm concerned because the world of artists has always been difficult; getting a job, getting a gig, is very hard. And now so much more of that work can be done by AI. We can argue that it's derivative, but so much human work is available for AI to train on that what AI creates is quite impressive. And it's cheap. So if we just hire the AI, what happens to human artists? We could say this will help human artists, but economies work the way they work: the cheaper, good-enough art will be bought and used before the more expensive human art. So there is a real concern there, I think, that we should think about.

Maja Matarić, thank you so much for your insights. Thank you for your time. Thank you. And more to come from the AI for Good Global Summit here in Geneva.