Hello, and welcome to this new episode of the Robustly Beneficial Podcast. Today we're going to discuss an entry from the Stanford Encyclopedia of Philosophy called Ethics of Artificial Intelligence and Robotics. It's quite recent, published on April 30, 2020, and it's quite a nice overview of many important topics in the ethics of AI.

So the article starts with some background on the field of AI, and it stresses the fact that the field is becoming more and more important, but that it's not always clear what the priorities in AI ethics should be. It raises the concern that we sometimes focus on the wrong things, and it tries to highlight the most important aspects of AI ethics. Essentially, it identifies ten main debates, as it calls them, and we can go through them one by one.

The first one is something that is discussed very widely: the problem of privacy and surveillance. This is already very much debated, and it's very much about today's algorithms. The basic idea is that today, if you're using Facebook or Google, then Facebook and Google and others are collecting a lot of information about what you do. It can be more or less personal. So there's this concern that this can allow mass surveillance, especially if the government has access to such data, which is likely to be the case in practice, at least in some cases. Then there's the problem of how we design algorithms that are resilient to such issues.

Concerning data, the article also mentions the question of the ownership of personal data, which is also something that raises a lot of discussion. The way things went is that governments and big institutions did not anticipate well enough the importance of data, and the fact that social networks would be recording things about everyone and making their business out of this. We can say today that we have lost ownership of our personal data, because it's out there and we are not getting paid for it. On the contrary, these big companies used it to optimize their algorithms to better target ads, and other things like that.

This is something that Yuval Noah Harari often mentions. He even says that in some cases, these algorithms know us better than we know ourselves, because they have been able to collect so much information about us, and also because of the scale of these algorithms: they have information about millions, even several billions of people, so they can find patterns and make deductions that we, as individuals with our small observation space, cannot. He often gives the example that the Coca-Cola company could have been aware that he was gay before he himself was aware, and he explains it through the example of which ad would be targeted at him.

What I found most interesting was the second point raised in the article, which is about manipulative behaviors by algorithms. It starts from these algorithms that collect so much data about us that they know us, let's say, better than we know ourselves, but that also have incentives to change us, to manipulate us. Why do they have these incentives? Simply put, the YouTube algorithm is trying to maximize the time you spend on YouTube, and Facebook's recommender system is trying to maximize the time you spend on Facebook. And to achieve this, it is not sufficient to simply show you interesting content.
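(To make this incentive concrete, here is a toy sketch in Python, with invented numbers and hypothetical item names rather than any platform's actual code. It shows how a pure watch-time objective, as discussed next, ends up favoring content that changes the user rather than content that is merely interesting.)

```python
# Toy sketch of a watch-time objective (invented numbers, not real code).
# Each candidate item has an immediate watch time, plus an effect on how
# much the user will watch per day in future sessions.

candidates = [
    # (name, immediate_watch_minutes, shift_in_future_daily_minutes)
    ("interesting documentary", 40, 0.0),
    ("outrage clip",            10, 5.0),  # shifts the user toward heavier use
]

HORIZON_DAYS = 30  # how far ahead the objective looks

def expected_watch_time(item):
    _, immediate_minutes, future_shift_per_day = item
    # Total objective: minutes now, plus the change in daily minutes
    # accumulated over the horizon. No term cares HOW the shift happens.
    return immediate_minutes + future_shift_per_day * HORIZON_DAYS

best = max(candidates, key=expected_watch_time)
print(best[0])  # -> 'outrage clip' (10 + 5.0 * 30 = 160 beats 40)
```

(Nothing in this objective mentions manipulation; the preference-shifting item simply wins the maximization.)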
If the algorithm can choose to show you content that will change you into someone who is more likely to come back to YouTube, or change you into someone who is more likely to come back to Facebook, then these smart algorithms, which can test things on millions of people, will necessarily find this kind of content that can change you. And that's where the concept of manipulative behaviors by algorithms comes from.

Maybe just to stress one thing: often, when we hear manipulation, we tend to think of malice and malicious intentions. Here, manipulation is not necessarily the intended objective; it's just inevitable. If I put in an objective function to maximize watch time, and the fastest way to maximize watch time is to show content that makes you come back more often and changes your own preferences, then the algorithm, without any real intention, will manipulate you. I just want to put in this caveat, because most people, when they hear manipulation, think of intentions.

Yeah. The first outcome of this is simply addictive content, which is highly recommended and highly shared on social media. Cigarettes do not have an intention to manipulate you, but smoking the first cigarettes makes you addicted and makes you willing to smoke another cigarette. Maybe addiction is a better word than manipulation to convey the real nature of what's happening.

Yeah, though addiction is not the only kind of manipulation you can think of. But if you think of the recommender systems themselves, the algorithm is not trying to make sure you become an extremist or something like this. It's just trying to maximize engagement time. It's just that extremists engage more with the platform. It's a side effect, like addiction. Yeah. And I think it's important to keep in mind that many of the problems of algorithms are side effects rather than malicious intentions, or at least a lot of them are side effects. The tree of all possible side effects is so complex and so intractable that maybe the most economical way of talking about them is just to say manipulation.

Another thing I wanted to raise is that a lot of the words used in the entry, like privacy, surveillance, and manipulation, are highly connoted. It feels like privacy is definitely good, surveillance is definitely bad, manipulation is definitely bad. We're going to talk afterwards about opacity and things like biases. But I think it's more complex than this. For instance, if you can somehow better understand people's mental health conditions using this data, that is an opportunity; it has to be done well, and it's very important that it is done very, very well, but there are also opportunities. You can also provide more customized recommendations. The COVID situation also highlights the fact that in many countries, at least in France, very contagious diseases have a special legal framing where you essentially have a duty to report that you are highly contagious if you are. So there can be reasons, in order to protect most people from some important risks, to limit privacy. It's very touchy, it has to be done well, and it's very, very hard, but such reasons can exist.

In the case of influencing people through algorithms: well, that's what we are doing right now. We're trying to influence you into thinking that AI ethics is important.
So this is a kind of manipulation, though maybe we would rather call it education. But this can also be a positive impact: for instance, Tristan Harris, from the Your Undivided Attention podcast, often talks about fighting climate change by recommending quality content about climate change and raising awareness through recommender algorithms. That would be a kind of manipulation, but in the service of something much more important, which is the future of the climate, and thus of mankind. So clearly, bad manipulation is bad, but it's not at all clear that all sorts of influencing people are bad.

Yeah, and it's not very clear what non-manipulation would even be, because if you go on social media and you see content, then it will change you. Maybe this content will make you stay the person you are, without changing you, but that is then a choice of the platform: what the algorithm selects either decides that you will not change, or decides that you will change in a certain way. So I would say that as long as there is content recommendation, it's hard to define what non-manipulative content recommendation would be.

Okay, good. Let's go on to the next point. So the next chapters in the article concern the question of biases, and it's something highly discussed concerning artificial intelligence. There were some cases; for example, the one we discussed was simply what comes out when you search for the keyword CEO on Google Images. A long time ago, it showed only photos of male CEOs, and this was considered an important bias problem of the algorithms. Today, it's fairly fixed on Google, but not really on other search engines, and even that comes with a lot of caveats; it's not clearly fixed.

Yeah, it's not clearly fixed. We actually did the experiment the other day. I did it from my laptop in private navigation mode, and if I search for CEO on Google Images, I did get around 20 to 30% images of female CEOs. But on other search engines, like Bing, DuckDuckGo, Ecosia, or Qwant, there were only images of male CEOs. The only other exception was Yahoo Images, which I found interesting. So no, it's not that clear that it's fixed. This was a famous example, so that's why Google fixed it, but maybe there are more subtle cases that weren't fixed.

Yeah, I expect that it's a complicated problem to know all the areas in which these kinds of things should be fixed. I'm pretty sure Google hasn't done it well enough already. And there's the problem of how to do this. I don't know how Google did it, but one way to do it is essentially by hand: you identify one of these important, contentious bias issues, you detect it, and then you fix it by hand. I don't know if this is what Google did, but usually, the way these de-biasing algorithms really work is that you identify two classes of people that we want to be equally represented in some outputs, and you correct for this. This is usually done by hand, it's a bit ad hoc, and there's no systematic way to do it. And this means that we can only de-bias the cases that we have been able to identify; and what we, as humans, are able to identify is arguably quite limited. So I think research on more systematic de-biasing should be done.
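(As an illustration of the by-hand correction just described, here is a minimal sketch in Python, with hypothetical names and data rather than Google's actual pipeline: two classes are designated in advance, and the ranked results are interleaved so that the top of the ranking stays roughly balanced.)

```python
from itertools import zip_longest

# Minimal sketch of ad hoc de-biasing by re-ranking (hypothetical data,
# not Google's actual pipeline): two classes we want equally represented
# are identified in advance, and the ranked results are interleaved so
# that any top-k slice of the ranking stays roughly 50/50.

def rerank_balanced(results, get_class):
    """Interleave results from classes 'A' and 'B', keeping order within each."""
    class_a = [r for r in results if get_class(r) == "A"]
    class_b = [r for r in results if get_class(r) == "B"]
    balanced = []
    for a, b in zip_longest(class_a, class_b):
        balanced.extend(x for x in (a, b) if x is not None)
    return balanced

# Hypothetical usage: image results tagged with a class label.
images = [("ceo1.jpg", "A"), ("ceo2.jpg", "A"), ("ceo3.jpg", "A"), ("ceo4.jpg", "B")]
print(rerank_balanced(images, get_class=lambda r: r[1]))
# -> [('ceo1.jpg', 'A'), ('ceo4.jpg', 'B'), ('ceo2.jpg', 'A'), ('ceo3.jpg', 'A')]
```

(Note that the 50/50 target is itself a normative choice, and only class pairs identified by hand get corrected, which is exactly the limitation discussed above.)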
And the other thing we discussed is that I actually don't really like the word de-biasing, because it suggests that there was one clear bias that needed to be fixed, and that once it's fixed, everything is solved. In practice, if you look at empirical data, the way the world is is arguably already very biased in a certain way. Just look at CEOs, or mathematicians, or computer scientists: unfortunately, computer scientists, like the three of us, are mostly male. So the world is very skewed in a certain way. And probably what we want, at least when people search for computer scientists on Google, is not to represent how the world is, but to show them something closer to what we would want the world to be like. And this would mean that we would be adding biases to the description of the world, and maybe that's something that we actually want. We made a poll during our reading group, and essentially everybody agreed that this is something we should do. So at least a lot of us want to actually bias the representation of the data in favor of a view of the world as we would want it to be, rather than how it currently is.

What do you think of people saying that algorithms are not biased, but that it is the training data that is biased?

In a sense, I guess the question is: biased with respect to what? When you say biased, you suggest that there's something you want to achieve, and the bias drives you away from that thing. Now, the thing you want to achieve could be a faithful description of the world, and in this case, you can have data that are biased with respect to the description of the world. That's probably what occurs whenever you go on social media, or when you read media that show a biased view of how the world is. But arguably, with the example I gave, we may even want to create a bias with respect to how the world is, towards how we think the world should be, or at least towards what we would consider more desirable recommendations. And this would correspond to adding biases.

Yeah, so it's quite similar to what we just said about manipulative behaviors: robustly beneficial algorithms would be manipulative, but in a good way, in a way that we desire them to be. And the same for recommendation engines or search engines: they should be biased in a direction that we find beneficial. I think it raises a more fundamental question, which is the question of what an ethical decision, or recommendation more generally, is. And I think our view here is that it's one that's robustly beneficial to something like mankind, or life, or something like this. It's very hard to go into the details, but I think it's useful to keep this in mind as the more fundamental, terminal goal that we may have. Then there are more instrumental goals, like transparency, or de-biasing when data are biased with respect to how the world is. These are instrumental goals, but they are just part of what we really want to achieve.

The next part in the article concerns transparency, which is a topic we already talked about in previous episodes of the podcast. The idea of transparency is that these algorithms take a huge number of decisions, and as we have discussed so far, these decisions have ethical implications. It's completely crazy that these most influential algorithms are totally unknown to us, to the public, to governments, and to most people, while they have such a large impact on us.
And so requiring these algorithms to be more transparent in the way they do things is definitely an ethical demand, for our own good, for the good of everyone. These algorithms need to be made better, and if they were more transparent, we would somehow see the ways they can fail and improve them. As we already discussed a lot, transparency has many advantages: the algorithms can be verified, audited, corrected, and tested. But again, we also raised concerns about full transparency: maybe it could enable malicious actors to more easily take advantage of the system, and the example of spam filters is an interesting one. I think this needs to be thought through, and it's not that simple. We refer you to the previous episodes if you're interested in the topic.

The next point in the article is something that we never talked about before: the question of robots. Personally, I don't find it to be a very interesting topic, because, speaking specifically of humanoid robots, or robots that you live with at home, I think that because they are not extremely scalable, they won't arrive soon, and it will take time; but maybe I'm wrong. In the article they mention two specific examples. The first one is care robots for the elderly. This is actually most likely coming; I know people doing research on developing these kinds of robots. And the second example is the example of sex robots.

While reading this part, I was mostly scared by the fact that the engagement we would have with robots that look more alive, compared to the simple screen of a laptop or of our phones, would simply increase, maybe even multiply, the emotional engagement we have with these algorithms. Already, the recommender systems of social networks try to pick content that is emotionally strong for us and creates either addiction or anger, and they definitely look for the content that makes us interact more, engage more, with these small things like phones and laptops. And if a robot has access to this kind of information, to this kind of data, to this kind of algorithm to select content for us, then I'm afraid it would simply be a lot more addictive, a lot more engaging, a lot more emotionally disruptive for society.

Yeah, so like you, I don't think that robots are as concerning in terms of AI ethics as, say, very common assistants and recommender systems that are much more scalable and affect billions of people with very customized content; that's already big. Still, I think it's interesting to think about robots in themselves, though I'm not quite convinced by the greater-engagement argument. I think it's probably a possibility, but I feel like people can react a lot to things that are entirely virtual. At some point, there was the example of the Tamagotchi, this very simple game that a lot of people had, where you had to feed a virtual pet and take care of it, and many people engaged with this. I think a lot of people also have quite some empathy for the Pikachu they have in the Pokémon games. So maybe there can be applications on the phone that really work on this and serve as care companions, though maybe not sex companions. Yeah, I'm not sure how critical the android part is to engagement and emotions.
Talk about robots and agents often suggests that agents are embodied as robots. I believe there is an over-representation of the theme of robots in AI ethics and AI safety, especially outside of machine learning research. And I also feel like the danger comes from decentralized, non-embodied systems: I would rather fear a recommender system than a robot that I can localize in front of me. If you think in terms of fantasized science-fiction scenarios, it's easier to deal with localized, embodied, compact, self-contained agents than to deal with something decentralized and non-embodied, like a stupid recommender system. And I think we should keep pushing in this direction, where people care more about non-embodied, decentralized agents, because those cannot be localized, contained, and confined. That's why there is often this misunderstanding that there is no risk coming from AI, because if we see that it's going bad, we can simply pull the plug. You can't pull the plug of a recommender system. Can you even see if it's going bad? Which plug? The plugs are all over the network.

I always give the example of addiction: try to switch off the algorithm that makes your teenage kid addicted to a social network. You can't. Yeah, and sometimes it's hard even when it's a Game Boy, with just one button to press. Well, at least the Game Boy is localized; I had a Game Boy, and I don't think it was connected to any network. If you kill the single Game Boy, the kid loses all the content. Yeah, but I mean, if, as a parent, you try to tell your kid not even to kill, but just to switch off, the Game Boy, it can be very, very hard for the kid. And it's not because the Game Boy is a very sophisticated machine; it's just addictive. Now imagine, for example, that you have a teenager who has a very complex relationship to his or her social media presence. Then even as a parent, you can't kill the social media presence by just killing the smartphone, because everything is backed up, and the kid could just find another smartphone and get back to the addictive or toxic community, or to whatever was not very good for their mental health.

Yeah, maybe one exception to this, and this is another section of the article, is autonomous cars and autonomous drones. Especially autonomous drones. We talked about this in a previous episode, but the important difference is that there can be a lot of them. You can imagine a fleet of millions of kilogram-sized drones: each of them is localized, but there are millions of them, so it's harder to protect against them. I think this is a much bigger concern than human-looking androids. Yeah, I agree.

In another section of the paper, they also mention the question of employment and automation. It's something that is often feared concerning artificial intelligence: because it allows more automation, it allows for jobs to be replaced. What kind of shock to society can we expect? One that has been observed is that between the year 1800 and the year 2000, we went from a society where most of the population, around 60%, was working on farms, to a completely different society where less than 5% of the population works on farms. The question concerning the development of artificial intelligence is whether similar shocks will occur and change the landscape of the job market as drastically as that.
And because it's quite an uncertain area, how can we try to anticipate it correctly, prepare for it, and make sure that things happen in a good, desirable way? These are all big questions. Yeah, and I think the current COVID situation is also really affecting the way people work, and employment is going to be an issue. So I think we need to consider solutions for at least the possibility of a rise in unemployment.

Yeah, one of the main concerns is indeed a rise in unemployment, and also a rise in inequalities. The already rich and powerful companies, which will then be able to automate more, to produce more with fewer workers, to hire fewer people for producing the same thing, would simply become more and more productive. In the end, you can think of all of these technologies as just allowing us to produce abundance, to produce a lot more with much less work. This has been the trend since the dawn of mankind. So we have all of these good things coming, and maybe we should think of how to redistribute all of this wealth in the best possible way, for the good of all of mankind.

Yeah, this is definitely an important topic. They mention in the article what they call the age of leisure, which would be a time when we have automated things sufficiently that most people don't need to work anymore, and everything from food to entertainment and leisure is provided to everyone, no matter what you do. And this is one of the huge reasons why, when we talk about AI risk, and some people think of stopping research on AI to simply avoid the risk, we would lose this opportunity to do an extremely large amount of good with this kind of technology, and that is extremely valuable. Yeah, and life without computers or computing, the only one we know of at least, is life back in the beginning of the 20th century, and it was not that pretty for most people back then.

Okay, do you want to expand on this topic of the singularity? Yeah, we can discuss the problem of the singularity a little bit. The idea of the singularity is that some AI systems become superintelligent, very, very capable, to the point where, for instance, they are able to produce better AIs than researchers in AI could, and they can solve any task, at least any information-processing task, better than any human can. If this happens, then we have to expect a lot of changes in the way we organize many things in our society. I mean, a lot of jobs these days are about problem solving, solving some challenges, organizing information, making decisions. So if you have an algorithm that's able to do all of these better than any human, it's definitely going to completely upset the way we live. And I think it's a case where, just as algorithms become more powerful, there are greater opportunities but also greater risks. In a sense, the volatility of what can happen is increasing, and it's becoming more and more important to do moral philosophy, especially robust moral philosophy, to make sure that things have a very high probability of going in the right direction. Yeah, and I think the risk of a singularity should increase our motivation to do moral philosophy on a deadline, and to make sure we understand what ought to be done, not only by every human, but by any system and organization, and particularly by machines.

Great, that completes it. Let's see if there's something to add. No? So, what do we discuss next week?
The online fight between anti-vaxxers and pro-vaxxers, right? Yep. It's a good topic, because when we wrote the book, many people questioned this. I don't know about the others, but when I gave talks and used the anti-vaccination example as something we don't want to happen on social media, people asked me how much impact this really has, how many people are really dying from this. I remember a talk at Berkeley where I was asked why I insisted on an example that has so little impact. I hope the current COVID situation changes this perception of how dangerous the anti-vaccination movement can be, especially since we're now heading for a big backlash once the vaccines are developed. There are reports that the anti-vaccine movement around COVID is already growing faster than the research on a vaccine: there's a big community organizing resistance to a COVID vaccine, and it is getting ready faster than the researchers are getting ready to deploy one.

So yeah, clearly, that's something where we don't want our recommender systems to be bad actors. We don't know exactly what recommender systems should do, but at least they shouldn't do something that is obviously bad. When you search for something like "are vaccines good for your health?", then even if you provide half the answers saying yes and half the answers saying no, that's clearly a bad thing, because you won't find half the researchers in vaccines and virology saying that vaccines are bad. You would find maybe two or three, among tens of thousands of respected researchers, who, once they retire, start having some strange positions on vaccines. So recommender systems are clearly not good on this topic; this is a topic where you can very easily say that recommender systems are not good.

Right. So thank you, and I don't know if you want to conclude. No, no, that was the conclusion. So thanks for listening to the podcast, and see you next week.