Welcome to the Robustly Beneficial Podcast. Today we will talk about a paper called WeBuildAI, a participatory framework for algorithmic governance. It originated from CMU, Carnegie Mellon University. Yeah. Which is a university in the US, one of the main universities in computer science. And this is my favorite paper of 2019. I was extremely excited when I discovered this paper. I think it's extremely well written, the introduction is really amazing, and the content of the paper, what they did, is extremely complete. Much more complete, I'd say, than everything else that I've seen so far. I think it's really a great framework that takes into account a lot of the difficulties of algorithmic governance. By that, what they mean is the design of governing algorithms, I guess. So what they have in mind is algorithms that are used by or for many users. This can be garbage collection for a city, or things like this. But they also have in mind, there was one sentence in particular on which they put quite some emphasis, about the role of social media algorithms, which I found very exciting, that they had this in mind. But the use case they present is also a very cool one. It's about food donation. Essentially, the problem of food donation is that you have food donors who have leftovers, typically supermarkets. And they want to give this food away as a donation. But if you are this company that wants to give food away, well, you don't know to whom to give it. And so there's this intermediary NGO called 412 Food Rescue, I think, something like this, that is essentially organizing the dispatch of the food, to give it to organizations in need of food. And you can think that, well, first of all it's a very nice application, a very humanitarian application.
But it does raise a lot of ethical dilemmas. Essentially, every time a food donor says, I want to give this away, there's this question of who this food will go to. And you can think of each decision as a small ethical dilemma. Maybe it's not exactly a question of life or death, but this happens a lot every day, and so the accumulation of these decisions makes it an important challenge. And what's interesting is that so far this challenge was resolved by a human worker who was basically choosing to whom the food would go. But what they wanted to design is an algorithmic solution, because they had the intuition that an algorithmic solution could improve both efficiency and equity, or fairness; I think they use both terms. By efficiency, what they mean is the food being transported over minimal distance. So you can think of this as a requirement for less gas consumption, for instance, but also to optimize the human resources, because the dispatch is done by humans. And if they can dispatch more food with less work, then there's a gain. Also, these decisions were taken by humans, and they report in the paper that each volunteer had to go through on the order of a hundred questions per day about where this food should be dispatched, and this takes too much time. I imagine that it takes at least half a minute to decide that this food should go to this place, based on all the features that are important for the decision. So nearly one hour per volunteer per day was required, and it's also not scalable. If suddenly we want to make ten times as many decisions, then we would definitely need ten times as many humans to do it. So that's also one place where replacing the human decision with an algorithm, if the algorithm makes decisions as good as the human's, is definitely already one way to do more good.
Yeah, and also there's definitely, at this point, the question of fairness, because if a human is taking the decision, maybe he's going to have some bias. Probably he's going to have some bias. One of the biases they saw is that a very large proportion of the donations were given to a very small proportion of the possible recipients. The donations were not very spread out. And somehow, when they asked participants later, this is not something the participants wanted. But it's something they were doing in practice without truly realizing that it was happening this way. Yeah, it's very hard to think about the things you're not able to think about. Yeah, that's very interesting. And in a sense, the whole goal was to design an algorithm that was better, both more efficient and more fair. But then this raises the question of what it means to be fair. We all have an intuitive feeling of what fairness might mean, but it's not that easy. And actually there's a lot of research on what we mean by fairness. There's this concept called individual fairness, there's the concept called group fairness, and there are other concepts of fairness. And the approach so far in the research on fairness, of which there has been quite a lot, is that essentially some scientist tries to formalize this concept and comes up with one theoretical definition. And then he develops algorithms so that they can be fair in this sense. But is this the right definition of fairness? And thus, will the algorithms actually be relevant for the actual definition of fairness? It's not that clear. And what's more, in fact, looking at society shows that there's not one concept of fairness; there are different concepts of fairness, and different people have different concepts of fairness. This varies culturally.
And so one fair solution designed in Silicon Valley may not be fair in the sense of people in Switzerland, and may not be fair in the sense of people in China, or wherever. And so one idea here, which is a very interesting idea, is to adapt the concept of fairness to the actual users. You might call it, well, I like to call it user-driven ethics, as opposed to software-developer-driven ethics, I guess, or company-driven ethics. And it sounds like it should be more relevant. In a sense, that's maybe what most of us feel intuitively is better, to include people's notion of fairness, but it's also more efficient. Even if you're a company that just wants to deploy your algorithm, even if your goal is to make money, if you want the algorithm to be used, it's actually probably better to give it the ethics of the people who are actually going to use it than to impose your own ethics. Maybe it's a good time to introduce and describe the framework they had in the paper. So the framework they propose is in, we can say, two parts. One part is where each individual interacts with an algorithm, and the goal of this algorithm is to learn the preferences of that individual. Each person gets one specific, let's say parameterized, algorithm that will act for them later. And then the second part is where all the algorithms from all the individuals come together, for example by casting votes, to take decisions collectively. And this is how they implemented, in their solution, what you were describing, how to make algorithms more fair. It somehow gets everyone's opinions together, and the way to get this fast and at scale, to make more decisions than we could if we were just querying humans, is that the humans train the algorithms, and then the algorithms can cast thousands of votes per minute.
Yeah, because you can imagine, they said each volunteer has a hundred choices to make every day. Multiply that by the number of volunteers, and that makes, I don't know, thousands of decisions to be made every day. And maybe, ideally, if you think democracy is the ideal for this kind of decision, you would want a democratic vote on each decision. But the day would be over before you started having enough answers. So you don't want to actually hold a democratic vote on each choice. And the idea is to replace each voter by what I like to call an algorithmic representative, some algorithm that will play the role of the voter and, hopefully, vote like the voter. And if you think about this, it's very similar to representative democracy, which is the norm in many countries in the world these days. The problem with writing laws, for instance, is that, yeah, we could try to do it purely democratically, but if we tried to write a text of law all together, like eight million Swiss people writing together, it would be a huge mess. Partly because most people just don't understand everything; I surely don't understand enough of the law to be writing laws. Instead, we have representatives who write the laws for us, and we vote for the representatives. But in representative democracy, these representatives are humans, and there are not that many representatives for the whole population, which means it's not that clear that a representative will actually represent adequately the people who voted for them. When you're voting for a presidential candidate, probably you don't fully adopt all the ideas of the presidential candidate. So the presidential candidate does not fully represent what you want.
And the hope here is that if you use algorithmic representatives, they can be fully customized so that they actually try to learn your preferences and act according to your preferences. Yeah, so the way they did this in the framework is that the user and their algorithmic representative would iterate over time. There were two ways to build the algorithmic representative. One was using machine learning on comparisons that the user would input. And the second one was purely rule-based, where the user would write down the parameters of the algorithm and decide what computation makes the decisions. In any case, the algorithms would be trained by the users. Then the users would see what result the algorithm gives: on this comparison, the algorithm gave such a result; is that in line with your opinion? And if not, then the user was able to continue training their own algorithm. And also, iterating between the two methods would give the users a clearer view of how this thing works. Yeah, maybe to clarify what we mean by a comparison: typically, you have this donor that wants to give food, and here are two options; which one do you think is better? And you have all the features, the parameters, of the different options. Like, for this first choice you have data such as the last time this recipient was given food, the distance to travel, how big the organization is. Also, I think, the economic status of the region: is it an underdeveloped area of the city, and things like this? And based on this, every time you say, well, I prefer to give it to this one or to that one. And then you have this machine learning algorithm, which is not very sophisticated; it's like logistic regression. We've discussed a bit of this in a previous podcast about preference learning from comparisons.
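The comparison-based learning just mentioned can be sketched as a tiny logistic regression over feature differences, in the Bradley-Terry style. This is only an illustration: the feature names, data, and training settings here are invented, not taken from the paper.

```python
import math

# Each option is a feature vector (e.g. travel distance, days since the
# recipient last received food). The representative scores options linearly,
# with P(a preferred over b) = sigmoid(w . (a - b)); we fit w by gradient
# ascent on comparisons where option a was the one the user preferred.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train(comparisons, n_features, lr=0.1, epochs=500):
    w = [0.0] * n_features
    for _ in range(epochs):
        for a, b in comparisons:             # a is the preferred option
            diff = [ai - bi for ai, bi in zip(a, b)]
            p = sigmoid(sum(wi * di for wi, di in zip(w, diff)))
            for i in range(n_features):      # push w toward predicting "a wins"
                w[i] += lr * (1.0 - p) * diff[i]
    return w

def prefers(w, a, b):
    return sum(wi * (ai - bi) for wi, ai, bi in zip(w, a, b)) > 0

# Toy data: this user consistently preferred the closer recipient
# (feature 0 = distance in km, feature 1 = some other attribute).
data = [((1.0, 0.3), (4.0, 0.9)),
        ((2.0, 0.1), (5.0, 0.2)),
        ((0.5, 0.8), (3.0, 0.4))]
w = train(data, 2)
print(prefers(w, (1.0, 0.5), (6.0, 0.5)))  # True: shorter distance preferred
```

Once trained, `prefers` is what lets the representative answer new dilemmas on the user's behalf without querying them again.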
But yeah, you have this one that learns essentially from your comparisons, and the other way is that you try to describe how you think about these things. Yeah, and so once you have these algorithmic representatives, just like in a democracy, the representatives will then vote whenever there's a dilemma, whenever there's a new case of a food donor that wants to give away food. The algorithmic representatives will vote to determine where the food will go. And the way they vote is through a voting system. The voting system they used in the paper is called the Borda voting system. The Borda voting system is a system in which you rank the different options. You say, well, this is my favorite option, this is my second favorite, this is my third favorite, and so on. I don't remember, but maybe they ranked the top 10 or something like this. And then you give points: say, 10 points to the first option, nine points to the second, and so on. And then you just add up the scores, and the option with the highest number of points wins. This is the voting system used, for instance, for the Ballon d'Or, the golden ball award in football. And the advantage of this is that it's very simple. So for most people, well, if you don't know anything about social choice theory, I guess it sounds quite compelling. It sounds maybe like the way to go. Well, there are caveats about this voting system, and we can talk about those later on. But what's nice is that most people felt that it was fair and that it was something they would be willing to go with. And also, just like in the case of the algorithmic representatives, there was an interface to explain to the users what the impacts of their representatives' votes were. Which, if you think about it, is quite nice. We don't really have that for representative democracy, at least in most countries in the world.
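The Borda tally described above is simple enough to sketch in a few lines. This is a toy illustration, not the paper's code, and the option names and ballots are made up.

```python
# Toy Borda count: each ballot ranks the options best-first; with n options,
# the top choice earns n-1 points, the next n-2, and so on; highest total wins.

def borda(ballots):
    scores = {}
    for ranking in ballots:
        n = len(ranking)
        for rank, option in enumerate(ranking):
            scores[option] = scores.get(option, 0) + (n - 1 - rank)
    winner = max(scores, key=scores.get)
    return winner, scores

winner, scores = borda([
    ["A", "B", "C"],   # each ballot is one representative's ranking
    ["A", "C", "B"],
    ["B", "C", "A"],
])
print(winner, scores)  # A {'A': 4, 'B': 3, 'C': 2}
```

In the WeBuildAI setting, each ballot would come from one trained algorithmic representative rather than directly from a human.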
And this is also crucial for people to gain trust in the fact that the system actually takes into account what they prefer. Yeah, there's a lot of transparency around it, which is really nice, and that builds trust, so that people actually use this kind of tool more and more. And that was really nice. Yeah, as we discussed, this system is very good for transparency, and it can be trusted better in this sense. One way in which it falls short is that it is possible to game the system. By gaming, we mean anyone who, instead of voting their true preferences, votes something different from their true preferences for their own advantage. So for example, if you know that there are two options competing to be the best and you prefer one over the other, then your optimal vote is to put your preferred option at the first rank, and, even though it's not your least preferred option, to put the competitor at the very last rank, to increase your chances of pushing the vote towards your true preference. Yeah, it may seem far-fetched, but when I first thought about these ideas, I was like, well, if there are algorithmic representatives, then there's no more of this problem of gaming the social choice. But then I thought more about this, especially reading one of the interviews. Because they did the study with the actual organizations: the food donors, the food recipients and the volunteers all participated in the voting system, and they all had algorithmic representatives. And they also decided to share the data: you knew what the volunteers mostly preferred, what the others preferred. And they also interviewed all the participants, and there was this interview of one of the participants.
I don't remember in which category she was, but she was saying something like, it seems to me that people did not give enough importance to fairness. She was thinking that people should be giving more importance to fairness. And probably she did give a lot of importance to fairness, especially to helping people in areas that are underdeveloped. And you could think that if she thought longer about this, well, I don't know if she already did, but you can imagine that if she thought longer about this, she would train her representative to always pick the option in the most underdeveloped area, so that it would be in first place. And then in second, third, fourth place and so on, she would put everything that's very remote, very far, like stupid recommendations, so that in the end this top choice of hers gets 10 points more than the actually credible alternatives. Now that you say it, it's totally feasible with the system that they present in the paper. Yeah, you pointed out yesterday, or two days ago, that if you only use the machine learning model, then it may be a bit hard to come up with such a model. Actually, I think it's not, because for features like how long the car should drive, everyone's preference has a downward slope: the less driving, the better. So you can very easily anticipate that everyone else will rank the bad choices at the back, and you just flip it to a positive slope, so that driving further counts as better. And then you achieve exactly what you described. The choice you care about, say you care about fairness more than others, your fair option would be first, you put a very high score on it, and on everything else you put the reverse slope compared to what is normal. And then it would be a good approximation of the optimal strategy to game the system. Yeah, yeah. Maybe we should not say it.
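The "burying" strategy just described can be shown concretely on a toy Borda tally. The ballots here are hypothetical; the point is only that a single insincere ballot can flip the outcome.

```python
# One voter honestly prefers B > A > C > D. Ranking sincerely, A wins overall;
# burying A (ranking it last despite it being their second choice) makes B win.

def borda_winner(ballots):
    scores = {}
    for ranking in ballots:
        n = len(ranking)
        for rank, option in enumerate(ranking):
            scores[option] = scores.get(option, 0) + (n - 1 - rank)
    return max(scores, key=scores.get)

honest    = [["A", "B", "C", "D"], ["A", "B", "C", "D"], ["B", "A", "C", "D"]]
strategic = [["A", "B", "C", "D"], ["A", "B", "C", "D"], ["B", "C", "D", "A"]]

print(borda_winner(honest))     # A
print(borda_winner(strategic))  # B: the buried ballot flipped the result
```

With sincere ballots A scores 8 to B's 7; after the burying, A drops to 6 while B keeps 7, which is exactly why Borda is not strategyproof.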
But no, I think it's like saying you build this system, it's supposed to be secure, and you're not saying anything about how it works. Security by obscurity, keeping it a black box, is not really safe. I think it's better, especially since we are at the beginning of this; it's really important to better understand all the flaws, and transparency in particular helps here. I think transparency is highly valued. It's a bit weird, because transparency is highly valued but privacy is also highly valued, so they seem, at least sometimes, in conflict. And it's not easy. For most of these systems, I would push rather for more transparency, because it's already very hard to build these systems, and I think it's better to analyze them quickly, if possible mathematically. But you also have to be aware that, because of this, people are going to try to exploit the vulnerabilities of the system. And so instead of just being transparent and assuming that everyone is going to be nice, I think the better option is to be transparent, to assume that not everybody is going to be nice, and to be resilient to this. That's a topic we discussed in the first episode, when we talked about probing black boxes: one disadvantage of making a black box transparent is that people have an easier way to game it. Yeah, and this gives a lot of importance to so-called strategyproof social choice. Social choice is this theory of voting systems. And the Borda system is not strategyproof, in the sense that people can game it. But there are other voting systems that are more resilient to this kind of attack. And it's funny, because, for the whole story, I actually worked on one of these systems, the randomized Condorcet voting system. And when I discovered it, I was very excited about it. So I really promoted it, and it was definitely my favorite voting system.
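As a small aside on the Condorcet idea just mentioned: a Condorcet method decides by head-to-head majority duels rather than by point totals. This sketch only checks for a deterministic Condorcet winner; the randomized Condorcet system the speaker worked on additionally outputs a probability distribution over options when the duels form a cycle, which is not implemented here.

```python
# Minimal sketch of Condorcet-style pairwise duels (illustrative only).

def condorcet_winner(ballots, options):
    """Return the option that beats every other in head-to-head majority
    duels, or None when the duels form a cycle."""
    def beats(x, y):
        wins = sum(1 for r in ballots if r.index(x) < r.index(y))
        return wins > len(ballots) / 2
    for x in options:
        if all(beats(x, y) for y in options if y != x):
            return x
    return None

print(condorcet_winner([["A", "B", "C"], ["A", "C", "B"], ["B", "A", "C"]],
                       ["A", "B", "C"]))  # A: beats both B and C head-to-head
print(condorcet_winner([["A", "B", "C"], ["B", "C", "A"], ["C", "A", "B"]],
                       ["A", "B", "C"]))  # None: A > B > C > A is a cycle
```

Comparing every pair of options is what makes the computation polynomial in the number of options, the cost mentioned a bit later in the discussion.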
But then, lately, because of AI and things like this, I was thinking that maybe it's not the right way to go, because the randomized Condorcet voting system is very good in terms of strategyproofness, but it's not really the utilitarian choice; it's more like the majority decides. There are other flaws too, like the computation time being quite large. Well, it's not that large: it's polynomial in the number of options. So I guess in this case it could definitely be applied. But if you think about the YouTube algorithm, if it has to choose between one billion videos and the computation time is n to the power three, well, one billion to the power three is too many. But then you can definitely have approximations of the randomized Condorcet voting system. Well, yeah, so after reading this paper, actually after the second time I read this paper, I realized that maybe this old favorite voting system of mine may actually again be my favorite voting system, and maybe actually relevant. Maybe I should promote it more than I have in the last few years. So, what about what you said, that it doesn't maximize the sum of the utilities of all the voters, but makes the choice that satisfies the majority? Yeah, so I guess the vanilla version of it should not be the final answer, but I think we should try to build upon these ideas. The problem is that if you don't include a principle of majority somewhere, it's usually not going to be strategyproof. It's sort of a trade-off, usually, but we need to understand it better. So, one interesting thing: I should say that this framework has also been more or less applied to other settings. There was a paper about self-driving car dilemmas. There was another one about kidney donation, the problem where you have to choose to whom you give a kidney. And what's interesting in the case of the car dilemmas is that there are huge cultural differences.
For instance, people in Japan want to save the pedestrians, whereas people in China want to save the passengers. That's what came out of the data; there are lots of caveats, but yeah. And so you can ask yourself how self-driving cars should actually be designed in practice. Should we follow the Japanese ethics, or should we follow more the Chinese ethics? But what turns out to be perhaps more relevant is that you can program different ethics into different cars: maybe you can have a Chinese ethics for cars in China and a Japanese ethics for Japanese cars. And this reflects the fact that people in China probably have strong preferences about things that happen in China, whereas things that happen in Japan matter less to them. And I think we have the same sort of feelings here: for instance in Switzerland, right now we care a lot about this coronavirus happening in Switzerland, and it's harder for us to care about things that are far away. And the voting system should probably try to capture this, the fact that we don't care equally about all problems, so we should give weights. And maybe, if you start to introduce this and combine it with a modified version of the majority principle, maybe the majority weighted by how much they care about the issue, then it starts to give something that has both the somewhat good properties of additive preferences, adding up the preferences of all the people, but also this very important property of strategyproofness. But yeah, that's my intuition right now; there's a lot of research to be done. And after thinking about this over the last couple of days, I felt like maybe that's research I could do.
So in the paper, they didn't really have different problems that concern different people, but still, for this one problem they were solving, food donations, they weighted stakeholders differently. They asked the participants whether the opinions of different stakeholders should be weighted differently: the people donating the food, the companies donating the food, the volunteers that drive the cars, the members of the association. And most participants said yes, the votes of different stakeholders should be weighted differently. And they gave the largest share of the votes to the members of the association, because supposedly they know more about what's happening and they are better placed to make a good decision. Yeah, and less weight to the food donors. And what's interesting is that even the food donors said this. That's really interesting. I guess this is a prior vote on who should vote. But this vote, how was it weighted? How do you weight the vote about how to weight votes? But yeah, that's interesting. That's really interesting, because, well, I definitely want to construct more and more of these frameworks for the more impactful applications, for instance for the YouTube algorithm: what should be recommended, what should be moderated by the YouTube algorithm. I think it's a big question, and definitely, do not apply WeBuildAI straightforwardly; there are a lot of caveats, and we've already talked about a few of them. But I think we should aim towards this. I think it's a good, interesting framework. And if you do this, then the question of who votes becomes really important; it's really critical, and it's really non-trivial. Well, I guess one very bad idea would be one YouTube account, one vote. That would be a very bad idea, because you can imagine people creating a lot of accounts to get a lot of votes.
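Coming back to the stakeholder weighting discussed a moment ago, here is a rough sketch of what a weighted vote could look like. The stakeholder names and weight values are invented for illustration, not the numbers the participants actually chose.

```python
# Hypothetical stakeholder weights; the paper elicited such weights from
# participants, but these particular numbers are made up.
WEIGHTS = {"member": 2.0, "volunteer": 1.0, "donor": 0.5}

def weighted_borda(ballots):
    """ballots: list of (stakeholder_type, ranking best-first)."""
    scores = {}
    for who, ranking in ballots:
        n = len(ranking)
        for rank, option in enumerate(ranking):
            scores[option] = scores.get(option, 0) + WEIGHTS[who] * (n - 1 - rank)
    return max(scores, key=scores.get)

result = weighted_borda([
    ("donor",     ["A", "B", "C"]),
    ("volunteer", ["A", "B", "C"]),
    ("member",    ["B", "A", "C"]),
])
print(result)  # B: the member's heavier ballot outweighs the two lighter ones
```

Here A gets 0.5·2 + 1·2 + 2·1 = 5 points while B gets 0.5·1 + 1·1 + 2·2 = 5.5, so the association member's view prevails even though two of the three voters ranked A first.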
And then you can say maybe every human that has a passport, whatever, can vote. But this still feels like maybe it's not the best way to go. For instance, on questions like vaccines, or right now the coronavirus: should every voter be given an equal amount of votes? Well, it's not clear to me that it should be the case. And maybe if you ask people, maybe people will say no. It's an interesting thing, at least, to ponder these questions. Who should be voting? Who should be given the right to vote? Maybe it depends on the topic. Yeah, I think it's a big research area as well. And it's actually the first time that I've ever seen this question raised in an academic paper. There are a few people who have thought about restricting the right to vote, at least on questions that require a lot of background knowledge, to people who know better. And if you think about it, that's what's happening: the World Health Organization is more influential than many other people when it comes to health. And I think that's a very good thing. And so, yeah, this is the idea of epistocracy. Literally, epistocracy means the power of knowledge, as opposed to democracy, which means the power of the people. And at least in some areas, it seems that epistocracy is an interesting way to go. If you think, for instance, of arguably one of the most reliable sources of information these days, Wikipedia: on the Wikipedia page on Trump, for instance, I think about 200 or 300 people are responsible for 90% or 95% of the page. So most of it is due to a small proportion of the Wikipedia contributors. And that's probably a good thing, because they're doing a better-quality job because of this. Yeah, I think this is a big question.
Yeah, when discussing epistocracy, the main way I think it could go wrong is that the people who are allowed to vote, or to influence the system more, might have very different preferences from everyone else. In that case, they would probably push the system in the direction of their own preferences, and that's where it doesn't go well. Yeah, that's definitely a concern, and it's actually something discussed in the paper as well. More generally, whenever you're trying to collect data about people's preferences, it's very hard to have an unbiased sample of the world, simply because most research is not done in India or China, where many more people live. You typically had this problem on the web: MIT had this website called Moral Machine, where they posted these self-driving car dilemmas, and then there's the data, but it was probably very biased data. The people who actually responded to this kind of research are probably people who have a strong interest in moral philosophy, or in self-driving cars, or in technology. And typically, you can imagine that the Chinese people who answered these questions are probably wealthy people. And it's the same thing for Japanese people. So the reason why the Chinese respondents wanted to save the passengers, and the reason why the Japanese respondents wanted to save the pedestrians, maybe it's that. So maybe we don't even know if Chinese people really want to save the passenger more than the pedestrian, because we only observe a very biased sample of the Chinese population. And so there's also this challenge: let's assume, for now at least, that we wanted to give one person one vote. We would not be able to have a precise algorithmic representative for every person.
But what we might be able to do, using probabilistic inference for instance, is to get an idea of what a representative of a typical Chinese person might be like, by getting some data from some Chinese people and trying to generalize, saying, maybe most Chinese people are quite similar to this person. And so there may also be ways, using this algorithmic approach, to un-bias the data collection, to correct for the fact that the people who respond are usually wealthier and more informed. And that would also be an interesting research area. But yeah, very interesting. Why do you think we can't have one algorithmic representative per person? I'm thinking that, as we all have a smartphone, it could be that 10% of the hardware of my smartphone is used for measuring my preferences and voting for me. Yeah, so you could do inverse reinforcement learning. This gets us to the difference, I guess, between preferences and volition. Well, what was very interesting also in this paper is that, because people got asked a lot of ethical questions, it got them thinking about ethics. And people actually changed their minds, or evolved in the way they were thinking about these dilemmas, as they were queried. And you could also show that they were biased by the way they were queried. Essentially, the machine learning approach, the comparison-based approach, was biasing people towards more emotional responses, because these were real cases from the data. Maybe it was wrong to show real cases. They were not automatically generated? From what I remember of the paper, they knew the organizations, they knew the two options: oh yeah, I know this one, I've been there. And if you include this, you're going to get more emotional, you're going to think about the people, you're going to have this more emotional approach to answering the query.
Whereas if you're trying to describe a process through which you arrive at a conclusion that depends on the features you are given, then you have a much more abstract approach to these dilemmas, probably more reflective as well. And they show that it leads to different conclusions, and people also felt that it was leading to different conclusions. And yeah, in terms of moral philosophy, this is amazing: you get people to actually think about moral philosophy, and you see people having this tension between different parts of their brains, thinking about moral dilemmas in different ways. I thought it was very interesting for this as well. Yeah, in the end, a slight majority of the participants, I remember, found that the machine-learning-based representative was better at predicting their own preferences than the rule-based one. But I guess it's because it was quite complicated to design the rule-based one well, to do what you wanted it to do, while the machine learning, which was automated to predict your comparisons, was better at it. Yeah, I don't know exactly the details of the rule-based algorithm, but I feel like essentially it boils down to, well, you only have about six features. It's very simple: with only six features, writing an algorithm is just giving weights to the different features, I'm guessing. Yeah, and one important thing to mention is that the participants were not familiar at all with machine learning, data science, computer science, any of this. So it seems like a very difficult task to tell someone who knows nothing about algorithms: now train these algorithms that will take decisions for you. It's very unintuitive at first. But they were quite surprised that the participants would quite quickly understand these algorithms, and also, by seeing how the algorithm works, somehow get to trust their representatives.
Plus, I think at least one participant reported that after doing this experiment they looked at the real world quite differently, because they now better understood the idea of algorithms taking decisions on a daily basis. Yeah, you can regard this as a paper about designing AIs, and I think it's fantastic already for that, but you can also regard it as a way to teach about algorithms and moral philosophy all at once. It's amazing, it's absolutely amazing. Participants said they felt the problem was more complicated than they had thought. It was a difficult problem; it got them thinking. With this kind of framework, I personally would love to just play around with the software and learn how I think. It's also psychology, learning how you think about these problems. Yeah, I think it's an absolutely brilliant paper. And the background of the authors: I think most of them are computer scientists, but there were also social scientists among them, and I think this is absolutely amazing. And you can imagine implementing this for educational purposes. If there were a freely available version of the software, and maybe there is, I don't know, and I were teaching a class, especially on AI ethics, I would definitely give this as an exercise: you construct your own algorithmic representative, and you come to understand your preferences, social choice, and so on. Yeah, it seems great. Another topic we often discuss is the notion of volition, which is that there are things we prefer right now without thinking much about them, but if we were calmer, if we reflected more, how would our preferences change? What would we really prefer upon reflection? And it seems that interacting with these algorithms gave the participants a step in the right direction, beyond their raw preferences.
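The "social choice" part mentioned above, aggregating all the individual algorithmic representatives into one collective decision, can be done with a simple positional voting rule such as Borda count (which, if I recall correctly, is the rule the paper uses, though that detail should be checked against the paper itself). A minimal sketch with made-up candidate recipients:

```python
from collections import defaultdict

def borda(rankings):
    """Aggregate ranked lists by Borda count: an option ranked r-th
    (0-indexed) out of n by one voter earns n - 1 - r points."""
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for r, option in enumerate(ranking):
            scores[option] += n - 1 - r
    return max(scores, key=scores.get)

# Each stakeholder's algorithmic representative ranks the candidates.
rankings = [
    ["food_bank", "shelter", "community_center"],
    ["shelter", "food_bank", "community_center"],
    ["food_bank", "community_center", "shelter"],
]
winner = borda(rankings)
print(winner)  # food_bank: 2 + 1 + 2 = 5 points, the highest total
```

One nice property for this setting is that each representative only has to produce a ranking, so very different kinds of models (rule-based or machine-learned) can vote side by side.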
You can even see several levels there. First, what they were doing before these algorithms were implemented, before WeBuildAI: we saw later in the data that it was not great, that it carried several biases. Then when they started using the WeBuildAI algorithms, things must have been slightly better. And after interacting with the system, after reflecting a lot more on what they should really prefer, they came up with a very different solution that was shown to have far fewer of the undesired biases they had before. So it seems like at least a step in the direction of computing volition. I believe this method would still carry a lot of undesired human biases that we don't yet know about; otherwise it would be easy to remove them. So I think other techniques are still needed to really capture people's volition instead of just their preferences. Yeah, and the interpretability is going to be critical. If you want to deploy systems that we trust, even though they're supposed to compute volitions, you can imagine this algorithm where you say, well, I prefer to give to this charity, maybe because you've been there, but you don't realize it's because you've been there that you want to give to it. And then the algorithm says: no, no, if you thought longer, you would actually give to that other charity. It's quite something. And it's not necessarily a problem with the algorithm; you can think of it as a problem with the way people are thinking. So the way I posed this when I gave a talk about it a few days ago is: there's the problem of trust. We need to trust a lot of things in these complex systems. You need to trust that the algorithmic representatives are well designed, and you need to trust that the social choice mechanisms will be applied.
And I gave the example of democracy: in some democracies we tend to really trust the way the voting system works, but in other countries, at least some of the people there don't trust the way the voting works. So it's not easy. And then, maybe you fully trust the algorithm, but there's also the input to the algorithm, what you say to it, and then you have to ask yourself: do you trust yourself? That's not so easy if you think about it, trusting that you're doing the right thing. Theoretically, we tend to think, yeah, I'm doing the right thing, but if you look at actual cases, for instance what should be communicated about the coronavirus, or whom to give to, do you really trust yourself? Is this a problem of intellectual honesty? Should you really trust yourself? And then you also need to trust others, because this is a participatory framework, so the others' inputs are going to shape what's done in the end. If you want to trust the whole system, you also need to trust that others will be thoughtful enough, will reflect enough, and will have the right amount of confidence rather than being overconfident in wrong ideas. So we need to trust the whole thing, which seems extremely hard when you think about it. Yeah. And so, as I said before, I think it's an amazing first step, I've stressed that enough, but if you want this deployed on large-scale systems, systems like YouTube's, there's still a huge amount of work to be done. But I also think there's a lot of potential. So one question that often gets raised when I talk about these things is: okay, this is all nice on paper, but will YouTube actually implement these ideas?
And I'm actually quite confident that YouTube would apply these ideas if there were better tools; right now I don't think we're there yet. Because if you think about it, right now YouTube is imposing its own ethics through its algorithms, and this is extremely dangerous for them as a company: if there's a backlash, if something goes wrong, they are the ones who get the blame. For a company, that backlash can be very, very harsh. And if you work in such an organization and want it to survive and for things to go well, it actually seems like a good idea to outsource the ethics: instead of company-driven ethics, have user-driven ethics. Because if it doesn't work out that well, you can just say: well, we were implementing, hopefully, the best algorithms proposed by academia for this kind of thing, and the ethics were user-driven, so we're not responsible. I think they would still be held responsible. They would definitely still be held responsible, but arguably a lot less. But what do you imagine the user input would be? Something like: I prefer that the YouTube recommender system recommend this kind of video? Yeah, at some point, ideally, I guess that would be more or less the case. Ideally there would be a whole framework. Maybe the votes should also be somewhat anonymized, but the aggregation of the votes should be transparent, so that you better understand all the forces that shaped the decision of the algorithm. Maybe, for instance...
If you could see that, because of the World Health Organization and other entities, this kind of vaccination video was pushed forward more, and you could see that a subset of users tended to watch these videos and tended to push forward the idea that these videos should be recommended more, I think it would give more transparency. But obviously there's also a huge psychological challenge here: trying to get people to think about their volition, their inner preferences, trying not to be overconfident or underconfident. Yeah, one thing we discussed yesterday is about the WHO: nearly no one clicks on the WHO's videos, so they're not being recommended to anyone at all. But if you asked people their preference about whether the WHO should be recommended, we'd expect a large number of people to say yes, this is something that would be good to recommend. So there's a difference between how we behave on the platform, what we click on when we decide what to watch, and what we would rather want the platform to recommend. Yeah, and I definitely think there's a lot of interesting research to be done on the difference between user behaviors and preferences. If you do inverse reinforcement learning, you're going to learn what people do on a daily basis and the preferences that drive them to do it, and that's not really what we want, and maybe not really what people want. Then you need to engage with them, and you can take the comparison-based approach, but it's still going to be very emotional if the examples are real-life cases that people have lived through. And then there's the more abstract approach, and all of this should be better combined, better understood, and there should also be more and more tools for designing all these things. Yeah, it's a huge mess right now, but hopefully we can sort it out. Yeah, in any case, the work they did is quite amazing.
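The WHO example is exactly the gap between revealed and stated preferences: ranking by behavior (clicks) and ranking by what people say should be recommended can invert each other. A toy illustration with entirely made-up numbers and channel names:

```python
# Made-up data: per-channel click-through rate observed on the platform
# versus the fraction of surveyed users saying "this should be recommended".
click_through = {
    "who_update": 0.01,       # almost nobody clicks
    "conspiracy_vlog": 0.20,
    "cat_videos": 0.35,
}
stated_should_recommend = {
    "who_update": 0.80,       # but most say it should be recommended
    "conspiracy_vlog": 0.10,
    "cat_videos": 0.55,
}

# Rank channels under each signal, best first.
behavior_rank = sorted(click_through, key=click_through.get, reverse=True)
stated_rank = sorted(stated_should_recommend,
                     key=stated_should_recommend.get, reverse=True)

print(behavior_rank)  # ['cat_videos', 'conspiracy_vlog', 'who_update']
print(stated_rank)    # ['who_update', 'cat_videos', 'conspiracy_vlog']
```

A recommender trained purely on the first signal (as inverse reinforcement learning from behavior would be) buries exactly the content the second signal puts on top, which is the research gap the hosts are pointing at.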
It shows how to use algorithms to do more good, and I always like that. Yeah, this is a fantastic paper. I strongly, strongly recommend reading it. So, thanks for watching this video. Next time we're going to do something a bit different: we're going to talk about YouTube videos, in particular the Smarter Every Day series about social media manipulation, which is very interesting and includes interviews with people from Facebook and Twitter. We hope to see you next time.