Hello, everyone. In this episode, we discuss the recent Netflix documentary, The Social Dilemma, which looks at various aspects of what's going wrong with social networks. This is a topic to which Lê and I devote a significant part of our book, and which has become very important in recent years. As we always like to say, when we tell people we do research on AI safety, they think about killer robots and so on, and we always tell them: no, no, it's focused not on the killer robot, but on the recommender system that shapes your life, your decisions and your opinions. And it's good that this documentary appeared, because now more and more people are finding that arguments like ours, which two years ago were considered exotic even in academic AI safety circles, are gaining momentum. The documentary invites a few former employees of large tech companies to give testimonies. Unfortunately, there is no current employee, which we can understand, because when you are hired by these companies you have to sign non-disclosure agreements and so on. But also, unfortunately, the set of people who were invited is not really a diverse sample of the opinions we could have heard, even from former employees. In particular, many of them left quite a while ago, and almost none of the people interviewed left recently, so none of them have recent insight into what's going on there. For instance, at least two of them left in 2010, and in the digital age, a decade matters: the change in Facebook between 2010 and today is so huge that some of the insights from back then may not be relevant anymore; maybe you need new insights, et cetera. So maybe Louis can tell us: what's the real problem here, and why did people decide that we needed this documentary? Yeah, so the documentary tries to identify what the problem is.
It's very complicated, because there are a lot of dimensions to the problem. One of the main points they identify is the business model on which large-scale social media and social networks are built. This business model relies heavily on advertisement, on maximizing the time users spend on these platforms, and on getting as much user attention as possible. This led to the famous saying: if you're not paying, then you are the product. The documentary explains this quite well; before watching it, I knew about the idea, I had heard it from many places already, but after watching it I understood it better. The way to think about it is that advertisers pay these social networks, and the goal of this investment in advertisement is to slightly change the behavior of all the users of the social network so that the advertisers' products sell better. So what these social networks have learned to do, as much as possible, is to influence the behavior of their users, so that they can sell these changes of behavior to the advertising companies. Quoting the documentary: gradually and imperceptibly changing how you behave, what you think and who you are, that is the product. This advertising model is also put in parallel with the power these social networks have built, specifically computing power. They own thousands of supercomputers whose sole role is to anticipate and predict our behavior, and to compute what will change us the most, so that they can get as much benefit as possible given their business model. There is this nice story in the documentary where you see one user and a virtual clone of that user, which is like a puppet, and three persons who symbolize the supercomputers computing the recommendations of the social network and deciding what to show on the user's feed, what they will scroll through.
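To make this attention-maximization logic concrete, here is a deliberately oversimplified caricature in code. All the names and numbers are invented for illustration, and real recommender systems are of course vastly more complex; the point is only that the objective being optimized is attention, nothing else.

```python
# Toy caricature of an engagement-maximizing recommender: among candidate
# items, show whichever one a model predicts will keep the user watching
# longest. Accuracy, well-being, and truth never enter the objective.

def recommend(predicted_watch_minutes):
    """Pick the candidate with the highest predicted watch time."""
    return max(predicted_watch_minutes, key=predicted_watch_minutes.get)

# Hypothetical watch-time predictions for one user (invented numbers).
candidates = {
    "calm documentary": 4.0,
    "outrage clip": 11.0,
    "conspiracy video": 9.5,
}

print(recommend(candidates))  # the item that maximizes attention wins
```

Even in this toy version, whatever content is most gripping, regardless of its quality, is what gets shown.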
And it's really striking how these scenes depict the way social networks pick what to show us in order to manipulate us as much as possible. Yeah, from a purely cinematographic viewpoint, and also from a pedagogical viewpoint, I think this is the best idea of the documentary, because a lot of the rest is very classic, I mean, classic among people who think about these technologies. The idea of replacing this very abstract recommendation algorithm with three people who reason out loud about what the algorithm is doing really highlights why people see, for instance, what they see on their news feed. And it contrasts with the fact that, well, one of the problems with these systems is that people don't realize that a lot of what they're exposed to is the result of an algorithm. When we talk about AI, artificial intelligence, as Mehdi said in the introduction, a lot of people think about robots and things like that. But the algorithms, the AIs that are the most impactful today, the ones that influence billions of people every day, are the ones in these phones that interact with us every day. And the danger of AI is not this very spectacular thing that we will recognize once we see it. It's more this really subtle thing that has invaded our lives, and we don't even realize that we are constantly interacting with it. Yes, exactly. They say in the documentary that AI is already controlling the world, and it's quite easy to agree with that, given the scale of the decisions that algorithms are taking. So maybe, just to repeat an argument we have been making for the past several years, in the book and in some of the videos on this channel: most people don't really realize the scale. Can you give the YouTube numbers again?
Yeah, so it's about 1 billion hours of watch time per day, so half an hour a day for each of 2 billion users, a quarter of the people on Earth. And 70% of these videos, according to the chief product officer of YouTube, or the former chief product officer, were recommended by the algorithm. Yeah. So most of the videos are recommended by the algorithm. And it has to be an algorithm that does a lot of the work, because the scale is huge: billions of recommendations per day. There are 500 hours of new video per minute on YouTube, and they have to be screened for copyright, for pedophilia, and so on. So let me bring up an argument that we hear less, maybe not even in the documentary. Yeah. It's something we say in the book: the technical challenge is beyond imagination, so good intentions are not enough. Say you have a good intention: I would like to filter out, let's say, child pornography. Doing that in an automated manner, when you have an influx of 500 hours of video per minute, that is 30,000 hours of video per hour, is extremely hard. If you know a bit about statistics and fraud detection, most of the well-performing fraud detection algorithms are very costly and take time; they scale badly with the input, quadratically for instance, so running them on videos is not practical. So there is a technical challenge that is often overshadowed by the discussion of intentions. Of course there is an intentional challenge: you need to have good intentions, or otherwise you just screw up the world. But even if you have good intentions, that's not enough; you can't implement good intentions just because you have them. Yeah. One of the most important cruxes of disagreement that we may have with the documentary is that it insists a lot on the incentives problem.
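The orders of magnitude behind this scale argument can be checked with simple arithmetic, using YouTube's publicly stated figures (about 500 hours of video uploaded per minute, about 1 billion hours watched daily, about 2 billion users):

```python
# Scale figures for YouTube, as quoted in the discussion.
upload_hours_per_minute = 500            # new video uploaded
watch_hours_per_day = 1_000_000_000      # total daily watch time
users = 2_000_000_000                    # YouTube users

upload_hours_per_hour = upload_hours_per_minute * 60   # content to screen each hour
upload_hours_per_day = upload_hours_per_hour * 24      # content to screen each day
watch_per_user_hours = watch_hours_per_day / users     # average viewing per user

print(upload_hours_per_hour)   # 30,000 hours of new video arrive every hour
print(upload_hours_per_day)    # 720,000 hours every day
print(watch_per_user_hours)    # half an hour of watch time per user per day
```

In other words, a moderation pipeline would have to screen roughly 30,000 hours of new footage every single hour, which is why slow, expensive detection algorithms simply cannot keep up, whatever the intentions behind them.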
And if you think about it, and we can discuss this at greater length, it's actually not very specific to the advertisement business model. Think of Netflix, or of China, for instance: they don't have these ad-selling incentives, but they still have the same problems. Yeah, Netflix: we mention in the book that the CEO of Netflix said that their main competitor is sleep. So they are aware that they are really competing with sleep: would you go to sleep, or would you watch another episode? And they're not even ashamed of it, at least not sufficiently ashamed to prevent the CEO from saying things like that publicly. Not only that. Not only that. For example, on Twitter you see a lot of the Netflix community managers, who I guess are incentivized to do this. Some teenager would tweet: oh, tomorrow I have school at eight, should I sleep or watch Netflix? And Netflix would quote the tweet with the official Netflix account and make a funny joke about the fact that the teenager is really addicted, and the teenager, she or he, would end up watching another episode. Rather than Netflix quote-tweeting to say: we are not responsible for your sleep, you really have to go to bed. It's funny, but since the documentary was produced by Netflix, I found it odd, and I would have loved, that there wasn't at least a part of self-critique, some self-reflection on the addiction problem that Netflix also faces. Yeah. Yeah. Just on the claim that the incentive is primary: I think the documentary is a bit narrow in saying that the prime incentive is advertisement. These systems are extremely influential, and whenever something is extremely influential, there are going to be a lot of incentives around it. So you have this incentive problem, but it's only part of the problem.
If you want to solve the problem, as Mehdi said, there is a big technical challenge, and in addition to fixing the incentives problem, we also need a massive investment into better understanding the problem and providing better solutions through research, because the moderation of all the information that goes through the internet is really critical. And if we don't have the top people in the world, maybe thousands or even hundreds of thousands of people, working on this issue, then, yeah, good intentions will not be enough. So I would have wished the documentary conveyed this message a lot more: if you are interested in this problem, if you are a philosopher, a sociologist, a computer scientist definitely, or even a mathematician, try to study the problem and find out what contribution you could make, because we need a lot more contributions to solve these problems. Another aspect of the problem that is mentioned in the documentary is misinformation. They give some numbers, for example the fact that one study revealed that fake news spreads six times faster than true news, and that using fake news you can make more money on social networks, simply because of the higher engagement that comes from it. So the way social networks are built today introduces a bias towards an increased amount of fake news, which is something we don't want to end up with. People might quickly think that it doesn't matter so much, but unfortunately, the messages people are shown influence their behavior in real life. There are plenty of examples, like people going out and destroying 5G towers because of misinformation, and a lot of other things. The WHO mentioned that vaccine hesitancy due to misinformation on social networks is a critical health issue.
So I think it's interesting to talk about the disagreements, because we mostly agree with a lot of what the documentary says. One thing you can notice, again, is that the problem of fake news is not only about these algorithms maximizing for advertisement or for watch time. Take emails, or WhatsApp: these are systems with much less algorithmic recommendation or moderation, and you still have a lot of fake news circulating, arguably a lot faster. It's harder to get data about these, of course, but the problem of misinformation is not only the algorithm. It is partly the algorithm, but not only, and the problem of removing, or at least recommending less, misinformation, and of promoting quality information, is a very, very difficult problem. And we arguably need to solve it at some point, because if you want to address many of the big challenges of the years to come, the current situation for instance, climate change, the next pandemic (because COVID is arguably mild in a sense: it's very, very bad, but it's not that deadly, and you could imagine something a lot deadlier in the years to come), we need to prepare for all of this, and this requires better, higher-quality information, and people who are more prepared. Yeah, that's one thing Tristan Harris raises in the documentary: he says that if we can't agree on what's true, then we are toast, citing most of the challenges you just mentioned, the next pandemic, climate change, and other things that could come up in the future. Yeah, I think I've heard Tristan Harris say that his main motivation for trying to make social media better was climate change, and I think that's a connection that's not made enough.
If you want to protect the environment, I think one of the best things you can do right now, one of the most effective ways to combat climate change, is to make sure that better-quality information about climate change circulates on social media, because it's a very complicated topic. There is the one question, is there climate change, but that's only part of the problem; you can ask many more questions, for instance: how important is it to reduce meat consumption? What is the impact of nuclear energy? And on all of these topics, there is quite a lot of information that is not promoted enough, I'd argue. Yeah, so the next problem they raise in the documentary is political polarization and manipulation. I think manipulation is also something that is not specific to the advertisement model; it's a problem that arises from something else. These social networks have huge influence, simply because they show content to billions of users every day, and for that reason alone, some actors will try to manipulate them, because they want control over what information spreads there. This manipulation often has a political orientation. Yeah, as one example of just how bad mass manipulation and political polarization can get, there is the example of Myanmar. In the north of Myanmar there is an ethnic community called the Rohingya, and they have been persecuted based on misinformation, a sort of racism, and this led to a genocide: tens of thousands of people died, something like that, and hundreds of thousands of people became refugees, something like 700,000 people, or even a million. And this largely originated from misinformation circulating on Facebook. Over 900,000 refugees have fled their homes.
Well, a million, a million refugees, depending on the numbers you find; the United Nations High Commissioner for Refugees recognizes at least the order of one million. Yeah, and it's very important to say here that Facebook acknowledged that it failed to act, most importantly after pressure from the UN. And not only did it say it failed to act; it recognized that it had also actively dismissed calls for action from human rights groups. So here we clearly have an incentive problem, not only a technical problem. Yeah, but what's interesting is that the problem has been recognized, acknowledged at least, by Facebook, which is quite a big step forward. So hopefully, if more of these messages get out there, it's going to be easier to put pressure on Facebook, since even Facebook acknowledged that this is a problem, and easier to incentivize Facebook and people within Facebook to take this problem seriously. Even though it's hard: lately there has been Sophie Zhang, a former employee of Facebook, whose internal memo was leaked, though it should not have been, and essentially the bottom line is that she strongly believes, and she was on the inside, that the problem is far too dismissed within Facebook so far. And she said this especially for smaller countries, where, if there is some disaster, it is less likely to make the front pages of newspapers. This is again a bit of an incentive-structure problem, because within Facebook there is a public relations team trying to avoid public relations disasters, and if it's a small country they are much less incentivized than if it's a rich country. It's not even a matter of the size of the country; it's more about how likely it is to be talked about in the New York Times, for instance.
Well, all of these problems are probably going to be exacerbated during the coming weeks; we're talking about weeks now for the US. Oh yeah, as we're recording this, we're about 40 days away from the US election. Yeah, and the day before we recorded this (I hope we'll be fast on editing and uploading the video), as reported in Reuters, Vladimir Putin invited the US to find a deal, some form of peace treaty on digital misinformation. He said, in effect: I invite the US to sign a treaty where each of us commits to stop digital misinformation and electoral interference. You could see this at least as an admission that they do it, or an admission that they know they can do it. Yeah, an invitation to just sign a deal, and he compared this deal to the 1972 agreement between the USSR, the Soviet Union, and the United States of America. So there is a rise in awareness, not only of the scale of the problem, but of the fact that the problem could become out of control. Because if I'm a big state and I can use this tool, why would I sign a deal with another big state to stop using it?
Arguably because, from a game-theoretic point of view, if I'm Russia it's better for me to sign this deal with the US, and vice versa. Because if I don't sign it, I would not only face the United States, or vice versa Russia; I would face an intractable phenomenon, where wherever there is a group of five people with enough money, they could start attacking me. Last year, Facebook and Twitter reported a misinformation campaign, originating from Saudi Arabia and the United Arab Emirates and also Egypt, and targeting an Arab audience, that successfully reached 14 million people, quite effectively, in the sense that, within these 14 million people, I think they only counted people who actively follow the Facebook page and engage with the content. For example, if I like the page and Le sees the content because I liked the page, I don't think Le was counted. So it's real interaction, I think, which is a big scale, and it would be even bigger counting indirect interaction. And the cost of the campaign was only $100,000. $100,000, and this is an example we have in our book, is less than the cost of an electoral campaign in a small city in Morocco or Algeria, in a poor country.
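A quick back-of-the-envelope computation on these campaign figures (the $100,000 budget and the 14 million people reached are the numbers quoted above; the per-person cost is simply their ratio):

```python
# Figures from the reported Saudi/UAE/Egypt campaign discussed above.
campaign_cost_usd = 100_000
people_reached = 14_000_000

cost_per_person = campaign_cost_usd / people_reached
print(f"${cost_per_person:.4f} per person reached")  # well under one cent

# Scaling the same cost-efficiency by 10, as suggested in the discussion:
print(people_reached * 10)  # 140 million people for about $1 million
```

At under a cent per person reached, this kind of influence operation is indeed within the budget of a small, motivated group, not only of states.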
So with this budget you can reach 14 million people, and I guess if you multiply it by 10 you can reach an even larger scale. And this budget is within the reach of, let's say, the combined network of a median French citizen. So imagine: whenever five motivated people can put $100,000 together to attack a state, they can do it. So it will scale and it will become out of control, and there is therefore an incentive for big states to sign this kind of agreement, and I think it would be a good thing for the world that they sign it. But then enforcing it is also a challenge, because you have to monitor what is happening; still, it would be a good start. I just want to add that this kind of cyber warfare is one of the main concerns we should have for a lot of institutions. I think this can change a lot of the way we think about these problems, and especially, in the end, what I care most about: much more investment in solutions. And then, just to mention it, in the book we talk about the democratization of cyber warfare; we just gave an example of how it's within the reach of small groups of individuals to start attacking states, or at least attacking a significant portion of a state's public opinion. So it is really important that we start looking at solutions and also anticipating how this will evolve. Maybe I can say something about that. Yeah, one other disagreement I would say we may have with at least some interviewees in the documentary is that the documentary almost presented it as a consensus that we should regulate these platforms, and that regulation is the only way to go. I think it's dangerous to believe that it is the only way to go. For sure, we need to reason about how we can better control these systems.
So regulation is a desirable thing, but efficient regulation will probably have to go at least through the government, and probably through Congress and so on, and it's going to involve a lot of people. And whenever a decision is made by a lot of people, it can go wrong in many ways, especially if it gets politicized, and these days a lot of topics are very politicized, everywhere in the world but particularly in the US. And even if it doesn't go wrong, it's going to take years; these things do not happen just like that. We have an example of such a law in the European Union, the GDPR, the General Data Protection Regulation. It was written up in 2012, and it took six years before it was enforced in the European Union. Six years is actually quite fast for law in general, and it has to take a long time, both for everyone to agree and to make sure the law really works, that it is meaningful and applies in practice. But six years, in terms of artificial intelligence, algorithms and social networks, is the lifetime of a company, of many companies. So it may be way too long, and the way these social media and their algorithms work in six years may actually be very different from the way they work now. Just imagine, back in 2014, six years ago, trying to predict the algorithms we have today: it was extremely hard. And making laws that fix the way algorithms are today is extremely hard. So I think there are caveats that need to be taken into account when thinking about regulation. All in all, it suggests that regulation is not going to be sufficient, and there is even a risk that it fails, or becomes counterproductive because some politicians pushed for something that was actually not very useful.
I think it's dangerous to believe that we should put all our energy into regulation, because one consequence I have in mind is that top scientists, top engineers and top managers in these companies will say: well, we're just going to wait for the regulation to come. And if you have to wait five years, there are huge risks within this timeframe. Instead, I think we should be proactive as of today. Anyone who can should act, and we could encourage managers, but also employees, software developers and researchers, within these companies and outside of them, to just tackle the problem themselves. Taking on the whole problem is very hard, but they can promote ideas that go in this direction, work on potential partial solutions, better understand the challenges, or ask journalists to better inquire into what is going on in these companies and what the impacts on users are. There is a lot of work to be done that is not regulation, and I wish the documentary had insisted on both. One more problem that is mentioned in the documentary is mental health, and how especially young users are affected by these platforms. They show some statistics that don't directly prove that these platforms are causing suicides, but we can see a threefold increase in suicides among young girls between 2010 and now. And this is quite scary, because one thing that obviously changed in the lives of young girls between 2010 and now is the increase, the exponential increase, of social media. Other statistics that are not very happy to look at: young people get fewer driving licenses and also have fewer romantic relationships than they used to. And we can understand this as a side effect of spending so much time and attention on social networks that are extremely addictive.
In the documentary, they show the example of a young girl who spends her time taking photos of herself and posting them on social media so that she can receive likes from friends, from people who follow her and tell her she's beautiful. And the few times she receives negative comments, we see that it extremely strongly affects her emotions, and we can understand that this is the case for a lot of young people using social media. The same trend is happening with TikTok, which is said to be able to make you extremely famous within a split second, without you even realizing that you went from 10 people following you to 100,000 people liking or commenting on your posts and photos. There is a lot of research about this, but maybe there needs to be even more, because clearly social media has changed the way we interact with a lot of different people, and there is a lot of very concerning data, especially about depression and suicide. There is also, I think, a concern to be had about, for instance, the increase of loneliness, which can have all sorts of other impacts as well, maybe an increase in aggressivity; there is a great video by Kurzgesagt about this. And this can create all other sorts of problems. So I think mental health is extremely important in itself, but it also has an impact on other aspects, like curiosity, which is extremely negatively impacted by all of this, and this hinders our ability to discuss complicated topics such as climate change. Yeah, so again, I think this is a very concerning aspect of social media. I just wanted to mention something here, not exactly about mental health, but related to this part on the psychological aspects of social media and the previous part on polarization.
So I've been running amateurish research for more than a month now on a network of fake profiles, clearly fake profiles, which looks state-run, or at least state-funded; it has the scale of something that is either run by a state or paid for by a state. It's an emerging phenomenon; these are still just observations, and I would really like researchers who have the data, if they ever watch this video, to test it. It goes beyond the classic pattern of fake profiles harassing people, tweeting at people, or promoting an opinion: that's the classic way to do bots and trolling. I'm noticing a new trend where you create an army of trolls not to harass people or tweet at them, but to give them likes for the things you want them to say more, and not for the things you don't want them to say. So instead of creating fake profiles, imagine: let's say Louis and Le are on Twitter and they are known, and I am a fake profile. If I start promoting my ideology, nobody will believe me, because people are more and more aware of trolls now; they will look at my account creation date, September 2020, conclude it's probably a fake profile, and dismiss me. So instead of tweeting as a fake profile, I watch Louis and Le, and if Louis starts saying something that I want promoted, I bring my army of trolls and send Louis 30 likes or 20 retweets. And when Louis is not talking about my ideology, I don't give Louis any likes or retweets. And what I'm observing, over a month, is evidence that some people are starting to be nudged by these armies of trolls. This is more efficient, because if Twitter detects my army of trolls, I lose the army of trolls, but I don't lose Louis, who is a real human. I just bring another army the next month and start nudging Louis again. So Louis becomes my tool.
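This selective-likes nudging mechanism can be caricatured as a tiny simulation. Everything here is invented for illustration (the two topics, the 20-like boost, the learning rule); it is only a sketch, under the assumption that the targeted user behaves like a naive like-driven poster, showing how rewarding one topic and not the other can shift what they post about.

```python
import random

random.seed(0)

# Toy model: a user posts on one of two topics and, like a crude
# reinforcement learner, posts more about whatever earned more likes.
# The troll army adds likes only to posts on its preferred topic.
avg_likes = {"boosted topic": 1.0, "other topic": 1.0}  # user's running estimates
counts = {"boosted topic": 0, "other topic": 0}
TROLL_BOOST = 20  # extra likes the trolls send for "their" topic

for _ in range(200):
    # choose a topic with probability proportional to its average likes
    total = avg_likes["boosted topic"] + avg_likes["other topic"]
    p_boosted = avg_likes["boosted topic"] / total
    topic = "boosted topic" if random.random() < p_boosted else "other topic"

    likes = random.randint(0, 3)      # organic likes, identical for both topics
    if topic == "boosted topic":
        likes += TROLL_BOOST          # selective reward from the troll army

    # exponential moving average of likes received per topic
    avg_likes[topic] = 0.9 * avg_likes[topic] + 0.1 * likes
    counts[topic] += 1

# The user ends up posting mostly about the topic the trolls reward,
# even though the organic audience treats both topics identically.
print(counts)
```

The asymmetry in the final counts comes entirely from the selective likes: the trolls never post anything themselves, which is exactly why this strategy survives the deletion of the fake accounts.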
And I really, really, well, it's not my field, this is clearly not my field, I'm a theoretician, I do mathematics, but I would really like to see more research into this new trend of using trolls to influence people by sending them likes and retweets. Yeah, and you can imagine, if there are hundreds or thousands of fake accounts doing this, you have people who will receive hundreds or thousands of likes, which, for most users, for me at least, is a huge amount of likes. And even if you try not to care about it (most people care about it by default, I guess), it's very hard to stay neutral about it. You always receive this dopamine, as discussed in the documentary and in the research, and this is like reinforcement learning: it really changes people's behavior. And it's not only that a hundred likes is a lot for an average user; they don't even do it in an obvious manner, they send 10, 5, 15, 20, and when you look at who liked, you find the same troll army. I've been following a group for more than a month, as I said, and now it's recurrent: I see a suspicious tweet from an unknown person, a real person, that suddenly got like 20 retweets, and I go through the list and I recognize all the trolls that are already on my list of trolls. And with 20 likes or 20 retweets, you can not only nudge people, you can also make people famous. If an account starts getting a hundred likes or retweets, people start thinking it is relevant, because of exposure bias: you are exposed to this account, it has lots of likes and retweets, and then the account of Louis, this hypothetical person I'm trying to influence, starts getting more real people following it. So not only can I nudge Louis, I can make Louis famous and then use Louis even more. Yeah, I would really like this, because this
is something clearly not researched. There is a lot of research on the classic use of fake accounts, so please get in touch, via my email on my website for example, and we can discuss: I have a data set that you could maybe use to start with and then grow, and maybe you can just show that my hypothesis is wrong. I have one month of daily observation, about 10 hours per day spent on this, and enough evidence to believe it. So it's still preliminary research, but it sounds like something so effective that I would be surprised if no disinformation campaign has used it yet. So finally, in the documentary, they also discuss solutions to the problem. We mentioned regulations previously, and why they are very slow processes that are likely not to be useful enough in due time; also, the goal of such regulations would be to change the incentives of these platforms, but we discussed that the incentives and the money from advertisement are not the only problem, and there are other problems, like manipulation by other actors, for example. Yep. They discuss the possibility of deleting your social media accounts, and that sounds okay but might also not be desirable, simply because of all the positive aspects and all the good things that social media brings. Think of it in these terms: if we want to make social media better and use them to fight the challenges of humanity, like climate change or the next pandemic, then they are extremely good at propagating the right kind of useful information, the kind people need to hear and understand in order to change their behavior and act properly on the challenges we are concerned about. And there are other things that are amazing; they say in the documentary, for example, that being able to open Uber and, with just one click, have a taxi coming to your home within less than a minute to take you where you want to go is an extremely nice technology
to have access to yeah I think there's a lot of benefits of quality people who care about these things and to do a lot of good to social media because you can have you can influence a lot of people and promote better quality information so I think it's important for many people to stay on social media but there's definitely a risk of mental health issues and all the things we've discussed and so I think it's better to promote a healthier use of social media and one thing well I'm sure like at least you just as I mentioned mentioned it at some point but it was cut from the documentary but one thing I would have been would have been nice to see is like there are many tricks you can you can implement to have a healthier use of the social media one of them is like adding apps that for instance block social media for a given period of time so I personally use these for instance I think it's very useful and effective removing notifications I think many notifications are like extremely harmful like to have all of the time so and just like caring more like having a yeah having a better relation essentially with your phone try to avoid the addiction while using exploiting for all of the the good parts we can do using all of these I think would have been a nicer message to to send yeah and other things that that I would really like to see a more in the world is discussions around the ethics of a recommender system and agreeing that what recommender system take to show to to billions of users is an ethical question even though every decision small decision taken separately does not have a large impact but because it's consents billion of decisions and billions of hours of human attention everything together this makes it for extreme extreme problem-making challenge okay like beyond what you just said which was covered in the documentary and also we find like they did like there's something maybe original we should then can promote and we are actually the one within which is 
when we say "use social networks for good", people mention the classical examples, but if we think in the context of the pandemic, social networks were not used as much as they could have been as a cure, and I mean as a cure in the sense of a medical intervention. When we hear talks about AI and health, they are very overhyped, because what people think of when you say "AI and health" are devices plugged into your blood, analyzing your metabolism or your sweat, connected to whatever, and this is arguably not yet delivered; it is under-delivering compared to the promises and the amount of talks, hype, and conferences about it. But we don't mention social networks enough when we say "AI and health", even though they are just recommender systems showing you content. In the 20th century, the World Health Organization was very efficient at this. I grew up in Morocco, and in my childhood in the 90s, a significant part of the TV advertisements were public health advertisements: about washing hands, about vaccination of kids, about hygiene and things like that, or about the dangers of antibiotics. In France you have this campaign about eating five fruits and vegetables per day, shown after every advertisement for a snack or a food product. We can imagine a world where social networks would be encouraged, or even obliged, to give a fraction, say five percent, of the ads they show as free ads for health ministries. For example, in early COVID, the videos produced by health ministries around the world were not as popular on YouTube as the regular YouTube activity, the regular entertainment. And YouTube acted rather quickly: you have this whole COVID banner, a disclaimer, etc. But things could be done better; you could promote such content more proactively. But then, again, come the caveats and the challenge. It's a challenge because if you start promoting, let's say, every state-run video, if you detect that a channel belongs to a state and start promoting all its content, well, some states are promoting fake cures. So the task for YouTube is not easy either. I don't know what's happening inside; I have zero insight into how they did it, but I can imagine they might have thought about it and then asked: what if a state is run by a dictator who starts promoting unhealthy behavior or unhealthy cures? If you put in an algorithmic promotion rule, telling the algorithm "if you detect that this video is from a state, then it's probably safer to promote it", you may end up promoting bad content. So it's not easy. Yeah, again, it's a very difficult problem, and we need a lot more brilliant people working on it, trying to figure out what would be good recommendation algorithms that are robust and robustly beneficial. But there's a growing line of research, both within these companies and in academia, that cares more and more about all of these issues. There has been a recent paper called "ethical challenges of recommender systems", for instance, that really highlights this problem. And yeah, I think this should be given a lot more importance than it is currently given, by journalists and these companies, but also by scientists themselves. I think it's still often frowned upon within science to be talking about these issues, and I think it's time that more people tackle them. I don't want to make the episode longer than it is. Yeah, I think it's already quite long; there are
many things to add. I think we will have several other episodes that talk about this line of research, because we are personally involved in it: we wrote a book on it, which came out in 2019, and we are writing a paper on it now, on how to use social networks as a medical intervention. So we'll have another episode to talk about this, like the paper that Lê mentioned; we'll just put a link below. Yeah, we will have several other episodes; in the upcoming episodes we'll touch on the topic of social networks and how we could actually make them work for good, not just point out what goes wrong. There are easy solutions like "delete your accounts, delete this, delete that". I published a thread when the Cambridge Analytica scandal broke in 2018; there was a campaign to delete Facebook accounts then, and actually I don't like this. This is my personal opinion: I'm not comfortable with what Facebook does today, but I still believe leaving is not the best thing to do, because there is a billion-scale number of people using Facebook, and if a tiny fraction of us leaves, it just decreases the amount of pressure exerted by people who have enough information and educational background to put efficient pressure on Facebook. I would rather pressure Facebook than leave Facebook, unless it disappears, which is another question. The thing is, it's there and people are using it. For many groups, especially in developing countries, it's their main way to organise, socialise, and communicate; some people equate it with the internet itself, which is strange, but it's like that, and they will still be influenced by Facebook. So it's an easy fix: personally, it may be the best thing for your own mental health to leave Facebook, but if you want to have a positive impact on the people remaining on Facebook, maybe it's not the best thing to do. But again, if a threshold of people leave Facebook, clearly that's a good signal and Facebook will be pushed to react. So, beyond the easy solutions, "delete this, delete that", we can clearly go beyond the delete-or-not-delete question. And that's something the documentary was not good at: talking about all the good things that have happened since 2018, all the massive efforts, the teams that have been growing in almost all of these companies. For example, the Twitter safety team: they are overwhelmed with work. I personally struggle to get them to react when I find a state-run operation, a network of fake profiles. And I just recently learned that a major group working on misinformation in Europe, people more famous and more professional than me, whose actual job this is, even they struggle to get this Twitter team to react quickly. Not just because maybe they don't care about some countries, which is another problem, but also because the team is overwhelmed with what's happening in the US. Someone told me: "I've been trying to reach them since April; they're overwhelmed by what's happening in the US." So my opinion is that all of these teams, in all of these companies, are still under-dimensioned; they could be much larger than what the companies think they should be. You could multiply the number of people working in these teams by five, ten, or even twenty, and you would still need more people, given the scale of the challenge. The CEO of Twitter recognized this the very week we were recording this episode: he said that the biggest challenge facing Twitter is misinformation. But again, as much as they are not responsive and overwhelmed, they are also very transparent, and one good thing to say about
the Twitter safety team, compared to for example the Facebook safety team, is that they are rather more transparent. And since they are more transparent, researchers get more input from them. Also, when they react, even if a misinformation campaign is small-scale on Twitter but large-scale on Facebook, you can go through the public discussion on Twitter and get insights about this misinformation while it goes under the radar on Facebook. So one approach I discussed and agreed on with people working on this is the following: you find the misinformation campaign on Twitter and show that it is there, and if Twitter reacts, then Facebook is under social pressure to follow, even though researchers could not spot the campaign on Facebook, because you spotted the Twitter part of the campaign and you are sure there is a Facebook part of it. If Twitter reacts, and because they have a track record of reacting and being transparent, Facebook is under social pressure to react, and others, like YouTube, would react too. So this was just a shout-out to the Twitter safety team: even though they are not very responsive, when they act, it pushes others to act, and vice versa. Yeah, and you mentioned a lot the importance of social pressure. I think this was a bit neglected in the documentary; it was mentioned, but it's still neglected: we can change the incentives by putting pressure. But something else not mentioned in the documentary is that these tools are there, and we talked a lot about their negative side, and we tend to underestimate the side effects of boycotts. The tools are there and we should use them, just as the ministries of health used television to promote vaccines, health, and hygiene in the 1970s and 80s; we should grab the tools and use them for good, not just abandon them and boycott them. Yeah, and interestingly, these companies, and especially YouTube, have given more importance, for instance, to the World Health Organization during the pandemic, and I think it's nice to acknowledge this. YouTube listed vaccine hesitancy as something they would be actively acting on, and other companies also released similar statements. So there are good things coming from these companies, maybe not good enough, well, I don't think good enough, but that is partly a problem of incentives. My argument is that it's also a problem of human resources: my opinion is that these teams, the teams working on misinformation, could be larger than they are now; there is enough need and enough resources to have more people working on these things. So I hope they will grow these safety teams, because it's more needed than ever. With this we wrap up, and we thank you for your attention if you stayed all the way until here. See you, bye. See you, bye.