When it came out on Netflix in September, a bunch of my friends told me to watch The Social Dilemma. For those who haven't heard of it, it's a documentary about the inner workings of Facebook, Twitter, and the other social media networks that explores the question of whether these companies are actually doing more harm than good. Even though this is a subject I've spent a lot of time thinking about and discussing, and in spite of the fact that most of my adult life has been spent producing and distributing content on these platforms, I really wasn't in any rush to check it out. Part of my apathy was just that I was busy with other projects, but to be honest, I had a suspicion that I was going to be disappointed with the film. But everybody kept talking about it, and they all seemed to be terrified. Some people said they were going to delete their Facebook accounts; others started demanding major policy changes. I was told I had to see it, so I did. And it was awful. I know, a lot of you probably saw it and thought it was good. Maybe because of it, you're really concerned. And before we get into it, you should know that I do think there are a ton of things to be worried about with the way most of these companies operate. It's just that instead of focusing its attention on genuine problems, The Social Dilemma is actually a manipulative infomercial that presents dubious arguments, deceptive information, and terrible conclusions as melodramatically as possible. So before you get too worked up by this documentary, stop what you're doing, like this video, hit that subscribe button, and arm yourself with better ideas by watching part one of this two-part episode of Out of Frame. At first blush, it seems like The Social Dilemma is an earnest documentary about the dangers of social media. And from my perspective, it makes three main arguments, all of which are summed up in soundbites. 
The most prominent of which, as stated by the film's central protagonist Tristan Harris, is this. The classic saying is, if you're not paying for the product, then you are the product. This idea is reinforced by the fact that the primary villain of the story is capitalism, specifically the commercial incentives of social media companies. Our attention is the product being sold to advertisers. This is what the people I know who got upset watching The Social Dilemma seem to be most concerned about. The second major theme is that the selective information we see online reshapes what we think is true about the world. Or as venture capitalist Roger McNamee said, the way to think about it is it's 2.7 billion Truman Shows. Each person has their own reality with their own facts. Over time, you have the false sense that everyone agrees with you, because everyone in your newsfeed sounds just like you. This is one of my biggest concerns, but the documentary ultimately does a terrible job of really grappling with the problem. And viewers end up with what I think is a severe misunderstanding of the whole issue. Lastly, the film has one really important major theme that runs through its parallel narrative, simply expressed by psychologist Anna Lembke. Social media is a drug. This is one of those things that almost everybody implicitly believes. And there is some truth to the idea that getting likes on social media creates dopamine reactions that can be addictive. But alas, The Social Dilemma both overstates its claims and glosses over the complexity of this subject as well. Fair warning, I hated this film. There's no way I'm going to be able to get through everything that's wrong with it, even in two videos. But I think it's at least worth talking about these three themes in as much depth as I can. Ready? Let's go. The first claim, if you're not paying, you're the product, is a fine tagline. But what exactly does it mean? 
For the answer, we'll immediately turn to one of the film's most ridiculous speakers, Shoshana Zuboff, who coined the term surveillance capitalism. According to her, our private information has become a commodity used by evil corporations to control our lives, influence elections, and amass a terrifying level of power. This is a new kind of marketplace now. And it's a marketplace that trades exclusively in human futures, just like there are markets that trade in pork belly futures or oil futures. We now have markets that trade in human futures at scale. Wait, human futures trading? What? First of all, futures trading is not a bad thing. Secondly, it makes no sense as a metaphor for social media advertising. For those who don't know what it is, which I'm assuming is most people who felt anxiety hearing the term, futures are essentially contracts in which a buyer and a seller agree to trade an asset, like a stock or a commodity, at some point in the future at a price they commit to today. They do this because nobody actually knows what different companies are going to be worth months or years from now, and different people disagree on what the future value is likely to be. Sellers think they're locking in a higher price today than they'd get if they waited and sold later. Buyers think they're going to get a discount on stocks that will be worth more over time. Not only is there literally nothing wrong with this, futures trading has some broader economic benefits, like stabilizing prices. But the question I had to stop the movie to shout at the screen and ask is: what does any of this have to do with social media? No one is trading futures at all in this context, let alone human futures, which is actually gibberish. Digital advertising doesn't control what people do, and advertisers don't even pay higher or lower prices based on what they expect to happen months or years down the road. They bid against other potential advertisers for the ability to get their content in front of people today. 
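To make the futures mechanics described above concrete, here's a minimal sketch of the arithmetic. The agreed price and the possible outcomes are my own illustrative numbers, not figures from the film or any real market:

```python
# Illustrative futures-contract arithmetic (hypothetical numbers).
# A buyer and a seller agree today to trade one share at $50 in three months.
AGREED_PRICE = 50.0

def settle(spot_price_at_expiry: float) -> dict:
    """Return each side's gain or loss versus just trading at expiry's spot price."""
    buyer_gain = spot_price_at_expiry - AGREED_PRICE   # buyer wins if the price rose
    seller_gain = AGREED_PRICE - spot_price_at_expiry  # seller wins if the price fell
    return {"buyer": buyer_gain, "seller": seller_gain}

# If the stock ends up at $60, the buyer locked in a $10 discount;
# if it ends up at $45, the seller locked in a $5 premium.
print(settle(60.0))  # {'buyer': 10.0, 'seller': -10.0}
print(settle(45.0))  # {'buyer': -5.0, 'seller': 5.0}
```

The point of the sketch is that both sides commit to a price today precisely because they disagree about the future, which is nothing like an advertiser bidding for an impression right now.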
This line is only about 15 minutes into the film, and it should raise a big red flag for viewers. For all her supposed qualifications, it was immediately apparent to me that Zuboff wasn't a credible speaker. This is a former Harvard Business School professor who doesn't seem to understand basic financial concepts, and hers isn't the only credibility you should question here. A bit later in the film, Tristan Harris, whose sole job at this point seems to be to promote his worldview everywhere he can, assures us that his anti-technology crusade is unique in modern history. No one got upset when bicycles showed up. Right? Like if everyone's starting to go around on bicycles, no one said, oh my god, we've just ruined society. But actually, yeah, that's exactly what people said. When bicycles were invented, a lot of people did make ominous claims about how they were destroying society, mostly because they gave women independent mobility and allowed them to travel without male company. This may not seem like a huge deal to the overall point of the film, but I think in some ways it's a decent metaphor for the fear of social media as a concept. Like the bicycle, this technology has empowered individuals to connect and communicate with each other without a gatekeeper telling them what they're allowed to do. The people who made The Social Dilemma don't seem to like that. In reality, all these sound bites are meant to terrify you, not inform you. In fact, everything about the way the film is produced is designed to make you afraid. From its creepy, horror-like score, to its montage sequences of sensational news stories, to the fact that there are literally no contradictory voices presented anywhere on screen, to the ridiculous scripted family drama that presents social media algorithms as a trio of sentient clones maliciously manipulating the dials of an individual's user experience. 
For a film that talks a lot about psychological manipulation, it might be the most crudely manipulative documentary I've ever watched. And I've seen almost everything Michael Moore's ever produced. Honestly, half of it just felt like watching an after-school special from the 90s. "We don't need our phones to eat dinner." I get it. What you're saying is just not that big a deal. It's not. But moving past the style and into the substance of the documentary, so much of this film is centered around the idea that advertising and the commercial function of social media are evil in and of themselves. Because we don't pay for the products that we use, advertisers pay for the products that we use. Advertisers are the customers; we're the thing being sold. I sincerely doubt this is news to anyone, but the film devotes a ton of attention to the point. There are all these services on the internet that we think of as free, but they're not free. They're paid for by advertisers. Why do advertisers pay those companies? They pay in exchange for showing their ads to us. And of course, that's bad. It's the gradual, slight, imperceptible change in your own behavior and perception that is the product. So let me get this straight. Advertisers are trying to change our behavior? Who knew? Apart from everybody, I mean. But it gets worse. Apparently these ads are supernaturally effective. Many people call this surveillance capitalism. Capitalism profiting off of the infinite tracking of everywhere everyone goes by large technology companies whose business model is to make sure that advertisers are as successful as possible. The argument here is that since social media companies have so much data about what we do, what we like, and who we are, they're able to create nearly perfect prediction models that constantly serve organic content and advertising to people who have no control over how they react. Returning to our resident hyperbolist, Shoshana Zuboff. This is what every business has always dreamt of. 
To have a guarantee that if it places an ad, it will be successful. I'm honestly struggling not to be really sarcastic here, but just no. These are words that can only have been spoken by someone who has never run a business, created a marketing campaign, or even placed a single ad on social media. Unlike most people who told me to watch the movie, and certainly most of the journalists tweeting hot takes and writing think pieces praising it, I have run ad campaigns on some of these platforms. In addition to my work writing and producing videos like this one, I also oversee the Foundation for Economic Education's marketing team, and I think a lot about how to get more awareness around all our content and events. And I can tell you one thing with absolute certainty. Zuboff is wrong. It may seem like these companies have all the data in the world, but that guarantee she talked about doesn't exist. You can aggregate massive data sets and uncover some generalities about huge groups of people, but individual human behavior isn't nearly as predictable as The Social Dilemma would have you believe. Turns out you're not a mindless robot being programmed by your phone to just buy whatever gets put in front of you. You don't even click on a fraction of the stuff that comes up in your feed. Want to know the average click-through rate for a Facebook ad? In 2017, it was 0.9%. So for all the talk in this movie about data models flawlessly predicting user behavior and allowing companies to easily manipulate people with advertising, the truth is that the algorithms aren't actually that good, and people always have a choice not to click on ads that don't interest them anyway. And they don't. I don't. You don't either. We all blow by ads and sponsored posts all day long without clicking on them. And even when we do click, how much we engage with the actual content has to do with how good it is and how much it appeals to our unique preferences. 
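It's worth making that 0.9% figure concrete. A quick back-of-the-envelope calculation, where the campaign size is my own hypothetical number (only the 0.9% rate comes from the stat cited above):

```python
# Back-of-the-envelope: what a 0.9% average click-through rate actually means.
ctr = 0.009               # average Facebook ad CTR cited for 2017
impressions = 100_000     # hypothetical campaign size, chosen for illustration

expected_clicks = impressions * ctr
ignored = impressions - expected_clicks

print(f"{expected_clicks:.0f} clicks")   # 900 clicks
print(f"{ignored:.0f} people scrolled past without clicking")  # 99100
```

In other words, even a perfectly average campaign gets ignored by more than 99 out of every 100 people who see it, which is hard to square with a "guarantee" of success.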
Speaking as an occasional advertiser, or just as a content creator, even when I know what audience I'm trying to target, producing engaging media is still really hard. Just think about how you watch videos on YouTube. It just doesn't work the way The Social Dilemma says it does. "Yes, perfect. The most epic fails of the year. Perfect. That worked. Following up with another video? Beautiful. Let's squeeze in a sneaker ad before it starts." Yes, the algorithm played a major role in showing you that this video exists, but why did you choose to start watching it? The algorithm can't force you to click. And assuming you've made it this far, what's keeping you here? Are you powerless to leave? No, you're not. In fact, for most videos, a ton of viewers drop off after the first few seconds. If a majority keep watching over half of any video, that's pretty good. Everybody has a choice, and everyone exercises their choices online all the time. A few platforms, like Facebook, do offer better targeting tools to advertisers than broadcast television and local newspapers ever did, but that means nothing if a company's ad doesn't connect with its intended audience in the first place. And Facebook doesn't tell you how to do that. It doesn't tell you what titles or thumbnails are going to work best to get people to give your content a chance when it pops up in their feed. It doesn't tell you how to write a great article, create a good meme, or produce an event that's exciting enough for people to sign up to attend. It doesn't tell you how to make a video that's worth continuing to watch for one minute, five minutes, 10 minutes, an hour. It doesn't tell you how to navigate a world where saying the wrong thing can spark outrage and boycotts instead of generating support. And as hard as it is to make good ads and great content, there's another issue here too. Different creators, companies, and interest groups are constantly competing with each other for your attention. 
Even though advertising is a part of the fabric of our lives, there's usually not one idea or product that's being sold to you all day every day. This isn't North Korea where the state controls all the messages you see everywhere you go. And most importantly, for all this fear-mongering about surveillance capitalism, let's acknowledge that, for the most part, better data just gives companies better insights into what you want and helps them serve your needs more effectively. I get why people worry about privacy. On the surface, I think we all care about this stuff to some extent. Whenever I get served an ad for something I've only ever talked about and never actually searched for, it creeps me out a little bit. If this bothers you a lot, it's always worth going through your privacy settings and shutting off access to your camera and microphone and restricting what information social networks can collect on you. But just know that it also means that you're going to get served more organic content and ads that waste your time. Being able to target people in more specific ways helps the advertiser, but it helps the people who see the ads even more. And just to be clear, advertising isn't a bad thing. It's how you know what goods and services exist in the world. It gives you ideas on how to solve problems that maybe you didn't know were possible. It allows fledgling businesses to find customers and compete with more well-known, better-established companies that don't need help with promotion or name recognition. Ads can also shape your life in positive, intellectual ways. If you're struggling with addiction, an advertisement might lead you to a community of people who can become your support system. If you just need a vacation, you might learn about an affordable way to go somewhere you've always dreamed of. In the end, you need to realize that for the most part, ads aren't nefarious at all. They're just another valuable communication tool. 
But one of the things that distinguishes advertising from propaganda is freedom of speech. The fact that we're all allowed to engage with multiple perspectives and choose to support the ones that are most persuasive matters. And that brings us to the second major thing I mentioned at the beginning: misinformation. The Social Dilemma often feels like a mea culpa for the results of the 2016 election. It leans hard into the idea that social media creates bubbles that make it easier for commercial and political interests to manipulate gullible people, tricking them into believing lies and hateful ideas they'd never believe if there were better, more enlightened gatekeepers for information. And while I think that's somewhat true, the film chooses to focus on entirely the wrong thing. While everybody's busy complaining about advertising, fringe ideologies, and comically bad attempts at election hacking, they're ignoring the giant beached whale in plain sight: social media companies' own policies on censorship and free speech. In one of the only really good moments in the film, mathematician and author of Weapons of Math Destruction, Cathy O'Neil says, I like to say that algorithms are opinions embedded in code. And that algorithms are not objective. Algorithms are optimized to some definition of success. This is super important, and The Social Dilemma barely stops to acknowledge it, let alone explore it in any depth. The biggest concern I actually have with social media algorithms is that they are a mysterious black box that tends to restrict certain types of potentially sensitive content while ignoring others. And companies like Facebook, Twitter, and YouTube selectively enforce their broad terms of service agreements in ways that are inherently weighted towards the biases of those in charge. This is the real problem. Not advertisers, not the focus on providing people content they actually find engaging and entertaining, not even privacy. 
The bubbles people have created for themselves have happened partly by their own design, which is mostly unavoidable, but also because the social media companies themselves are increasingly purging different points of view from their platforms. Recently, Facebook and Twitter prevented the New York Post from sharing articles investigating political corruption involving Joe Biden and the Ukrainian government, on the stated basis that the emails contained in the article were obtained illegally. But courts have held that there's no liability for publishing illegally obtained information if that information is in the public interest and presented in a journalistic manner. And neither Facebook nor Twitter has enforced these same rules under different circumstances. For example, they never prohibited the sharing of John Podesta's emails, the Steele dossier, or countless WikiLeaks links. And in the case of the New York Post's story, whether or not it turns out to be true, the Post is 200 years old, one of the most widely distributed newspapers on the planet, and the story is undeniably in the public interest. If they got it wrong, they should take a hit to their reputation, but for the social networks to prevent everyone from sharing a link to their site is disturbing. Yet the people running our major social media companies didn't want you to see it. And this is far from the only example of this kind of behavior. We've seen a tremendous amount of censorship of dissenting views on how to handle coronavirus since March, with a lot of content presenting alternate policy suggestions or debate on the rapidly emerging science getting shut down. Some scientists' and agencies' viewpoints got preferential support, while other credible experts' perspectives got restricted. So it often seemed as if there were no dissenting opinions at all. But of course there were. 
There's always a ton of good-faith disagreement when we're faced with difficult scientific and public policy challenges, especially when there's limited information. But the range of allowable opinion was, and still is, actively limited by Facebook, Twitter, YouTube, and Instagram. For the last several years, YouTube has demonetized or blocked content about guns, war, and almost anything that's actually interesting to talk about. More recently, it's taken to deciding what's appropriate for younger audiences, but it's completely unclear what the rules actually are. The episodes we've done on V for Vendetta, Superman, and Carnival Row are restricted, and I really don't know why. I'm also almost certain that YouTube has effectively shadowbanned our episode on Mulan, because it launched with some of the best stats I've ever seen, jumping to 20,000 views almost immediately, with a super high click-through rate, high viewer retention, and a 99% like-to-dislike ratio. And then it just flatlined. And I've never had any kind of warning or strike against the channel. Going back farther in time, we've seen tons of really creepy attempts by social media companies to secretly manipulate people's feeds for science. Way back in 2012, Facebook quietly pushed hard news into people's timelines, prioritizing their favorite news sources over posts shared by friends and family. In 2014, they ran an experiment to see if they could affect people's emotions by pushing angry, negative content to some users and more positive, friendly content to others. Spoiler alert: it worked. Around the same time, Facebook A/B tested different forms of "I Voted" stickers, showing different groups more or less information about their friends' voting behavior to see if that influenced other people's choice to share posts about voting. Later, in 2016, you might recall a big story about bias in Facebook's now-defunct trending news section on the main feed. 
Back then, everyone I knew incorrectly assumed that the trending stories we saw were just the most popular ones as defined by user preferences, but that wasn't the case. Thanks to reports from Gizmodo and other tech-focused outlets, we learned that trending news stories were actually chosen by human employees and that they routinely suppressed conservative news sources. Worse still, the team that selected which articles to promote across the platform was actively training its AI replacement. Eventually, their selections were meant to teach the algorithm how to identify good news sources so that it would automatically promote their stories instead of needing employees to manually choose which got designated as trending or not. Going back to what Cathy O'Neil said, if algorithms are opinions embedded in code, what happens when the algorithms are written and trained by a group of people with almost no diversity of political or philosophical thought? I want you to seriously think about that. When these companies are dominated by one point of view and their decisions dictate what the rest of us are able to see in our feeds, and when those decisions are completely opaque and invisible to the general public, I think we should be concerned about that. And just like what happens in government when rules are unclear, complex, and give law enforcement tons of subjective discretion, the lack of transparency in the rules on social media makes it very easy for the employees of those companies to shut down speech they personally don't like. It also pushes the people who aren't allowed to express their moderate viewpoints towards more conspiracy theories and more extreme ideologies. A big part of the reason people have tighter and tighter bubbles is because the social media companies themselves don't seem to be interested in diversity of thought. That's a major problem. 
Unfortunately, The Social Dilemma is much more interested in talking about Pizzagate and advertising than it is in discussing the ideological biases of the people actually writing the code and enforcing the terms of service. It's hard for me to imagine how this is even possible, and it's maddening to watch. Again and again, The Social Dilemma gets close to making an important point about a serious issue, but ends up focusing on the wrong things and coming to the wrong conclusions. At one point, Justin Rosenstein, who's credited as the inventor of the like button, admits that his team had no concept of the broader consequences of what they were doing. When we were making the like button, our entire motivation was, can we spread positivity and love in the world? The idea that, fast forward to today, teens would be getting depressed when they don't have enough likes, or it could be leading to political polarization, was nowhere on our radar. I've talked about failures of central planning a hundred times on this series, and here is yet another example. But if the film can understand that there were unintended consequences to allowing people to like stuff on Facebook, why does no one even question the idea of giving a handful of people at these companies the power to set the boundaries of acceptable speech for the whole world? Can it really be true that nobody is capable of seeing how this state of affairs could have really serious side effects? It's not even hard to see why some people would react poorly to getting silenced by the same tech industry goons who think they should be allowed to dictate what Joe Rogan can talk about on his hundred-million-dollar podcast. And yet, these issues never come up. Creators trying to get you to care about their content, and advertisers offering to improve your life with a new product or service using social media's audience targeting tools, aren't the problem here. 
The problem is that the more these companies try to control the range of acceptable expression, the more disconnected their users actually get: from each other, from differences of opinion, and ultimately from objective reality itself. As I've said repeatedly on this series, nobody is the sole arbiter of truth, and the more these companies presume to decide who is right or wrong, the less power we all have to openly discuss and debate complex, important ideas. That's not okay with me. As far as I'm concerned, we all need to put a lot more pressure on companies like Facebook and YouTube to get out of the business of trying to police everybody's speech. We can apply this pressure both by voicing our opinions on these issues and by choosing to use competing services. Now, there's so much more I want to talk about, but instead of forcing my poor editor to try to cut together a 45-minute video in a week, I'm going to end this here and come back to The Social Dilemma in the next episode. We still need to talk about social media's effect on people's psychological health and the nonsensical set of supposed solutions proposed by this film. I'm just getting warmed up, so stay tuned for part two. Hey everybody, thanks for watching this episode of Out of Frame. The state of social media is an incredibly important subject, and I'm looking forward to talking with you all about it in the comments. Feel free to tag me down there and I'll do my best to reply as much as I can. Also, I'd like to send some love to all of our patrons, with an extra special thank you to our associate producers: Connor McGowan, Dallin Case, Jermaine, Himantana, Matt Tabor, and Victoria Manschart. Thank you. But I've also got some awesome news for everyone, brought to you in the form of a dreaded advertisement. The Foundation for Economic Education is hosting an incredible webinar series during Global Entrepreneurship Week this year. 
Starting November 16th, our Entrepreneur Week event will feature four days of amazing speakers, beginning at 5 p.m. Eastern Standard Time every day. I'm leading a session on creativity on day two that's going to include my friends Kevin Lieber of Vsauce2 on YouTube, Assassin's Creed Syndicate and Journey composer Austin Wintory, and HBO's Lovecraft Country artist Afua Richardson. But there's going to be great stuff happening all week. If you're interested in attending any or all of these webinars for free, check out the link in the description. And as always, be sure to like this video and subscribe to the channel, hit that bell icon, listen to our behind-the-scenes podcast, and follow our brand new Out of Frame accounts on Twitter and Instagram. But only if you want to. It's your choice. Remember, I'll see you in part two.