As many of you know, UVM has now consolidated its four major lecture series, the Aitken lecture, the Zeltserman lecture, the Burack lecture, and the Janus Forum, under the single heading of the Presidential Lecture Series. And each year now, we choose a theme to organize those four lecture series. As those of you who have attended other lectures this year may have guessed, this year's theme is social media, and we are very fortunate to have in Nora Draper an expert on several aspects of that. I've had the honor of chairing the committee that's tasked both with choosing the annual theme and then choosing lectures for that theme. And I just want to quickly thank all of the people who have been involved in this effort: David Jenemann from the Honors College, Chuck Schnitzlein from the Grossman School of Business, Zach Petowitz, our student intern, Helen Morgan-Permitt and Justin Morgan-Permitt from the English Department, Tom Borchert, who can't be with us today because of a faculty senate meeting, he is from the religion department and is the president of the faculty senate, as well as Jennifer Hurley, Tom Krause, and Betty Boucher. I sometimes feel like the university should be paying us for the privilege of doing this work. It's great fun getting to choose topics and trying to find people who will come and talk with us. So when Dave and Carol Burack established the Burack Lecture Series, they stipulated, and I'm going to quote here, that it bring to campus, quote, scholars, scientists, artists, and writers who are acknowledged as preeminent in their disciplines. Professor Nora Draper more than meets those criteria. She's known for her deep engagement with fraught issues surrounding digital privacy, surveillance, reputation, identity, and promotional culture. She's perhaps best known for her book The Identity Trade: Selling Privacy and Reputation Online, which was published by New York University Press.
Reviews of the book have variously described it as, quote, an astute meditation on how industry can shape cultural logics in profound ways, and, quote, essential reading for anyone interested in privacy, their vulnerability to data breaches, and the myriad other identity pitfalls that come along with online life as we know it. So please join me in welcoming this afternoon Professor Nora Draper for her talk, Privacy Resignation: How Digital Platforms Confuse, Frustrate, and Disempower Us. Welcome, Nora. Thank you very much. I am delighted to be here today, and I want to thank the Presidential Lecture Committee, particularly Professor Geffert for the invitation and Michelle Clark for handling all of the communication and arrangements for this visit. I have to say I'm particularly pleased to be following up on Professor Zeynep Tufekci's talk earlier this month. I understand that she was here to talk about what we have learned in our research on social media over the past couple of decades, and I hope that my talk today will pick up on some of the themes that I'm sure she discussed in that talk. Now, broadly, this talk is going to examine the ways that digital media technologies, and social media platforms in particular, are changing our information landscape in ways that have profound consequences for individual autonomy, social cohesion, and democracy. Now, there are lots of ways that one might approach that issue, but today I plan to talk about the role that digital advertising plays in the contemporary media environment. The title of my talk, Privacy Resignation: How Digital Platforms Confuse, Frustrate, and Disempower Us, highlights a core finding from research that my colleagues and I have done over the past decade on the ways that digital platforms erode our privacy and at the same time discourage us from taking actions that might help to protect us from increasing corporate surveillance.
But before we get to any of that, I want to begin with a question that I suspect many of us have asked at different points over the past few years: is my phone listening to me? For many of us, this idea that our devices are listening to us comes from digital advertising. The hyper-individualized nature of digital advertising has given rise to a pervasive sense that the digital devices that have become an integral part of everyday life for almost all adult Americans, and increasingly children as well, are monitoring everything that we do, including what we say. So let's consider a somewhat familiar scenario. You meet a friend at a coffee shop and they tell you about a new local restaurant that they love. The next day you log into social media, let's say into Facebook, and you see an advertisement for that new restaurant. Now, you know that you didn't search for that restaurant online and you didn't communicate about or follow that restaurant on social media. The only time that you really even thought about that restaurant was when your friend mentioned it during your coffee date. So you come to the, I think, very reasonable conclusion that your phone's microphone must have been listening to you, heard your friend mention this restaurant, and sent you the ad. With so many people reporting different versions of that story, there is now a pervasive belief that the apps on our smartphones are indeed listening to us and using the information that they collect to serve us advertisements that are hyper-targeted to our specific interests. That belief, however, seems to be based on a myth. Facebook has repeatedly stated that it does not engage microphones or record conversations for advertising purposes, and that claim has been confirmed by former Facebook employees, even those who have been very critical of the company's activities after leaving Facebook.
And if that's not enough evidence, which would be understandable, independent researchers have also run experiments to test whether apps like Facebook are listening to us through our smartphone microphones, and they too have found no evidence to support those claims. So while it is technically possible for an app to do something like this, it does not seem likely that it's happening. At the most basic level, the idea that social media apps like Facebook are listening to us through our phones is a myth. But the reality is that Facebook doesn't need to turn on our microphones. The company has so many other ways of monitoring and tracking us that it doesn't need to listen to us in the most literal sense. Let's consider some of the other information that Facebook could have used to serve us that advertisement about our friend's favorite new restaurant. The most complicated way is through location services. Let's say you and your friend are also friends on Facebook. If both you and your friend have the Facebook app downloaded on your phones and you have location services turned on, Facebook would know that you were together. And if your friend interacted with that new restaurant online, liking them on Facebook, following them on Instagram, or even searching for them on Google, Facebook would likely know that. And so the app's algorithm might assume that since you're friends on Facebook and you're hanging out together offline, you might share similar interests. And since your friend showed an interest in that new restaurant, Facebook might conclude that you are likely similarly interested and serve you that advertisement. Now, even if you have location services turned off, if you and your friend are connected on Facebook, the app's algorithm might conclude that you share similar interests and show you the restaurant's ad, and the timing might just be a coincidence. But let's say that you and your friend are not connected on Facebook.
The app might have noticed that people who share demographic and psychographic characteristics with you and your friend have shown an interest in that new restaurant. And that might be enough for the app to show you the advertisement. Or, at the most basic level, the restaurant might have simply asked Facebook to show its advertisement to people who share your demographic traits, interests, and geographic location. Now, there are some things that we can do to minimize the impact of this type of monitoring. We could turn off our location services. We could remove apps from our phones, not only social media apps but also those apps that share data with social media apps, which frankly is a lot of them. We could use the options that Facebook provides for us to limit the categories that the company considers when it sends us targeted advertising. But the fact is that all of those actions have limitations. None of them will really stop Facebook from collecting information about us or sending us targeted advertisements. We could decide to opt out of Facebook altogether, but even that has limitations. For many of us, getting rid of Facebook or other apps like Instagram is not a viable option. It's how we connect with family and friends. It's how we find out about local events. It's how we run our small businesses and community organizations. And it might even be how we find the best local deals. That's almost exclusively what my students seem to use Facebook for: local shopping. And fully extracting ourselves from the Facebook universe, or technically its parent company, Meta, would also mean getting rid of Instagram and WhatsApp. And even if we did all of that, the company would retain the data that it has already collected about us.
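The chain of inferences described above, co-location, social connection, and lookalike audiences, can be sketched as a toy scoring model. To be clear, this is not Facebook's actual code or algorithm; every signal name, weight, and data structure here is a hypothetical illustration of the logic just described.

```python
# Toy sketch of the interest-inference logic described in the talk.
# All signal names, weights, and data structures are hypothetical.

def interest_score(user, friend, restaurant):
    """Score how likely `user` is to be shown `restaurant`'s ad."""
    score = 0
    friend_engaged = restaurant["id"] in friend["engaged_with"]
    # Signal 1: co-location (via location services) with a friend who
    # interacted with the restaurant online.
    if friend["id"] in user["colocated_with"] and friend_engaged:
        score += 5
    # Signal 2: the social connection alone -- connected users are
    # assumed to share interests, even with location services off.
    if friend["id"] in user["friends"] and friend_engaged:
        score += 3
    # Signal 3: lookalike targeting -- people in the user's demographic
    # and psychographic segment have shown interest in the restaurant.
    if user["segment"] in restaurant["interested_segments"]:
        score += 2
    return score

you = {"friends": {"alex"}, "colocated_with": {"alex"}, "segment": "local-foodie"}
alex = {"id": "alex", "engaged_with": {"new-restaurant"}}
spot = {"id": "new-restaurant", "interested_segments": {"local-foodie"}}

print(interest_score(you, alex, spot))  # all three signals fire: 10
```

The point of the sketch is that the microphone never appears: each signal is ordinary behavioral or relational data, and any one of them alone can be enough to trigger the ad.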
And Facebook has embedded itself so completely around the web, through things like like and share buttons on web pages, login pages that allow you to use your Facebook credentials to log in to different sites, and the analytics that it uses for advertising, that even if you don't have a Facebook account, the social network can collect information about your behavior anyway. And this is not just Facebook. Every day, new stories come out detailing how everyday digital technologies, from QR codes to push notifications to the networked devices in our houses and cars, are eroding our privacy and making us less safe. So this is the current landscape for digital privacy: a complex network of personal data production, sharing, and use that we have limited opportunities to understand, manage, or object to. Over time, this idea that we allow companies to collect information about our online and offline behaviors and use that information to draw increasingly detailed pictures of who we are and what we like has been normalized. Digital advertisers argue that we, as consumers and users of digital services, agree to these terms. They speak about rational trade-offs, claiming that people understand the costs and benefits of disclosing personal information to companies and have largely decided that the advantages outweigh the risks. But research tells us that the idea that people have agreed to trade information about themselves for access to free content and services is a fallacy. Through years of public opinion research, my colleagues and I have found that most Americans do not agree to this supposed bargain, the deal that has us offering up access to reams of personal and behavioral information in exchange for access to content and services. In the most recent of these studies, published just last year, we asked respondents if they agreed with various trade-off scenarios that reflect everyday data collection practices.
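The off-site tracking mechanism mentioned above, embedded buttons and analytics collecting data even on non-users, can be sketched in a few lines. This is a simplified simulation, not real platform code; the URLs, cookie value, and field names are all hypothetical.

```python
# A minimal sketch of off-site tracking via embedded widgets: a page
# embeds a platform's like/share button or analytics script, so loading
# the page fires a request to the platform's servers carrying the page
# URL and a browser cookie. The platform can then link visits across
# sites, account or no account. All values here are hypothetical.

platform_log = []  # stands in for the platform's server-side log

def load_page_with_widget(page_url, browser_cookie):
    """Simulate a browser loading a page that embeds the widget."""
    # The widget is served from the platform's domain, so the browser
    # sends the platform's cookie along with the embedding page's URL.
    platform_log.append({"cookie": browser_cookie, "page": page_url})

load_page_with_widget("https://example-news.test/story", "anon-7f3a")
load_page_with_widget("https://example-shop.test/cart", "anon-7f3a")

# Same cookie on both requests: the two visits are linkable into one
# browsing profile, even if this browser has no platform account.
pages = [entry["page"] for entry in platform_log if entry["cookie"] == "anon-7f3a"]
print(pages)
```

The design choice worth noticing is that the tracked person takes no action at all beyond visiting an ordinary web page; the disclosure is a side effect of how embedded third-party content works.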
And we found that in almost all cases, the majority of Americans, and sometimes a large majority, reject these trade-offs in principle. Strikingly, 88% of respondents disagreed with the statement, if a company gives me a discount, it is a fair exchange for them to collect information about me without my knowing it. Now, even while the majority of Americans don't accept the idea of trade-offs, a large proportion still say that they would be willing to give up their data in exchange for some benefit when faced with the option to do so. For example, in this study, when we presented survey participants with a scenario that offered them discounts for providing a supermarket they frequent with personal information, about half, 47%, said that they would agree to that deal. But less than half of those who were willing to trade their data for discounts also said that it was okay for a store where they shop to create a picture of them in return for those benefits. So what's going on here? Why would people agree to give up their data even when they do not believe that this is a fair trade? Marketers have tended to explain this apparent inconsistency in terms of the privacy paradox: the idea that people say they care about privacy in principle, but when it comes to real-world behaviors they act in ways that undermine those claims. In other words, people like the idea of privacy, but in their real lives they're routinely willing to forego privacy in exchange for various benefits. Our research, however, shows something different. In an environment that is saturated by digital surveillance, people submit to being watched not because they agree with the practice or because they've done some sort of cost-benefit analysis and decided the benefits outweigh the risks, but because they are resigned. My colleagues and I first developed this idea of digital resignation almost a decade ago, in a 2015 report.
In that study, we asked people whether they agreed or disagreed with a series of statements, which we sprinkled throughout the survey. Among those statements were the following two: I want to have control over what marketers can learn about me online, and, I've come to accept that I have little control over what marketers can learn about me online. When participants agreed with both of those statements, we categorized them as resigned, a condition that occurs when a person believes an undesirable outcome is inevitable but feels powerless to stop it. In our 2015 study, 58% of respondents agreed with both of those statements. We also found that many of those people who were resigned tended to reject trade-offs in theory but were just as likely as those who approved of trade-offs to take the deal when confronted with the choice to subscribe to a corporate loyalty program. When we repeated these questions in a 2018 survey, we found that 63% of respondents met our definition of resigned, and in the 2023 study that I mentioned earlier, we found that 74% of respondents could be categorized as resigned. Now, based on this work, we define digital resignation as the condition produced when people desire to control the information digital entities have about them but feel unable to do so. And we argue that digital resignation can be seen as a rational response to situations in which an individual feels powerless to control the production, spread, and collection of their digital data. Those who are resigned may feel that they lack sufficient influence or capabilities to achieve their desired outcomes. And because people feel frustrated by the futility of their actions, they're more likely to engage in inaction or sporadic action than in sustained efforts to protect themselves from the data collection that results in targeted advertising. Now, this feeling of resignation is not unique to the United States.
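The coding rule just described, agreement with both statements classifies a respondent as resigned, is simple enough to express directly. The dictionary keys below are hypothetical shorthand for the two survey items, and the sample respondents are invented for illustration.

```python
# A sketch of the survey-coding rule from the talk: a respondent is
# classified as resigned when they agree with BOTH statements --
# wanting control over what marketers learn about them, and accepting
# that they have little such control. Keys and sample data are
# hypothetical illustrations.

def is_resigned(respondent):
    return respondent["wants_control"] and respondent["accepts_little_control"]

sample = [
    {"wants_control": True,  "accepts_little_control": True},   # resigned
    {"wants_control": True,  "accepts_little_control": False},  # still hopeful
    {"wants_control": False, "accepts_little_control": True},   # indifferent
    {"wants_control": True,  "accepts_little_control": True},   # resigned
]

share = sum(1 for r in sample if is_resigned(r)) / len(sample)
print(f"{share:.0%} resigned")  # prints: 50% resigned
```

Applied to the real survey data, this rule classified 58% of respondents as resigned in 2015, 63% in 2018, and 74% in 2023.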
Colleagues in Europe and in East Asia have found similar trends, which they've called privacy cynicism, futility, or fatigue. And it's not great for individuals. Although it can act as a coping mechanism in situations where control feels, and often is, elusive, it is not empowering and does not lend itself to the type of autonomy that most of us would prefer. And digital resignation can be very beneficial for companies. Notably, people who might otherwise object to the exchange of information for services may feel that they have few alternatives but to engage. And they may even feel that, within a system where their own power is so compromised, the most empowering thing that they can do is to take advantage of any available benefits, in this case, allowing for the collection and processing of their information that provides them with tailored content, deals, and access. Now, intentional or not, there are several routine corporate practices that encourage this sense of digital resignation. The first that I want to talk about is a practice referred to as dark patterns. Dark patterns are design choices that manipulate or heavily influence users into making choices that align with the designer's interests but not necessarily with their own. Some of these will be very familiar to you. For example, those advertisements that pop up when you visit a web page or an app, with the X boxes to cancel out of them that are either shaded in such a way that you can't see them, so you spend a bunch of time looking for them, and that counts as advertising engagement, or so small that when you go to click on them you accidentally click on the ad and are taken to the advertiser's page, again making money for the advertiser. Or the company that allows you to sign up for a subscription online but then requires that you call or email them to cancel that subscription. So those are examples of dark patterns.
When it comes to privacy, websites use dark patterns to mislead users into granting consent to be tracked or allowing the company to use their data in ways that they don't expect or want. Many websites and apps point out that they do provide users ways to opt out of being tracked, but their use of misleading language or design choices often makes it difficult to figure out exactly how to do so. Unsurprisingly, companies whose revenue relies heavily on user data don't want to make it particularly easy for users to refuse to provide that information. As a straightforward and probably very familiar example, many websites now have cookie consent pop-ups that are consistent with privacy regulations in Europe and some U.S. states. Websites will tell you that their site uses cookies, small files that allow the website to keep track of your activities on that site but also as you move about the web. They ask you to accept, usually by clicking on a big, prominent, brightly colored button, but if you don't want to accept, if you want to refuse those cookies, there's rarely a button that gives you that option. Instead, there'll be a hyperlink, again shaded in a way that makes it difficult to see, that might say something like learn more or options, and if you click on that link, you're directed to a page like this where you have to sort through a menu of different settings and disable them manually. And even if you are a person who understands exactly what each of these types of cookies does, the difference between analytics and preferences and unclassified cookies, most of us don't really have the time or the desire to do this for every single website or app that we visit. Now, dark patterns have come under scrutiny in recent years. Both the Biden administration and the Federal Trade Commission have pointed out the problems with dark patterns that confuse and frustrate people into allowing companies to track their online activities.
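The asymmetry in the cookie-consent flow described above, one bright button to accept everything versus a buried, multi-step path to refuse, can be made concrete with a small sketch. The category names are typical of such banners, not any specific site's configuration, and the action counts are illustrative.

```python
# A sketch of the consent-flow asymmetry in cookie banners: accepting
# all cookies takes one prominent click, while refusing requires
# finding a low-contrast link and disabling optional categories one by
# one. Category names are typical examples, not a real site's config.

CATEGORIES = ["necessary", "preferences", "analytics", "marketing", "unclassified"]

def accept_all():
    """The big bright button: everything enabled in one action."""
    settings = {c: True for c in CATEGORIES}
    return settings, 1  # (resulting settings, user actions required)

def refuse_optional():
    """The buried path: find and open the settings link, then disable
    each optional category manually ('necessary' usually can't be
    turned off at all)."""
    settings = {c: (c == "necessary") for c in CATEGORIES}
    actions = 1 + (len(CATEGORIES) - 1)  # open settings + one toggle each
    return settings, actions

print(accept_all()[1], "action to accept")      # 1 action to accept
print(refuse_optional()[1], "actions to refuse")  # 5 actions to refuse
```

Multiply that per-site cost across every site visited in a day and the rational response, just clicking accept, follows almost automatically, which is precisely the point of the design.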
The other tactic that I want to talk about that companies use to encourage a sense of digital resignation is seductive surveillance. This term was coined by a Greek scholar named Penelope Triliano. She describes seductive surveillance as the ways that companies encourage users to submit to invasive surveillance practices by making that surveillance a friendly and enjoyable experience. In some ways, this is the inverse of dark patterns. Where dark patterns serve to frustrate users, seductive surveillance occurs alongside the kind of pleasurable experience that we have when we are known, seen, and understood. So even when we might be skeptical or uneasy about the idea of corporate surveillance, those feelings can be somewhat mitigated by the special relationship that we feel with our digital devices and services. And I put the TikTok logo up here because I think TikTok is an app that has perfected this feeling of seductive surveillance, through an algorithm that is so closely attuned to the behaviors of its users that researchers have found that people frequently talk about the platform as knowing them better than they know themselves. My students talk about having this experience of the app anticipating things that they didn't even know they were interested in, or aspects of their personality or identity that they weren't even aware of. And I think there is a sincere pleasure in this sense of being recognized or seen. But that can also obscure the ways that the data that allow for personalized videos or targeted friend recommendations can be used for more insidious purposes as well. Now, I would suggest that both dark patterns and seductive surveillance are necessary for digital resignation to occur.
If dark patterns cause users to become frustrated or even angry, seductive surveillance pacifies them into a feeling that, even if little can be done to resist these systems, at least there are some benefits to be had. One of the things that dark patterns and seductive surveillance reveal is the problem with the current framework for privacy protection in the United States. Since the very early days of the Internet, regulators have relied on a combination of industry self-regulation and individual self-management to deal with privacy online. What this has meant is that websites post documents with misleading titles like privacy policy that detail the information they collect about users, and then individual consumers are tasked with understanding those policies and making decisions about whether to use a website. This regime is known as notice and consent. The American model differs here from the European approach, in which consent must be explicit: a person must actually opt in to having their data collected and used. In contrast, while Americans generally have the opportunity to opt out of data collection, in most cases consent can be implicit, which means that simply by visiting a website or an app, a person is consenting to its data collection practices. Over the decades, many scholars specializing in legal and philosophical aspects of technology have argued that the notice and consent regime puts too much responsibility for privacy protection on the individual. They have worried that notice and consent frameworks don't provide people with the transparency and control over commercial data about them that regulators had hoped for. Notice and consent also fundamentally treats privacy as an individual problem, when privacy is actually much more of a social issue. One's own privacy depends on the privacy of others, which is not something that this notice and consent framework can allow for.
So the argument that I ultimately want to make today is that the current approach in the United States to information privacy, one that is based on individual action and operationalized through a notice and consent model, is insufficient to handle the complexities of the digital environment. And more specifically, what I want to argue is that for the vast majority of Americans, meaningful consent is not possible in the current digital environment. If we look at frameworks for consent that emerge out of clinical medical research, we can extrapolate that two things would be necessary for an individual to be able to make an informed decision and meaningfully consent to the collection and use of their personal data. The first is that the person would have to understand corporate practices and policies related to the data that companies collect about them, so knowledge. The second is that the person must believe that technology companies will give them the independence to decide whether and when to give up their data. If either or both of those elements is missing, it indicates that people's consent to companies' data collection is involuntary, not free, and ultimately illegitimate. Now, our research, which I'm going to return to here for a second, finds that for the vast majority of Americans, neither of those conditions is met. The first part of that consent model, as I said, is knowledge. And our research reveals that large numbers of Americans do not know important facts about the online world that are required to help them navigate the digital landscape effectively. We found this issue with knowledge across all of those surveys dating back to 2015. But in the 2023 survey that we conducted, we found that less than half of the adult population, only 44%, understands that the phrase privacy policy does not indicate that a site won't share a person's information with other sites.
As I've said, the presence of a privacy policy is the site telling you how information is going to be used and under what conditions the site will share or sell your information. But most Americans think that the fact that there's a privacy policy on a website means that that website is going to safeguard the user's privacy, which is not the case. We also found that fewer than one in five Americans know that the federal Health Insurance Portability and Accountability Act, or HIPAA, does not prevent apps that provide information about health, including dieting apps and period trackers, from selling data collected about app users to marketers and other third parties, including insurance providers. And just to circle back to what I was talking about at the beginning, almost half of respondents believe, incorrectly, that social media platforms activate users' smartphones to listen to conversations and identify their interests in order to sell them ads. But what I think is even more telling about these knowledge questions is that people weren't just getting the answers wrong. When we asked respondents whether a statement was true or false, we also gave them the option to say they didn't know, and very large percentages of people were simply saying, I don't know the answer to that question. We think that really reflects the overall confusion that many people have about how the digital environment works. So that's knowledge. The second element of meaningful consent is that a person must believe that technology companies will give them the independence to decide whether and when to give up their data. Our research showed that almost all Americans want to have control over what marketers know about them, but feel that it would be naive to believe that they could do so.
Nearly three in four Americans say they don't have time to keep up with the ways to control what companies can learn about them online. And nearly 60% agreed with the statement, I do not understand how digital marketers learn about me. In addition to finding that Americans are frustrated and confused about managing their digital data, we found low levels of trust in online companies. Only 28% of Americans agree that they trust companies they visit online to handle their data in ways that the individual would want. And an even lower number, only 14%, agree with the statement, companies can be trusted to use my personal data with my best interest in mind. So all of this paints a picture of a citizenry that is mistrustful of corporations, confused about how to protect themselves, and resigned to the misuse of their personal information. Now, interestingly, one of the things that we found in this study was that the vast majority of Americans want Congress to act on this issue. We asked how urgent it is for Congress to regulate how digital companies use personal information, and fully 79% of Americans said that it was urgent, with 53% saying that it was very urgent and only 6% saying that it was not at all urgent. So this is a rare bipartisan issue. If I were in Congress, I might jump on the opportunity. Americans seem to understand that they have no real ability to meaningfully avoid marketers' data gathering. And in addition to showing that large percentages of Americans know little about key data practices and policies, the research that my colleagues and I have done shows that Americans acknowledge that they know little, deeply mistrust companies to help them, are resigned to the reality that firms will take and use data about them without their permission, and believe that firms doing so can actually harm them. It's not surprising, then, that Americans see federal government help as necessary now.
So based on these findings and their relation to long-running scholarly discussions of this issue, my colleagues and I have argued that consent, whether opt-in or opt-out, should no longer be allowed to trigger data collection. In fact, we suggest that any response to this issue that relies on individuals to make decisions and take actions to protect themselves against unwanted data collection is going to fail to adequately protect consumers in the digital environment. The landscape is simply too complex and our lives are too busy for people to gather the necessary information to make the type of informed judgments that such approaches require. Instead, what we recommend is a ban on information-driven targeted advertising and the sale of data about individuals for marketing use. Companies could still use contextual advertising, that is, advertising that appears alongside relevant or related content, but we suggest that personalized targeted advertising should no longer be part of our digital lives. Now, before I conclude, I want to connect this back to a point that I made at the top: that these issues have profound consequences for individual autonomy, social cohesion, and democracy. One might listen to everything that I've said here and conclude that commercial surveillance of the sort that I have been discussing really isn't a big deal. In fact, one might note that they enjoy getting relevant content in the form of news, advertisements, videos, discounts. Perhaps they are a victim of seductive surveillance. But it's important to note that the same technical systems that are used to determine whether we like to buy scented or unscented laundry detergent, or to predict our interest in a new local restaurant, are not limited to pushing advertisements for consumer products.
The same systems that allow Facebook to send ads for vacation packages to people it predicts might be interested in a sunny getaway may also be used to target housing and employment ads on the basis of gender and race, as happened as recently as 2019. These same systems might be used to identify users who have shown an interest in conspiracy theories and send them targeted disinformation messages about political leaders, as happened in the lead-up to the 2016 and 2020 elections. These same systems may be used to identify members of a particular identity group, such as Black Americans, and send them disinformation created specifically to play on legitimate fears about the history of medical racism as a way to encourage vaccine hesitancy, as happened during the height of the COVID-19 pandemic. Or they might be used to identify and target Spanish-speaking Americans with information about the historic misuses of census data by the government to discourage participation in the U.S. census, as happened in the lead-up to the 2020 census. Across all of these examples, we see the ways that digital data are used to analyze and categorize us, to predict our interests and fears, and to send us targeted content in ways that limit our autonomy and disrupt opportunities for self-determination. These largely invisible practices craft our media environments in ways that shape and reinforce a particular view of the world. In rethinking how we respond to commercial surveillance, we are also considering how we maintain and repair our democratic systems. Thank you again for the opportunity to share this work with you, and I look forward to your questions and comments. Can you just clarify what you said at the end? I didn't quite catch the distinction between what you think is legitimate use of data versus what you think should be banned or prohibited. Yeah, so the distinction is between targeted or tailored advertising and contextual advertising.
So contextual advertising is the idea that advertisements could appear. Let's say you are, let's say you're L.L. Bean, okay, given where we are, Patagonia, even better. Patagonia, L.L. Bean is a New Hampshire thing. Let's go with Patagonia. And you want to advertise outdoor clothing, right? So contextual advertising means that you might put an advertisement on the site of a popular hiking blog or show an advertisement on the AllTrails app, right? Knowing that people who are going to look at a hiking blog or download the AllTrails app are probably going to be interested in outdoor gear, right? You wouldn't necessarily know anything about the person who was going to that site except that they have, or are likely to have, an interest in your products. Targeted and tailored advertising means that what the advertising system would be doing is noting, let's say, that you looked up a Patagonia shirt on the Patagonia site, and then they would target advertisements at you regardless of where you went around the web. So if your next stop was ESPN.com, you could get the Patagonia advertisement there. If your next stop after that was the New York Times, you could get the Patagonia advertisement there. So targeted and tailored advertising, especially the kind of hyper-personalized advertising that the web allows for, is really about reaching the person no matter where they are. Whereas contextual advertising is about reaching an audience that is likely to be interested in the products and services that you're offering. Do you believe social media companies would survive that legislation, and would web commerce survive that legislation? Yeah, so there are a few different kinds of moving parts here. I guess, to share my own bias, I'm less concerned about whether social media companies would survive and more concerned about whether things like traditional journalism would survive.
And the New York Times actually has done a version of this in Europe, I think probably anticipating some of the more restrictive privacy legislation that exists in Europe. The New York Times moved back to contextual advertising, and early reports on that suggested that they did fine, that the contextual advertising on their site was just as lucrative as the more tailored and targeted advertisements. Now, the New York Times is a fairly established, I mean, certainly there are concerns about traditional journalism and its ability to survive in the current information ecosystem, but nevertheless, the New York Times is a fairly robust news organization. I think there is a question about whether or not what I'm suggesting would have a negative impact on smaller businesses and smaller publications. And I think one of the things that advertisers have really taught us to believe over the past several years is that through the kind of generosity of advertising we get the free and open web that we have today, right? The reason that you don't have to whip out your credit card every time you get to a new website is because of advertising support, advertiser-supported content. And I think that one of the things that that argument does is obscure the ways in which this advertiser-supported ecosystem has undermined things like traditional journalism. So as an example, what tailored and targeted advertising has allowed companies like Facebook to do is to really divorce content from the publications or publishers where that content is housed. So it doesn't really matter to, again, let's take Patagonia. It doesn't really matter to Patagonia if that advertisement is reaching you on the New York Times, or it's reaching you on someone's hiking blog, or frankly it's reaching you on a kind of extremist blog.
Patagonia in particular might actually care about that a little bit, but generally speaking, advertisers have been relatively agnostic about how their advertisement reaches you as long as it is reaching you. And what that's done is undermine a lot of the traditional news that we have. The New York Times and the Wall Street Journal and the Washington Post can no longer say, we have really engaged readers who are going to read stories in depth and pay attention to what we're doing, and you can show your advertisements alongside those stories, because now the advertisers can reach you no matter where you go. And so there's been this flattening of content that has been allowed by this advertiser ecosystem, the kind of hyper-targeted and hyper-tailored ecosystem that has taken place. So while it doesn't directly answer your question, I think the pushback to this idea that we have a free and open internet because of the kind of generosity of advertisers, or the ecosystem that those advertisers have created, is an argument that can obscure some of the less positive things that have happened as a result of that ecosystem. Do ad blockers help as far as protecting some of your privacy, at least a little bit? Because there are some sites that require you to disable them in order to be able to appreciate the content. Or do ad blockers just help you not to be so annoyed by ads? Yeah, so I think it's a great question, and I think it really depends on what the function of the ad blocker is. I think sometimes ad blockers will not show you the ads, but they don't necessarily interrupt the underlying collection and exchange of information. So in that way they can maybe act as a little bit of a band-aid. And I also think, again, it speaks to this kind of individual versus collective issue.
Using ad blockers can make individuals feel less annoyed and may even protect them from some of the data-gathering practices that are going on, but it doesn't necessarily solve the broader social issue, which has to do with the ways that people are being categorized and analyzed and targeted based on their propensity to engage in particular types of behavior. It's a really good question. I mean, one of the things that's really interesting is I did a little bit of research looking at the discourse around ad blockers, and ad blockers use the idea of control, right? So this idea that you can control your information environment, you control the way that your data is collected. And in the pushback against ad blockers by advertisers, the advertisers also use the language of control, and they talk about the ways that people should be able to understand and control their information environment by knowing about the products and services that are related to them. So on both sides of the discourse, there's this idea that individuals should be in control, which is something that my colleagues and I have really tried to push against, that notion of individual control, because it doesn't really solve the more social problem. Thank you for that question. I'm interested in hearing your take on the current TikTok controversy, where the government is saying we should force a sale of it and others are saying, well, all these platforms have similar issues. Yeah, thank you for that question. So I'm not a legal scholar, so I'm going to wade into this a little bit carefully, but I will give you my take. So I think on the one hand, there are rules in the United States about media ownership, right, and who can own media companies. And as I understand it, there are citizenship or at least residency requirements when it comes to ownership of media platforms and publishers. And one of the interesting questions there is, is TikTok a publisher?
Is Facebook a publisher? Is YouTube a publisher? It's an open question, in part because these social media platforms seem to want it both ways. On the one hand, they want to be publishers. They've recently argued this in their case in front of the Supreme Court pushing back against the Texas and Florida laws about content moderation. Texas and Florida have passed laws saying that social media companies cannot moderate content when that content is political speech, very broadly defined. And the companies have said, of course we can, because we're publishers, right? We can moderate that content in the same way that the New York Times can moderate content, or that the Washington Post can decide what gets printed in the pages of the newspaper. On the other hand, last year, in a case called, I think, Gonzalez v. Google, which was also argued in front of the Supreme Court, Google, in this case with respect to YouTube, wanted to claim that it wasn't a publisher, because that case was about the role that the algorithm was playing in suggesting extremist content. And so in that case, they said, no, no, no, we're not a publisher like the New York Times. Our algorithm is not purposefully pushing content. That's outside of the purview of what we're talking about. So the question of whether or not these platforms are publishers, I don't think is settled at this point. So on that side, I'm not really sure how to answer the question. But on the other side, which you alluded to, I do think that the focus on TikTok is a bit of a red herring. There are concerns about foreign ownership, in this case, Chinese ownership of TikTok, and questions about how information and data might be used, but also concerns about how a platform like that might be used to promote propaganda.
And there are national security questions here about whether or not an app like TikTok could be used to encourage particular messages and suppress other messages in ways that might have meaningful consequences for democracy or national security. And while I understand those concerns, I also have those concerns about Facebook and about Twitter and about Instagram, all platforms that have not shown themselves to be able to rise to the challenge of misinformation and disinformation and those types of issues. So I think the focus on TikTok probably is obscuring what's happening across a lot of platforms. And I would be disappointed to see something happen with TikTok, a ban on TikTok or something like that, in place of more robust privacy protections at the federal level that would impact all of those platforms. Thanks. So I have a crystal ball question about federal legislation. I know we've been close a few times to passing it. Our lack of a federal regulation has forced states to do their own. And in fact, Vermont just passed ours on Friday. Do you have any idea, do you think we'll be close? Do you think it's something that will eventually happen and will cover things like TikTok, as you just mentioned? Yeah, it's a great question. And I certainly don't know the answer, except to say that this is not the first time that we have been close to passing this type of regulation. There was actually a moment in the early 2000s, right before September 11th, where there were several bills that had been put forward to regulate how data could be produced and collected online. And in the wake of September 11th, almost all of those bills disappeared, because there was this idea that national security trumps the desire or the need for privacy.
I mean, I also feel like we were pretty close in 2016, around the time that Cambridge Analytica was happening and there was this concern about how data could be used in the context of democratic elections and those types of things. So I don't know. I think AI lends itself to a little bit more urgency that regulators and policymakers feel around this issue. But the one thing I will say is that I think there is some concern, particularly among states that have passed their own privacy legislation, that any sort of federal law that might supersede what the states have done could, because of the amount of lobbying that takes place at the federal level on behalf of these platforms, actually be weaker than a lot of the state laws, in particular California's and Illinois's. I don't know the details of the Vermont law, but I'm excited to look into that. If something gets passed at the federal level that is weaker than those state laws, that might actually, in some cases, be kind of a step back. So again, it doesn't fully answer your question, except to say that this is not the only time we've been close. Yeah, but I'm hopeful. Hi, thank you so much for being here. I had a question about the difference between generations, and whether your research showed a difference in resignation between some of the older generations and our current Gen Z, as we're sitting here on a college campus. Yeah, it's a good question. I don't think it did show a difference. I'm trying to think if we looked across demographic variables, and nothing popped out. So I think the answer to that question is no, although I would have to go back and check the data to be sure. I mean, one of the things that our data...
we didn't look at that in any of the studies I talked about today, but in earlier studies, one of the things that we found is that young people, and I will say at the time we were doing this, young people meant millennials, so this is a little bit older research. But there was this discourse around, maybe 10 or 15 years ago, that young people don't care about privacy. The research that we did 10 or 12 years ago looking at this found that that was not true. Younger people often talk about privacy in different ways. So for example, young people are often more concerned about social or interpersonal privacy, how they can manage information that gets shared with their parents or gets shared with their friends or gets shared with employers. Whereas older people are oftentimes more worried about privacy in terms of information that gets shared with governments and companies and those types of things. But there is not the kind of dramatic distinction between young and older people caring about privacy that I think we used to hear a lot about. And the fact that we don't hear a lot about that anymore maybe means that some of the research is getting through, which would make me happy. But yeah, I think it's a good question. I don't think we've seen dramatic differences by age. Yeah, but I'm going to look into that. Thank you for being here. You mentioned the effects of this resignation condition on political and cultural behavior. Looking beyond that, do you find in your research on privacy resignation that the same fatigue can also lead to a higher susceptibility to cyber crime? Oh, that's an interesting question. It's not something that we've looked into. With cyber crime, are you thinking there about scams and those types of things? Fraud. Fraud, yeah. Yeah, I mean, I don't know that digital resignation...
I don't have any information about the role that digital resignation would play in that, but I will say that one of the things that I was just reading about recently is that, as I talked about at the end, there are all of these uses of the same infrastructure, the same digital infrastructure that allows for targeted advertising, right? So I talked about targeted disinformation and targeted political content and those types of things. We are seeing a rise in targeted advertising that is spoof advertising, that looks like real advertising, right? Advertising that you might expect, right? So, to keep picking on Patagonia, and I think Patagonia is a wonderful company, by the way. So, you know, let's say that you go and look at this Patagonia sweater and then you go on Instagram and you see an advertisement on Instagram for a great deal on this Patagonia sweater, like an unbelievably great deal, and you think, well, that could happen, right? Like, you know, Instagram knows that I was looking at Patagonia, cookies, whatever, right? The advertisement has followed me here. But in some cases, those advertisements are not legitimate, and what happens is you click on the ad and, on your mobile phone, it looks like you're being taken to a legitimate e-commerce site, but it's not, it's a spoof site. So I don't know the extent to which digital resignation plays into this kind of cyber crime, but I do think that those same digital infrastructures that have been created to support this kind of targeted advertising are being used to support that type of more nefarious activity. It's a good question. I'll try to make this fast. I want to get back to this idea of the free and open internet, which was never free, and Facebook had it from the beginning. So I appreciate the idea of regulation. I think it's a wonderful idea.
The reality is there's a cost to running the internet. If targeted advertising stops being the way people make their money, and if we ignore the fact that some people make too much money off of this and shareholders probably make too much, but they're not going to let go of their earnings, what's going to replace targeted advertising? What's the right thing that's going to make the internet still kind of free and open, so that it's not just for the wealthy but for everybody? What replaces that, so that the internet can actually support itself? Yeah, it's a good question. And I do think that contextual advertising has a role to play here. I have colleagues who suggest getting rid of advertising altogether. I do not go so far as to suggest quite that radical a solution. But I do think, you know, people would still pay for contextual advertising. I didn't get into a lot of the real-time bidding infrastructure that goes into how you see a particular ad and why you might see that ad, but you could still have a sort of system that says, here's a blog that's interested in outdoors things, or here's a blog that's interested in sports, and match advertisers that are interested in selling related products with those sites. And that would still generate revenue. It just might not be quite as much revenue as targeted advertising. But again, the New York Times example suggests that contextual advertising might be as lucrative as targeted advertising, though I'm not sure we know that. Some people have talked about various kinds of taxes on companies that use this kind of targeted advertising, such that there would be a kind of data dividend that would need to be paid to, for example, journalists, or not journalists directly, but to journalistic enterprises.
That doesn't, again, necessarily solve the problem of targeted advertising, but it is a way of transferring some of the money from a Facebook to a New York Times or a Washington Post or something like that. And I think, you know, ultimately what this comes down to is, do we think about the Internet as a public utility? Is the Internet a public utility? And again, this is a question that is in some ways in front of the Supreme Court. I don't think it's a question the Supreme Court is going to choose to answer. But it is a question in that case that I was talking about with respect to the Texas and Florida laws. One of the things that is at issue there is, is the Internet like the telephone company? Do we have a right to be able to use this service? And do we have a right to be able to send whatever we want and say whatever we want on this service? So I think one of the open questions is how do we conceptualize the Internet? Do we conceptualize it as a public utility, in which case there would be some sort of cost sharing, right? Government and private, much like AT&T or something like that. Or do we think of it as a private enterprise, which, as you correctly noted, is largely what we have now. And I don't know which direction we are going to go with that. I appreciate the question. Thank you very much. So I was wondering about your thoughts on patterns that you've seen between crime and some of this other targeted advertising. Are there particular patterns that you see? Is the usage similar in those areas, or different? I'm not sure I have a more specific answer than I gave before about the kind of scams. But I will say, I'm not sure it rises to the level of crime, but one of the things that really worries me is the ways that these systems can be used for the targeting of disinformation.
And again, I'm not sure that it's a crime in the sense that we might generally conceptualize crime, but I do worry about the ways that a system that is designed to identify people's interests, identify people's fears, identify people's identities, and those types of things can be exploited. And I have not necessarily seen that be used for, like, big-C crime, but I think that there are nefarious activities that can make use of those systems. With that, I think we'll bring an end to it. Thank you so much, Nora. Thank you all very much.