So thank you all for coming to Public Knowledge's webinar: how do we move beyond consent models and privacy legislation? We are so happy to have all of you here today. And as a first for Public Knowledge, we will be having Senator Sherrod Brown do a short keynote about this very topic. Senator Brown is a lifelong Ohioan and has spent his career fighting for the dignity of work, the idea that hard work should pay off for everyone no matter who you are, where you live, or what kind of work you do. He has held nearly 500 roundtables across Ohio because he believes the best ideas don't come out of Washington; they come from conversations with Ohioans. Senator Brown serves as ranking member on the Senate Banking, Housing and Urban Affairs Committee. He also serves on the Finance Committee, the Agriculture Committee, and is the longest-serving Ohioan on the Veterans Affairs Committee. He has been a leader in Congress on rethinking privacy rights for the digital age, and recently put out a groundbreaking discussion draft in June. Sherrod was born and raised in Mansfield, Ohio, where he earned his Eagle Scout Award and spent summers working on his family's farm. He is married to author and Pulitzer Prize-winning columnist Connie Schultz. They live in Cleveland, Ohio, with their rescue dogs, Franklin and Walter, drive Jeeps made by union workers in Toledo, and have three daughters, a son, a daughter-in-law, three sons-in-law, and seven grandchildren. Welcome, Senator Brown. Thank you for being with us today. Thanks, Mr. Cobb. Thanks, Ms. Collins. And one other grandchild on the way, I might add, but that's not in the bio; only recently disclosed and announced. But thank you. Thanks for having me today, and I mean it will be a short keynote, as you suggest, and I'm so appreciative of the work that you're all doing. It's usually a safe bet when I'm talking to a group about privacy that no one in the audience has read all the privacy agreements they've signed. This might actually be one group I'll talk to where that might not be true. So anyway, thank you for stepping up and doing what you're doing. Let me ask it a little differently. How many of you have ever read a privacy policy or terms of service agreement, didn't like some of it, but accepted it anyway because you just needed access to Zoom or Gmail or some other critical internet service? I can see the nods, and I know what the answer to that is, because it would be everyone's answer. You would click accept because you didn't feel like you had any other choice, and that's really the fundamental issue we face. You're privacy scholars; you know more about this than anybody. Many of you are probably pretty high-powered lawyers. You still sign these contracts. And think about what it looks like for a regular family, a student who needs an app to finish an assignment, or an older senator in Cleveland who needs to get on Zoom to see his grandchildren. So we know how all that works. Like so much in our economy, it comes back to power. We can't really call it consent; I guess that's the word we use when huge corporations use their power to coerce us into signing away our basic rights like privacy. So thank you for being the leaders in our attempts to fight back in so many ways. I don't need to tell you that the entire consent and disclosure model that's built around privacy is not really about protecting us. It's about protecting those corporations.
That's what has to change, and change soon. Because it's not just the companies that are powerful; so are the technologies they are developing. Sometimes these big tech companies aren't even aware of all the ways these tools can be used until they've released them into the wild, shall we say. One example is a fitness app called Strava. It keeps track of your workouts: running, riding your bike, things like that. A couple of years ago, Strava used people's anonymous data to put together a bunch of maps from all over the world showing the most popular running routes, and released the maps to the public. Sounds fairly harmless, maybe. Turns out that a lot of American soldiers use Strava, and a lot of them run laps around the perimeter of their military bases. So when Strava released supposedly safe, anonymous data, they also accidentally released near-perfect outlines of several secret US military installations abroad. Maybe that's an extreme example, but that's the kind of thing. I mean, look at the havoc that Facebook's unleashed in our elections and our news ecosystem. Companies also don't always control how the data they collect can eventually be used. Often the information you give up to online stores and social media sites can be harvested by the government too. There's no checkbox to opt out of that. Recently a company called Clearview AI trained its facial recognition technology by scouring the internet for pictures uploaded to social media sites, and then sold that technology to law enforcement around the country. We can see where that leads. Facial recognition has repeatedly been used by police in Minneapolis and other places to target Black Lives Matter protesters exercising First Amendment rights. The other examples can be even scarier. Then there are the ways corporations have let people's data be abused. Look at the list of data breaches just from the last 30 days. One million accounts hacked at an education company called OneClass. Two billion people had their information breached at a company that tracks internet activity. Thousands of accounts were compromised at a banking website called Dave.com. Nobody consents to being hacked. Nobody consents to having their identity stolen. Of course, hacking is already illegal. So the issue really isn't how the company intends to use the data; it's that it was collected in the first place. Rather than wait around while this technology changes our world, we need to lay out a vision for the kind of society we want to live in, before Mark Zuckerberg and other CEOs that no one elected make that decision for us. That's why I put together a different approach. We want to see a world where you can have confidence that most of your data is protected from collection entirely. And if you do authorize data to be used by a company for some reason, it shouldn't then be combined with other information, or sold to other people, or used for any other purposes. That's what real power over data looks like. My plan takes the burden off consumers and puts the burden where it belongs: on big tech. It would drastically scale back the permitted uses of your personal data by companies, banning them from collecting any data that isn't strictly necessary to provide you with the service you ask for. For example, signing up for a credit card online won't give the bank the right to use your data for anything else, not marketing.
It's certainly not to use that data to sign you up for five accounts you didn't ask for, as Wells Fargo so infamously did, and continues to do things like that. It's not only the specific companies you signed your data away to that profit off it; they obviously then sell it to other companies you've never heard of, without your knowledge. That's why the plan creates one central agency to monitor companies that collect data, and gives Americans powerful legal tools to hold those companies accountable if they abuse it. It also bans facial recognition outright. It's an immature and dangerous technology that we know often has racial discrimination baked into it. It has huge potential for abuse. We know it's already resulted in the wrongful arrest of a man in Michigan. If we don't stop this now, we know that's gonna happen time after time after time. We can bet that, like always, it will be black and brown Americans who are most often going to be the victims. The right to privacy, the right to peacefully protest, isn't something we should lose simply by clicking "I agree." Under my plan, you could rest easier knowing that regardless of the 4,000-word privacy policy shoved in your face, it would be illegal for your data to be bought and sold or sent to law enforcement. As we talk about the plan and work to get more people on board with rethinking this privacy framework, you might hear arguments from tech companies that this proposal would stifle innovation and interrupt their business model. That's what they always say. My answer to that is: yes, if your business model's built on exploiting people, we might interrupt it. We wanna stop it. Silicon Valley should respond by doing what it has always done best, and that's innovating. The reason big tech hasn't come up with a business model that doesn't rely on spying on people isn't because they can't do it; it's because they simply haven't tried. Americans have been getting free media in exchange for listening to or reading advertisements for decades. It never required an invasion of privacy. I'm confident if we raise the bar, smart engineers and others will figure it out; they'll meet the challenge. They have before, and let me close with one story about how this works, not in the field of privacy but somewhere else. I was in high school when the national news said the Cuyahoga River caught on fire in Cleveland. It wasn't really the Cuyahoga River; it was the bridge over the Cuyahoga River. As a train crossed and sparks flew, the bridge caught on fire, and it looked like it was actually the Cuyahoga River. It was 1969, and at the time there were almost no national environmental standards. Cities were covered in smog; waterways were so polluted they burned. That fire made national headlines, and Americans began to wake up. In 1970, Congress responded by passing the Clean Air Act. Shortly after that, the EPA set a bold goal: drastically reducing vehicle emissions. The auto manufacturers, and almost nobody I see on this screen is old enough to remember this, the auto companies of course, as you know, went berserk. It was impossible, they said; they couldn't meet these standards. Automobile assembly lines would have to be shut down. The few cars that could be manufactured to meet the tough new standards would be too expensive for most American families, and they wouldn't want to buy them, on and on and on. That's what you always hear.
But a few years later, the industry invented and mass-produced the catalytic converter that we all have in our cars today, reducing smog and other air pollution, like lead, across the country. American industry, the auto industry and its supply chain, was innovative enough in the 70s to tackle big social problems. They're just as innovative today. We just need to challenge them to do it. That's why our work, that's why your work, is so important. If we come together and change the law, you can bet that big tech will still figure out how to make money, but they'll innovate and make money in a way that will protect our privacy. Keep advocating, keep challenging the status quo, keep talking about these big issues. This isn't going to get done on its own. To take on big tech, we're going to have to all work together. And I'll close with a story, in light of his death just last week, about John Lewis. I led, with John and another senator, I mean, John led, I was kind of one of the titular leaders of the 50th anniversary walk across the Selma bridge, the Edmund Pettus Bridge, in 2015. And John rode in the airplane; there were 100 members of Congress on the plane. And John came over to where Connie and I were sitting, and he was telling me, this was March of 2015, that the spring before, 10 or 11 months earlier, he had been the commencement speaker, believe it or not, at the University of Mississippi, Ole Miss, a place John Lewis, when he was a kid, never could have even enrolled in, let alone been the commencement speaker at. And he told a story about his growing up. He was born in 1940, and in the mid-50s he was growing up on a chicken farm outside a tiny little town, Troy, Alabama. He went into town one day, into the bus station, and he saw the signs that said colored-only drinking fountain, whites-only drinking fountain, colored-only waiting room, whites-only waiting room. And he said to his parents, mom, dad, what does that mean? John was a young teenager. Colored only, white only. His parents said, John, don't ask questions, don't make trouble. And then he turned to his grandparents: grandma, grandpa, what does that mean, colored only, white only? His grandparents said, John, don't ask questions, don't make trouble. John goes on at Ole Miss to tell the story. He says, when I was 17, I met Rosa Parks. When I was 18, I met Martin Luther King. I told them that story, and they said, no, John, respect your parents and grandparents, but they're wrong. They said, ask questions, make trouble, make good, necessary trouble. So I exhort all of you, in the spirit of John Lewis and the spirit of justice, in getting this right and preserving people's privacy, to make trouble, to make good, necessary trouble. So Ms. Collins, thank you, and I appreciate the chance to talk to people who know this issue better than anybody in the country, the people who are gonna help us change the world. Thanks so much. Thank you so much for being with us. I really appreciate it. All right, good luck with your panel. I enjoyed it; enjoy the rest of the day, everybody, and be safe. All right, thank you so much. All right, see you. Thanks.
So with that keynote, Senator Brown has given us quite a lot to think about, but before we dive into some of these nitty-gritty policy issues, I want to first tackle a really basic question: what is consent? How do we define it? So I'm going to turn to you, Joseph. And before we get started, actually, I should introduce my panelists first. So if you all don't mind waving or saying hello as I introduce you, I would really appreciate it. Who we have on our panel today is Yosef Getachew, Director of the Media and Democracy Program at Common Cause. We also have Nathalie Maréchal, Senior Policy Analyst at Ranking Digital Rights. Joining us is Stephanie Nguyen, Researcher at Consumer Reports. And finally, Joseph Turow, Professor at the University of Pennsylvania. So with introductions out of the way: Joseph, what is consent? How do we define it? What does it mean when we talk about consent? Oh, you're muted. Hold on one second; let me see if we can get this fixed. Okay, thank you. I'm still reeling from Randy Newman, whose song long told me the Cuyahoga River was actually burning, so now I've changed my whole view on life here. Consent is a really interesting issue. I think the common notion of consent, and the one that probably attorneys would argue holds for privacy policies and other kinds of end-user consent, is that a person will agree to the stipulations that the company makes, with the idea that that person understands what's going on, okay? Now, we could get into a lot of interesting discussions about what that really means. There have been a number of interesting law review articles recently about whether in some cases it's even possible to do consent. Recently there was a piece by Woodrow Hartzog and Evan Selinger about whether facial recognition even warrants the idea of consent. And I've done some looking at issues of medical consent and trying to relate that to work I've done on voice recognition and voice profiling. So consent sounds like an easy term, but it's a really complicated term. And if people are interested later, I can talk a bit about how all this relates to resignation, because I would argue that in the end, consent and the requirement for consent lead Americans to be resigned about the whole process. So we will get to that, but I wanna first hop to Stephanie to talk about how consent is put into practice in our daily life. What does consent look like when we're talking about different apps or different IoT devices? How do we interact with these consent mechanisms? What does it look like right now? And to help explain this, or make it a little bit easier, I'm going to drop in the chat for everyone a link to some of the visuals that she's talking about. So if you're a visual learner, you should be able to see some of this and get a better sense of what she's speaking about. Hi everyone. I think this is a great question to kick off this conversation. And if we go to slide one, actually, what I'm trying to lay out here is that typically we're used to seeing and thinking about a pop-up window filled with text about how the company may use your data. But there are actually many different types of consent patterns that I'd like to break down. So first I'd like to talk about static presentation. One way to look at consent is: is it a pop-up window, or are we looking at different iconography that sort of maps what we're talking about onto the concepts? Is there certain imagery that evokes a sense of what we're trying to learn as we gain consent?
The second piece is language. A lot of scholars and researchers have talked about and studied consent in terms of: is there negation? Is there passive voice? What is the reading level? And then the third concept is the actual content. So in the image here, you're looking at things like: are there actual rights to opt out or delete, or are there data portability kinds of options here? And what we're seeing as we lay out consent here is that with GDPR, there are actually a lot of shifts in what consent is morphing into. You've got dynamic presentation: on Facebook, you've got ongoing reminders to check your settings. You've got necessity: with GDPR, if you go to the Guardian, you'll see that there's a minimum cookie collection that they will ask you to acknowledge, all the way up to communications and website use and advertising. And then the third thing we're seeing that's shifting is products that are testing no-ad business models, as Senator Brown mentioned when he was opening this conversation. So you have a lot of ad blockers, or new search browsers, or even email that doesn't include advertising as part of the business model. But regardless of the form factor, what you're seeing in a lot of studies, and I can't speak for all of them, but in a lot of comparative studies, is that participants were not able to reliably understand company privacy practices. Even with a walkthrough, a lot of people scroll through really quickly and move through the information. Even in the example that you're seeing here with Pinterest, where what they've done is simplify a lot of concepts into plain language, even then people don't go out of their way to look at the document to read it. So this is all to show that there are a lot of ways that companies are being innovative with designing consent. With that said, it's shifting; people are trying to be more dynamic about it, but there hasn't been strong research showing that consent is actually obtainable. So I think this conversation is an ongoing one. Thank you, Stephanie. So now that we have a definition, and now that we're starting to see the picture of what it looks like: is consent obtainable? I know Stephanie and Joseph have alluded to this, but I want to start with Nathalie and then move on to Yosef. Do you think consent is obtainable? Do you think there's really an ability to say yes or no in the current climate that we have, in the current internet and data ecosystem that people are now living in? So let's start with you, Nathalie. Thanks, Sarah, and thanks for having me. In the current system, no, it's not possible, in the overwhelming majority of cases, especially when we're talking about the tech giants that are currently being grilled across town from me; I'm sitting in my home in DC right now. You don't have a choice, or rather, the choice that you have is to not participate in the internet, not participate in society, especially now that many of us are quarantining and maintaining social distance as much as we're able to for public health reasons. Opting out is not an option, right?
Kashmir Hill, who's now with the New York Times, did a great series of articles for Gizmodo a few years ago where, first one at a time and then all at once, she tried to go an entire week without using any services that were provided or facilitated by any of the big five tech companies. What she found was that some of them were easier to break up with, so to speak, than others, but certainly when it came to Google, Facebook, and Amazon, and let's remember Amazon Web Services hosts a large chunk of the internet, it was completely impossible for her to do her job as a reporter or even to do much of her daily life. And I think that's the unrealistic choice that we're being presented with, right? Either you accept that you're going to be surveilled and have all of this data about your body, your life, what you do, even what you think, data-mined for the profit of others, or you can just not participate in society at all. And that's really not a choice, as I think the senator really made clear. Yosef? Thanks, Sarah. Yeah, I would echo a lot of what Nathalie said. Just to add to the definition of consent: when you're looking at this from a broader democracy perspective, a consumer empowerment perspective, I think the idea of consent is that you, the user, should have knowledge of what data is being collected and a level of control over how that data is being processed and how it's being shared, what the harms or implications of giving away your data are, whether consent can be withdrawn easily. Things like that are part of the broader concept of consent that gives individuals the ultimate power and control. Are we there yet? No, nowhere near there yet. And I think part of that problem is that platforms, or companies in general, are designing their products in a way where, while they might be putting consent within the products, they're kind of pushing users to give up more personal information. They're designing their products in a way where sharing is more of the norm than not sharing. Disclosing your data or your personal information is encouraged in some ways, or it's the default. And even if I am a savvy internet user, a savvy Facebook or Twitter user, am I gonna go into the settings and change the defaults, or go into ad settings or privacy settings to figure out who I want to share certain things with and who I don't wanna share certain things with? I'm usually not gonna do that. I don't do that, and I'm working on these issues, like everyone else here. So I think the idea of consent from a consumer or an individual perspective is one that is not achievable at the moment. I agree with what Senator Brown had said: we have to put more of the burden on the platforms or the companies to not let out as much information, or to put more restrictions in place. And I know we're gonna talk about that later. At the moment, it's not achievable in the current period. Joseph, you have your hand up, and I'm sure you have some thoughts about this. Well, I just wanted to extend what Yosef said, and also something that Stephanie was pointing out. I think that it's beyond the complexity of the language. I would even argue that in many cases it's a corporate strategy to confuse people. And if you look at what Stephanie was showing us (Stephanie, could you go back to the Pinterest chart that you had?), there was a fascinating dynamic there that I think bears emphasizing. What Pinterest is doing is saying, okay, here's the hard stuff, and they put that in regular black font. And then they say, here's the stuff that you really ought to listen to.
And it says "more simply put," all right? This "more simply put" is not a faithful reflection of the paragraph that's above it. It basically elides what the paragraph above it is saying, okay? The paragraph above talks about identifying you. It talks about how they're going to follow you and promote content to you. But the paragraph in blue that's supposed to stick out at you says, you know, we help you discover and do what you love. It's customized to you, okay? That change in language is designed to basically obscure and move people in a different direction. So I think we have to sometimes just call it out for what it is: a corporate strategy to sometimes obfuscate. Thank you. I'll also just add on to that. In general, to extend what Joseph is saying: in order to consent to something, we actually need to know what is acceptable or permissible to the people who are actually using that product or service. And so when we think about consent: in order to consent, what do people expect will happen, and is that actually true? There's a lot of ongoing work to figure out this highly contextual and very diverse sense of, is third-party advertising acceptable in this product but not that product? Where are the delineations here? And so that is a really complex field that I think people are really trying to make moves on and understand more from the human perspective. Thank you. And I just want to remind our audience that we have a Q&A button. So if you have questions, please put them in and I will integrate them. Believe me, I have tons of my own, so don't worry. But I wanted to make sure everyone here knew that they could contribute to the conversation as well. So this is a really good starting point, but I want to dive into a couple more of the nitty-gritty questions. The first thing is kind of a practical one. We've talked about it a little bit, or alluded to it: voice recognition systems, or IoT devices, devices that aren't necessarily apps or websites where you can easily click a button and a consent window comes up in a sort of natural progression. So Stephanie, can you talk about some of the practical concerns for devices that aren't necessarily web-based? Sure, and I'll just paint a picture here, especially when we're talking about the internet of things. We're talking about widespread network access, miniaturization, cheap sensors, and inexpensive prototyping, and a lot of different types of devices and products that have come from that. Think everything from Hello Barbie to the Eufy robotic vacuum to the Nest thermostats in your homes. And the challenge, as you can imagine, is that all of these different hardware devices don't necessarily have a screen that can hold a lot of text like we see on our desktop and laptop computers. The challenge here is going from online to what is typically offline. So the expectations and norms of humans need to shift when you're using something, whether it's a kid's toy or a home device, around how that thing will actually collect and manage your data. The second is that a lot of these things are now also with you, with your bodies, and also in your homes, in places that are typically seen as intimate and very private spaces. So there are more types of data that can be collected with these things, especially if they're attached to your body; think a Fitbit.
And the third thing I'll mention here, in terms of choice and meaningful consent with the internet of things: it's very similar, I would say, to the experience on a browser or a mobile app, in terms of set-it-and-forget-it. There are a lot of lengthy policies upfront with a binary option. But typically these devices require pairing with a mobile device, or some account with some sort of onboarding that includes privacy consent. Rarely is privacy information actually in the physical packaging; you're not opening up a very long, CVS-style receipt explaining how this thing is going to work. And the issues are even thornier for children, especially for populations that can't consent for themselves and require a parent's help to do so. So I'm just painting a picture of the different types of challenges that we're seeing with new products and services that are hitting the market. Thank you for that. I really appreciate it. So there's obviously a practical concern as more and more of our devices become enabled as data collectors, if you will. But there are also some other, more policy-focused questions. I'm going to go to Yosef first, but I think, again, this is another question that our entire panel may have opinions about. You alluded in your first statement to this sort of collection of rights: that we need to have some sort of control over the data, that we need to be able to rescind consent. Can you talk more about this collection of rights and how that relates to consent and choice? Yeah, I think this is something that Senator Brown had alluded to: the idea of limiting what is being collected on us, or putting the burden back on the companies to not collect everything. I mean, there's this principle of data minimization that is getting a lot of traction, essentially saying that companies shouldn't collect data that they don't need to provide the service. The obvious example is always: if I have a flashlight app that I downloaded, I don't want the flashlight app to collect my location information. I think that gets a lot trickier when you're talking about a platform such as Facebook or Twitter or Google, which collects a plethora of information, and then you start to get confused about, okay, do I need to provide them every single detail about me to get the service? They're going to make certain claims, saying we need location for XYZ, or we're going to need financial information to process this transaction, things like that. And so the question becomes, for larger platforms, how do you limit data? I think there are ways to reduce the amount of data that companies collect through certain standards. Obviously, sensitive information should get treated with heightened standards. So for financial data, health data, children's data, there should be a more context-specific way of asking for consent, or it shouldn't be collected unless absolutely necessary. I think there are clear civil rights protections that come into play that go beyond consent or data minimization. Companies shouldn't be using data in ways that discriminate against people of color and other marginalized communities. And this is part of what I was talking about earlier, how companies are pushing towards collecting more data and encouraging shared data. The way they're designed, they're trying to sell that data to advertisers or get ads on their platforms, and a lot of times this turns into manipulative or discriminatory uses of the products. Ad targeting is a huge issue.
That's how their business models operate. And we see it in a lot of the ads: at Common Cause, we track ads with election disinformation and voter suppression. Oftentimes they're targeted towards people of color and other marginalized communities, because these companies have collected a lot of data on groups, not necessarily race data, but other data that implicitly uncovers who these individuals are. And oftentimes these ads are harmful in a way that discourages groups from voting, and they have huge ramifications. So to sum this up: data minimization plays a key role here, but there are also other values in terms of how we look at civil rights protections and how we prohibit certain uses of data, regardless of how it's collected and what we do with it. Nathalie, do you wanna add to this? Yeah, absolutely. I think everything Yosef just said is precisely on point here. One thing I wanna really emphasize is that the purpose of all this data collection, at every level, is discrimination in some way, right? Sometimes it's discrimination that makes sense and that does not raise any particular civil or human rights issues. So for example, knowing that I'm physically located in the United States and that my computer and browser language are set to English means that when I go to the webpage of an international news organization, I'm going to get things in English and not in some language I don't speak. So that's fine, right? But then, as Yosef was saying, when it comes to ads, when it comes to the optimization of unpaid content, the content posted by other users, when it comes to the tailoring of insurance products, when it comes to what opportunities you get for housing, for jobs, for education, for financial services, that's where you get into forms of digital redlining that are particularly pernicious and severe for communities of color and other marginalized groups, but that in the end reduce the options and the opportunities available to all of us in a lot of different ways. And this impacts not only our privacy and the right to non-discrimination and to being treated fairly; it also ends up impacting our right to health, when you get different insurance rates or different access to medical care, and access to all kinds of really key goods, like education, financial services, housing, jobs, et cetera, that are key to living a life. But also freedom of thought, right? When your social media feeds are targeted in such a way that you're exposed to only certain viewpoints and in certain ways, that ends up influencing how you perceive the world and how you think about the world, and your access to information as well. So one thing that I'd love everyone in the audience to come away with is: when you hear the phrase personalization or customization, think discrimination. And again, not every case of that is going to be something that's harmful to you or harmful to society, but in many cases it is. And that's why I agree with Yosef and with Senator Brown, and I suspect with Joe and Stephanie as well, though they haven't said it explicitly today where I heard it with my own ears: we need to go beyond this consent model that's really a coercion model and look clearly at what it is that needs to be prohibited in order to protect people. And I loved the analogy that the Senator made to the auto industry there. And then figure out a way to enforce it, right? Because you can't just make laws and expect people to follow them.
If we've learned a lot of things over the past three and a half years, that's definitely one of them. And we need to put mechanisms and structures in place to encourage and help the tech industry reinvent itself in a way that continues to support innovation and the innovative products that many of us enjoy using, while respecting privacy and all of these other important rights that I mentioned. Yes, Yosef, I'll let you bookend this, since you seem to have something else you want to add. Just a quick point. In Senator Brown's bill, there's actually language that prohibits companies from using data to discriminate against people in terms of housing, education, employment. The interesting thing here is that this is already against the law under civil rights laws. But what we're seeing is that this is an ongoing problem on platforms, even though it violates the law. And this goes back to what Nathalie was saying: when you collect data, when you personalize with the data, there's inherent discrimination in there that we need to figure out a way to remove and limit. And obviously, even where it's against the law, it's still happening, and it remains a problem. Joseph, I see we've latched onto something real here about discrimination, consent, and data. Push that a bit farther. I don't even think there's anything called nonsensitive data. In other words, I think that's part of the issue here and why consent is so complicated. Most Americans who have access to a frequent shopper card for the supermarket use it, something like 90%. And it's collecting everything you purchase, including over-the-counter drugs, which is totally legal. By taking something that seems very benign about a person's life and running it through machine learning and other kinds of artificial intelligence, you can use what seem to be benign pieces of information to discover things about people that can be used in a prejudicial, discriminatory way. So they note, for example, whether the person eats red meat or not, in concert with other kinds of foods, and whether that's gonna lead to certain kinds of health problems, statistically speaking. And then what does that mean? So the notion that there are benign data and sensitive data sounds great, but as we move further into an AI world, the distinction doesn't exist. So we have a question from our audience that I wanna start with Stephanie on, though again, I think a lot of people will have different ways of answering it. We're talking about how data can be collected both online and offline, and this is especially true with IoT devices. Think of something like Amazon's Ring, which is a video device that uploads recordings you can share. Traditionally in the US, at least, there haven't been requirements to get permission for photography or video recording in what's considered public spaces, or in spaces that you yourself own; in your house, you can put up your own surveillance cameras even if other people are coming through. With the rise of machine learning, and what we can use that data for and what we can do with it, how does this sort of norm around photography and video recording have to change, or where are these tensions living? And Stephanie, I wanna start with you because you've done research on norms and what consumers expect, but I feel like the rest of the panel may also have some interesting insight.
I would say the example that best illustrates this type of topic is Amazon Ring, and home security camera type devices in general. Internally, in the home, we've seen people hack these devices to speak to children, to tell them to go ahead and break the TV. And that's one lens of this. But then you think about the cameras that are pointed outward, and how this data can be aggregated, especially in neighborhood-type environments where you're on a social network like Nextdoor with your neighbors and you start to crowdsource information about your entire street. And so it's a real slippery slope when you think about these connected devices and the aggregate information that, when compiled together, can give a surveillance lens of who we are following, who we are looking at, who we are tracking, and then who eventually is harmed, which often is black and brown communities. And this is not new by any means; this is a topic that has been discussed for years and years, with the arrival of cameras outside in parking lots and storefronts. So in general, I do think we need to be thinking about how norms and how laws change when we're using data for this sort of collection on our family, friends, and neighbors outside of our home. Does anyone else wanna add on to that? I feel like, Yosef, you were nodding along in pretty vigorous agreement with her assessment. Yeah, I agree with her. I don't have anything else to add there. So we do have to change this whole notion in society that the outdoors is fair game. It's an interesting thing that has been with us for so long. I think there's much to be said for what Stephanie said about rethinking: what does it mean to be outdoors? Do individuals have the right to their own privacy as they walk through the street? That's right. At the same time, this question also raises the issue of journalism and press freedom, right? Because we're also seeing a lot of instances of people using handheld cameras or their cell phones to record instances of police violence and other kinds of abuse of power by officials, and I think that's a positive for democracy, for civil and human rights. And so as we rethink what these norms are around privacy in public spaces, I think it's gonna be really tricky to come up with any kind of legal solution that accounts both for private individuals' right to privacy and for citizen journalists' and, you know, quote-unquote professional journalists' right and need to document abuses of power and other events of public interest as they occur. So it's really a very thorny question. So I wanna go back a little bit to something that Senator Brown said, and that we've also been alluding to: this integration of consent into exploitative business models. I don't think anyone's laid this out step by step, but I know Ranking Digital Rights has talked about this a lot. So Nathalie, do you wanna talk about how consent is used to leverage exploitative data practices, or how there's an interrelationship between the two? Absolutely. So I'm gonna talk specifically about social media platforms, things like YouTube, Facebook, Instagram, Twitter, and a number of others, but keep in mind that the same basic principles apply across many sectors of our economy and society today. But I'll focus on social media to keep things concrete.
So these are all businesses that have two different sets of people that they interact with. They have their users, which is us, right? And they have their clients, which is mostly advertisers, but in some cases also law enforcement and other government agencies, who buy access to data streams and use them for all kinds of purposes that may or may not be compatible with, certainly, my understanding of legality and the public interest. And then they also use that data to create these really powerful AI models that they can then apply in a bunch of other data streams. So I really like the way that Shoshana Zuboff puts it: it's not that we're the product, it's that we're the abandoned carcass from which all the value has been extracted. And that's a pretty gruesome way to think about it. The value, from their perspective, comes from having us engage with their platforms and read stuff, comment, write posts, upload things, engage in all kinds of behaviors that they're able to track, both while we're using their platforms but also, increasingly, on all parts of the internet, because they have these things called beacons, or tracking pixels, that follow you around the internet as you read different articles or do different things. And increasingly they're also amassing information and data about your offline behavior, by buying up your credit card records, by accessing DMV records. There are databases available for purchase commercially of people's travel around the US, using cameras recognizing your license plate, for example, and correlating that with DMV data to see who you are. Expand that to voting, expand that to healthcare, expand that to everything. It can create a really, really invasive profile of who you are, what you do, and what you might do in the future. And they use that to extract money from, or charge, really (I don't mean to make this part of the transaction sound that horrible), their advertisers, for access to your eyeballs and access to your attention. And the value proposition is that, because they have so much data about you, they will be able to pinpoint the exact right people for any given advertiser to reach, with the exact right message, at the exact right moment when you're most likely to act on it. So really what we're talking about here is an entire system of surveillance, yes, but surveillance in service of behavior modification, because that's the entire point of advertising, right? The point of advertising is to get people to do something: to modify their purchasing behavior, their voting behavior, their behavior in a lot of different realms. And that's always been true of advertising, but until the past 20 years, it was not powered by this amount of surveillance and personal information about people, much less the ability to target them exactly at the moment when they're most likely to make a decision related to the ad that's in front of them. And so that is inherently a coercive business model. So thank you, thank you for walking us through that. I want to dive into that a little bit further, and again, I'm gonna start with Joseph, but others can chime in. What does consent do for that business model? Does it give consumers the feeling that they have some control? Is consent doing any work right now? We have found in the national surveys that we've done that people know they're being tracked. They don't quite understand what data mining is about.
They kind of have some basic idea, but most of the questions we've asked people around these topics, they get wrong. And consistently, I think we've done eight national surveys where we asked the question, true or false, in one way or another: when a website has a privacy policy, it means the site will not share your information with other companies without your permission. Most Americans think that's true; it's really false. And when I say most, sometimes it's been 70%; it's usually 58, 60-some-odd percent. So there's a real lack of understanding there, perhaps again to some extent encouraged by the language that I pointed out in Pinterest and other places that does that kind of work. I also wanna say very briefly that just because advertisers think they should do this doesn't mean that they're really doing anything accurately. I mean, there are many people even within the advertising industry who think that the entire business model is broken. The amount of click fraud that goes on within advertising online, some people have said, is near 23% of the actual amount of advertising that gets spun out globally. People lie all the time, and companies try to make sense of all of this. There's a lot of really weird stuff happening, and within that, companies try to navigate, sometimes threatening the various publishers that they're not gonna put money into them anymore if they can't get their act straight. Nevertheless, people are being defined and discriminated against, oftentimes based on data that are probably cockeyed. So we shouldn't assume that this is all accurate; even if it were accurate, it would be problematical. But I'm saying that it's not necessarily any more accurate than the old-fashioned television business. That's exactly right. And to bring those two things together: if the model worked as advertised to advertisers themselves, it would be exploitative and harmful. On top of that, it's not even delivering the value that is being promised to advertisers. What it's doing instead is allowing an entire industry of middlemen, or middle actors, to extract rent by acting as intermediaries between advertisers and publishers, while at the same time imposing all kinds of harms on the general population. And that's something that's just broken and that needs to be fixed. Thank you for that. So as a privacy professional, I notice that about every six months or so there's a new privacy focus du jour. Right now, it's really around COVID and what... Sarah, can I make a quick point? Oh yeah, of course. I'm sorry I missed that. No worries. Yeah, I just wanted to pick up where Nathalie left off with these middlemen. I think there's just a serious lack of transparency in the marketplace for digital advertising, because there are middlemen, there are ad servers and software and automated systems all playing together, where it's very hard to even understand how your data is being sorted through that entire process. I know, for example, Google or Facebook will put out a ping or a request on their ad servers saying, hey, we have space for an ad, along with this amount of data, and there are middlemen that work through that process. But it's unclear how that process actually unfolds on a real-time basis, and why you got that specific piece of advertising. I know that some of these platforms have tools or options that say, here's why you got this ad. I don't think that's enough.
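For context on the real-time bidding "ping" Yosef describes, here is a minimal, hypothetical sketch of what such a bid request can carry. The field names loosely follow the ad industry's public OpenRTB convention, but the values and the exact shape are illustrative assumptions, not any specific platform's actual API.

```python
# Illustrative only: a simplified, OpenRTB-style bid request, showing how much
# user data can ride along with a single "we have ad space" ping to bidders.
# Field names loosely follow the public OpenRTB convention; values are invented.
bid_request = {
    "id": "auction-8f3a",                          # one auction, resolved in milliseconds
    "imp": [{"banner": {"w": 300, "h": 250}}],     # the ad slot being sold
    "site": {"page": "https://news.example.com/article"},
    "device": {
        "ip": "203.0.113.7",                       # coarse location via IP
        "geo": {"lat": 38.9, "lon": -77.0},        # or precise location, if available
        "ua": "Mozilla/5.0 ...",                   # user agent, a fingerprinting input
    },
    "user": {
        "id": "cookie-or-device-id-123",           # the identifier that links auctions together
        "data": [{"segment": ["new-parent", "auto-intender"]}],  # inferred traits
    },
}
# Every middleman and bidder that receives this request sees the data whether
# or not it wins the auction, which is one reason "consent" given to a single
# site fans out to parties the user has never heard of.
```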
And from a consent perspective, I don't think anyone's consenting to middlemen or ad servers or software sending their data to three or four other groups. Thank you for that, Yosef. Sorry for missing your hand being raised. So I wanna go to the new lens through which we're looking at privacy right now, which is very much a COVID lens. As I like to talk about it, that is something I've been working on pretty extensively here at Public Knowledge. And the big concern is around COVID tracking apps. So Stephanie, I'd like to start with you. How have COVID apps learned from all of this privacy Sturm und Drang that has been going on? Are they more privacy-protective, less privacy-protective? How are these sort of pseudo-health apps (they're not quite collecting health data, although sometimes they are) interacting with privacy rules and privacy legislation, and how are they collecting our data? What's going on there, basically? So again, just to give more context on this: in general, you've got a global pandemic with rising numbers of deaths around the world, and a lot in tech emerges as, here's a potential way that we can try to mitigate this concern. And I think there's a fear of the there's-an-app-for-that type of mentality: is this thing actually going to make a difference? And for those who are studying this and looking at it, the fact that there's a collection of your personal health data, your location data, and whether or not you have a virus that is really stigmatizing not only to your family but to the people around you, all boiled together, makes a very stressful situation, in which contact tracing apps have risen into the media spotlight with the question: hey, is this going to be really privacy-protecting? Especially when these apps are being created by local governments and by countries around the world in a very short period of time, I'm talking like three months, in processes that typically take years for these organizations to roll out. You've seen a lot of different applications popping up around the world. So in general, digital contact tracing apps have been forced to rethink the classic tiny terms-of-service pop-up window. And the two things that we're showing on the screen here are two interesting approaches we've seen through our research and analysis. One is onboarding: actually walking people through a very technical process of what it looks like to use Bluetooth contact tracing, which is essentially invisible to the human eye. A lot of countries, here we're showing Italy and Switzerland in particular, have really thought about whether there are ways to walk people through these applications to explain a concept before asking for your contact information or your Bluetooth permissions. The second thing we're looking at, which is pretty starkly different from what we typically see in applications and privacy settings, is more visible privacy settings. So here are four different examples, with North Dakota, Malaysia, France, and Alberta, Canada. Apps usually tuck settings away in the top right corner, where you have to click several times to actually get to them. Here, some digital contact tracing apps have bumped this up to the home page tab, to actually show you and allow users to be aware of: is location tracking on? Are permissions enabled? Yes, no. Is Bluetooth on? Yes, no.
And so we're seeing, I can't say that this is more privacy-protecting, but it is certainly more transparent to the user, right in your face, about whether or not this app is on and whether it's collecting information. The other thing we're seeing is that a lot of countries around the world have made an effort to showcase their governance of the app. How did they create it? Who did they create it with? Is there some sort of advisory committee of people and citizens that they've convened around it? In general, I think what this is teaching us is not that these apps are necessarily more or less privacy-protecting, but if apps can be more transparent, as these have shown, then we should also use this opportunity to improve privacy protections for the other apps we have with very similar features today. We have apps that show proximity-based alerts and track movement and personal information. This stuff is not new. And so COVID is really pushing our understanding of how we can improve app visibility and accountability so that it isn't just covert and invasive in our everyday lives. Yosef, did you wanna add on to that? Sure, yeah. I think this concept of contact tracing apps, which rely on people giving up personal information to be effective, is complicated. There are obvious privacy concerns when it comes to using Bluetooth technology and location technology to figure out who has the virus and track its spread. From what I've seen, there seem to be two models of how this data is being collected. One is the centralized model: when one phone's Bluetooth pings another Bluetooth connection, it sends that data to a centralized database, which collects all the various data points to figure out who has the virus. There are concerns there about, okay, what does it mean when all of this data, who has the virus and where they are, is centralized in one location with one set of actors. Versus the other model, where it's more decentralized and the data stays in one place, maintained on your phone. I think in both situations, you're tasked with figuring out how to move beyond the consent model. So here are situations where you have to employ non-discrimination principles: making sure that this data isn't used to disproportionately impact people of color, which, as we know, surveillance usually does; making sure it's used specifically for public health reasons and not for alternative uses, specifically with location data that could be used for all types of marketing purposes; making sure it's transparent that this data isn't in other hands, or in marketing hands, for various purposes. The other thing I'll mention is that, from what I've seen, to actually be effective there needs to be a significant number of people who actually use these apps, well beyond 20, 30, 40 percent; I think it has to be over 50. And when you balance that with actual voluntariness principles, making sure that people choose to use the app and are not forced to, I think there's a concern about how effective these apps can be. Thank you for that. I really appreciate this. So we're certainly getting this question in the Q&A and chat, and I figure we should move to it now. We've been picking apart consent for about 45 minutes. So what's better? What do we need to move forward? What do we need to choose to do to get away from this model? What would the correct regulatory or legislative framework look like? I'm gonna start; this is a question for all of you.
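A quick worked example of the adoption arithmetic behind Yosef's threshold point: a Bluetooth contact is only recorded if both people in an encounter run the app, so under the simplifying assumption that adoption is uniform and independent, the share of encounters detected scales with the square of the adoption rate.

```python
# Why contact-tracing apps need high adoption: an encounter is only detected
# if BOTH parties run the app. Assuming uniform, independent adoption, the
# fraction of encounters detected is the adoption rate squared.
for adoption in (0.2, 0.4, 0.6, 0.8):
    print(f"{adoption:.0%} adoption -> ~{adoption ** 2:.0%} of encounters detected")

# Output:
# 20% adoption -> ~4% of encounters detected
# 40% adoption -> ~16% of encounters detected
# 60% adoption -> ~36% of encounters detected
# 80% adoption -> ~64% of encounters detected
```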
I'll start with Joseph, but please raise your hands as you're ready to answer and we'll go through it, and sort of talk about where else we could go, what else we can do. That's a very complicated question. And the answer has to do with what you think, philosophically, companies deserve to know about us. I was at a Facebook meeting one time, and I think I startled them by saying that they could probably make as much money if they used just contextual advertising. Just money based on the initial notion that the Google people had: here is an ad based upon what you're reading, for example. Maybe there should be a little bit of room for looking at a person's background within the very specific context in which that person wants to give the data of what they're doing, and then it has to be erased, okay? I think the notion of being able to aggregate and collect data across platforms is highly problematical. And by the way, there's a fashion now to make a distinction between first-party and third-party data, as if somehow third-party data are terrible and first-party data are fine. And I'd like to problematize that. I'm not so sure the first-party data is so great either, in terms of what companies can do with it and how they're working through it. So... For our audience who may not know, can you just quickly define what you mean by first party versus... Yeah, first-party data are data that, say, the New York Times is collecting about you every time you go there. It actually even buys data about you, that's sort of a separate story, but it also asks you, perhaps, for data about yourself. So it has an aggregate notion of what you're doing on the site, who you are, and the like. Third-party data are data that companies collect about people from a whole variety of places, offline and online. And then they may use it for a variety of purposes, including selling it to first parties or advertisers, or to people in programmatic advertising situations, platforms which then try to buy people as they're going onto a particular website, attaching certain data to them. So there's a lot of discussion, certainly in the EU and increasingly in the United States, about getting rid of third-party data. Google is getting rid of some of that in its Chrome platform before too long, and some advertisers are freaking out about that. But somehow first-party data are considered sacrosanct. Publishers have the right, somehow we're supposed to believe, to take our data and discriminate in the ways that Nathalie talked about, and we're supposed to think that's okay. I'm not so sure that's so obvious, and I would like to question that whole idea. So the notion of consent is highly problematical, and I think we ought to stop and say: what's wrong with just basic contextual and one-off knowledge of people? And then go from there. Nathalie, you've been nodding vigorously, so I'll let you have the floor next. Yeah, I totally agree with Joe on all that. Drawing the distinction between third-party and first-party data is not enough, as Joe says, because first-party data can still be a whole lot of stuff, and a whole lot of really invasive stuff, especially if you're talking about a Google or a Facebook that has the ability to surveil pretty much the entire internet.
Natalie, you've been nodding vigorously, so I'll let you have the floor next. Yeah, I totally agree with Joe on all that. Drawing the distinction between third-party and first-party data is not enough, as Joe says, because first-party data can still cover a whole lot of stuff, a whole lot of really invasive stuff, especially if you're talking about a Google or a Facebook that has the ability to surveil pretty much the entire internet. But I do think it's useful, once you agree on what data can and cannot be collected, to ground that in a really strict data minimization idea, as Joseph was explaining: you can only collect what you actually need to deliver the service, right? I buy something online; they need my credit card information to charge me, and if it's a physical thing, they need my address to mail it to me. But if I'm downloading a movie, they don't need to know my physical address, right? So really think about what the specific transaction is and what data is needed to deliver that service. And to be clear, I do not think the delivery of targeted advertising should count as a legitimate business purpose in this context. So once you have that, and you have this data minimization principle, and you have a purpose limitation principle, which means you can only use the data for the purpose you collected it for in the first place, you can't then reuse it for whatever you want just because you have it, then I think drawing a further distinction between third-party and first-party data can be helpful. But to me, that's a later stage that comes after you do that other work first. So, Stephanie. I just wanna add on to that point. I'm in complete agreement that products should collect what is reasonably necessary for the product to work and for what the consumer reasonably expects out of that product. Now, I think that one statement is actually incredibly difficult to define. It's one thing to say it in a policy, but it's another to translate how it's actually going to work with cloud-stored email, with online social networking, with my home security devices, and things like that. Defining this is the difficult part, and it's sort of where we are in this discussion. Take, for example, permission to access all of your contacts; we've all seen that with LinkedIn. Should that be allowed? Yes or no? And in what context? Because it's possible that in certain contexts it has nothing to do with the product, and it doesn't help, it only hurts, by accidentally sending an email to hundreds of people in your address book. So I wonder if we can get to the point where we start to categorize different scenarios where certain permissions are more or less acceptable, similar to what the field of medicine has done with what's on the floor at CVS versus what's behind the counter. Have we seen actual evidence for something like a Sudafed-type product? That's something that's behind the counter. Why? Because it can be used for nasal decongestion, or for something else. And I think this is the type of conversation that politicians and lawmakers and lawyers need to be having with actual practitioners, coming together and really hashing out: what is it that the consumer reasonably expects in how a product works?
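One way to picture the data minimization and purpose limitation principles Natalie describes, and the context-by-context permission review Stephanie calls for, is a policy table that gates both collection and reuse. The purposes and fields below are invented for illustration; in a real regime these categories would be set by law and regulators, not by application code.

```python
# Minimal sketch of data minimization + purpose limitation as policy checks.
# The purpose names and field names are hypothetical examples.
ALLOWED_FIELDS = {
    # purpose of the transaction -> data strictly needed to fulfill it
    "charge_payment":   {"card_number", "billing_zip"},
    "ship_physical":    {"name", "street_address"},
    "download_digital": {"account_id"},  # no physical address needed
}

def collect(purpose: str, requested_fields: set[str]) -> set[str]:
    """Data minimization: collect only fields necessary for the stated purpose."""
    allowed = ALLOWED_FIELDS.get(purpose, set())
    excess = requested_fields - allowed
    if excess:
        raise ValueError(f"Collecting {excess} is not necessary for {purpose!r}")
    return requested_fields

def use(purpose_collected_for: str, proposed_use: str) -> None:
    """Purpose limitation: data may only serve the purpose it was collected for."""
    if proposed_use != purpose_collected_for:
        raise PermissionError(
            f"Data collected for {purpose_collected_for!r} "
            f"cannot be reused for {proposed_use!r}"
        )

collect("download_digital", {"account_id"})        # fine
# collect("download_digital", {"street_address"})  # raises: not needed
# use("charge_payment", "marketing")               # raises: reuse barred
```

The hard part, as Stephanie says, is filling in that table for cloud email, social networks, and home devices, which is a policy question, not an engineering one.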
Thank you for that. And Yosef, I'll let you wrap this up. What are we missing? What does this new framework need to have? Yeah, I think everyone made great points. Clearly the first issue is defining what data is collected by all these companies, based on the products and services they provide. Beyond that, I think we can also recognize there are clearly some heightened areas of risk and harm that get to the business model of a lot of these companies. I mean, we've been talking about advertising for quite a while, and we know that targeted advertising poses a significant risk, and that micro-targeting specifically, in a wide variety of areas, especially political ads, has huge ramifications. So in terms of figuring out what the consent mechanism looks like for these areas, I think that deserves heightened attention. From a consumer perspective, I don't think anyone's asking for micro-targeted ads, or at least micro-targeted ads with deceptive information based on data points such as age and sex and location, which get hyper-local in some cases. So how do we address that in this context-specific regime and make sure those risks are mitigated? Now, one thing we haven't mentioned yet, which is important, is enforcement. Obviously any sort of data protection law needs an enforcement component, but there comes a question: given increasingly opaque data collection practices, and the fact that most enforcement agencies are staffed with people like me, which is to say lawyers, how do you judge what a company is doing? How do you get that precise? What do you need to really enforce data protection and privacy rules against fairly sophisticated, or at least opaque, business entities with business models that aren't easy to police? So Natalie, why don't you start? But I'm sure others have opinions as well. Absolutely. So in the current context in the US, privacy laws are weak, and what privacy laws we have are under-enforced, right? The FTC is the primary enforcer of privacy laws, but they do not have enough staff. They do not have enough budget. The staff they have does not have sufficient expertise, particularly when you're talking about the number of technologists on staff. And when they have taken enforcement action, it's essentially been a slap on the wrist, right? The billions of dollars they've fined Facebook and Google, et cetera, and the consent orders they've put them under, haven't even slowed them down a bit, right? I forget what the exact ratio was, but the fines they impose on these companies are something like two weeks' worth of profits. They can eat that as a cost of doing business. So the status quo isn't working. What we have at the moment is a really ad hoc and insufficient set of checks and balances that has been largely carried out by civil society groups based on transparency. I work on a project that focuses on corporate transparency, but corporate transparency is step one, right? There's a whole bunch of steps after that. And so the small bite of the problem that we've bitten off is: let's push companies to disclose what they do with respect to privacy and freedom of expression and a bunch of other things, with the understanding that in the overwhelming majority of cases we, 15 people with backgrounds mostly in journalism and social science, do not have the ability to check whether or not they're telling the truth. But once you have them on the record, it is possible for consumers and their representatives to file lawsuits; for organizations like Consumer Reports to do technical testing to see if the thing works the way the label says it works; and for journalistic organizations like ProPublica, though Consumer Reports has also been doing a lot of investigative work in this area, to test the systems to see whether they indeed work as advertised.
And that's how we've been able to find out, especially when it comes to ad targeting systems, that they're still doing all kinds of crazy stuff they said they weren't doing, like allowing people to buy ads that tell you to drink bleach to prevent COVID, or still using incredibly racist ad targeting categories, right? So there's selective, ad hoc, under-resourced, kind of DIY enforcement through journalism, through lawsuits and so on, but that's not a real solution, right? What it has done is make it really clear to more and more people just how badly we need a real solution. And I think that's gonna require a real regulatory agency with enough budget and enough expertise and enough clout to actually hold companies that break the rules, which is most of them at this point, to account. You're still muted, Sarah. Joseph, what do you have to say about this? You're also muted, Joseph. There. I agree with a lot of what Natalie said. I would only add two other things. One, a lot of states, and I think the federal government, you can correct me, let me soften that: in the EU, they have essentially monitors within companies who are delegated to make sure that the policies companies have are actually carried out. We don't have a required counterpart like that, but we could have something like it. It's a new job for master's students. The other thing that's interesting to think about is the ability of individuals to sue within states. That is very rare today. Today, the attorney general of the state has to make the decision to sue a company, which means, as was mentioned, it's very difficult to find somebody who can go through all these tiny companies, or even the big companies, and find out what things are going wrong. But if you look at Illinois, for example, with BIPA, the Biometric Information Privacy Act, there, individuals are allowed to sue. And so there have been some very interesting class action suits as a consequence, which are scaring the pants off companies. The last thing I would say, and this is an extreme statement, but we have evidence from a survey we did several years ago that Americans agree with it: in really egregious cases, executives should go to jail, okay? There are individuals in these companies who are doing these things; this is not magic from on high. And if we have really egregious examples that are repeated over and over again, individuals should go to jail. I don't see a problem with this, and I don't understand why it hasn't happened. It's always treated as a civil matter in many different ways, but really, people ought to go to jail. Yosef? A lot of good points raised by Joseph. A few things I wanted to throw into the conversation: as Natalie mentioned, our enforcement is only as good as the rules we have, and when it comes to privacy legislation, what we have is essentially non-existent. So that makes enforcement a lot trickier. When it comes to the FTC, a lot of the enforcement they do, the slap-on-the-wrist kind, is based on whether or not companies are lying in their privacy policies. And you can tell the truth and still be incredibly harmful in how you're sharing and using data. I think we have to redefine how we look at that. You know, the FTC has an unfair-and-deceptive standard for enforcement, and we have to look at what unfair means.
When I've talked to FTC staffers, they say, oh, we have to balance the positive aspects of the practice against the negative aspects. I mean, I think a balancing approach to something that's clearly harmful does a lot more bad than good. To Joseph's point, obviously, individual rights are important; a private right of action is one of the ways to get more enforcement happening and more individuals engaged. And when it comes to privacy, piercing the corporate veil and going after high executives, not just with slaps on the wrist but with extreme penalties, is something we should look into. Thank you. So we're starting to come up on time. One question I want to pose to the group, and I'll start with Stephanie for this, is: when do you think consent could be useful? Let's say we do have an overarching framework of use cases we're fine with, the use cases that aren't banned. We have this sort of perfect, or at least very good, privacy law. Do you think there's still a place for consent? Do you think there's a place to recreate or redo it as something that preserves consumer autonomy and adds value, if we had a different sort of framework or policy landscape than we do right now? I guess the first thing I would do is shift the word 'consent' to 'how are we using your data?' The more we can talk about it that way, the less we're anchored to a single moment in time. The history and norms of how we've thought about consent repeat themselves over and over, and we keep redesigning patterns that are, again, one moment in time. If we can shift the way we talk about how companies are using your data, it becomes more of a lifecycle spanning every single interaction: every time people are actually using the product, every time they turn a device on, every time they input information. That is how I think we should be talking about this whole conversation. And in general, I know we've been talking about testing and lifting up the hood and looking at transparency, which I'm wholeheartedly in agreement on. But one thing we could also focus on is: what good is happening? What are good examples that we should actually start following? Because as researchers and academics, we're constantly poking holes and trying to find the worst possible example to bring to light. The more we can also think about what it actually looks like to do this better, the more people can start to move in that direction, or think of new ideas and be playful about it. So that's the other shift that needs to happen, alongside continuing to poke holes, et cetera. So Natalie, why don't you go next? Sure. So I've been trying to think of an example to share to illustrate what I'm about to say, and I can't find a great one. But I think if we can start over from a really strict, privacy-protective, well-enforced regime, where the default for all products is the most privacy-protective, right? Where they're really only collecting the stuff they actually need to deliver the service. Then I think there might be room for a consent-based model for optional bells-and-whistles types of features, right? Things that are not the core functionality of the product. But I think it would be really challenging to enforce, to make sure that these additional features you opt into by sharing certain data really are optional, right? That they're not the core part of the service, and that it doesn't end up being something that's harmful, right? That you're not sweet-talking people into agreeing to something that is actually going to harm them. So I think hypothetically there could be such scenarios. At the moment, I'm having a hard time thinking of a really good example, though.
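A minimal sketch of the arrangement Natalie is hypothesizing: the most privacy-protective settings by default, with narrowly scoped, genuinely optional features gated behind explicit opt-in. The feature names below are invented for illustration.

```python
# Sketch of privacy-protective defaults with per-feature opt-in.
# The optional features here are hypothetical examples.
from dataclasses import dataclass

@dataclass
class PrivacySettings:
    # The core service works with everything below switched off.
    share_location_for_weather: bool = False   # optional bell-and-whistle
    sync_contacts_for_invites: bool = False    # optional bell-and-whistle

    def opt_in(self, feature: str) -> None:
        """Consent is an affirmative act, scoped to one named feature."""
        if feature not in self.__dataclass_fields__:
            raise AttributeError(f"No such optional feature: {feature}")
        setattr(self, feature, True)

settings = PrivacySettings()          # defaults: nothing shared
settings.opt_in("share_location_for_weather")
assert settings.sync_contacts_for_invites is False  # untouched features stay off
```

The enforcement difficulty Natalie flags lives outside this code: someone has to verify that the core service really does work with every flag off, so that "optional" isn't a fiction.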
Yosef? Yeah, I would just say that when we're trying to reimagine consent, it has to take a broader outlook on the entire digital ecosystem, because it's no longer just an individual transaction between you and one company. I have two devices; there are issues of cross-device data sharing, whether you're inside the house on your home broadband connection or outside on a Wi-Fi connection. It could be the same company, it could be different providers; they could be sharing data, they may not be. Things like that need to get integrated into a consent-based system, and just having the transparency of knowing where your data is throughout the entire process is, to me, important. And then, Joseph, I'll let you have the final word on this. Gee, two things. Just this morning, I was putting together a small Samsung television set that I bought, and as I was doing it, I had to go through three rounds of consent. I looked at the material, and what was interesting to me is that I consented to none of it, but they were trying to make me by making it seem that I didn't have much of an option, or at least encouraging me in different ways. So I hit the skip button that was up on top, but they really made it sound like it would be really good if I accepted. The second thing is a little anecdote. Some years back we put out a study, in 2012, where we found that something like 90 percent of Americans did not want political campaigns to tailor ads based upon what the campaigns knew about them. And I was on a radio show in San Francisco with a campaign advisor, and I pointed this out. He didn't disagree with the data, but he said: get used to it, tough luck, okay? This is gonna happen whether people want it or not. It's great to say that consent should not be used this way, but in many cases it's hard to see how people could meaningfully consent to things that lawyers put together precisely so they can't understand them. I mean, I've spoken to people who write privacy policies. They're designed not to be understood by laypeople, partly because they're contracts. So if you start with that, how in the world can we say that consent is something a person can actually give? And then when you add the notion that the industry has no reason to care what people really think unless they get kicked in their behinds, it's a very, very complicated problem. I would argue that even the EU is going through this. I'm not convinced that the GDPR is gonna stop all of it; people are getting the equivalent of bribes to give up their data, or there's confusion, in the EU too. So it's a very, very complicated topic that is really gonna be the story of the 21st century. Well, thank you all so much for joining me. This has been an incredibly enlightening conversation for our attendees.
Just so you all know, you'll be getting a follow-up email with a recording of this webinar, as well as follow-up reading written by some of our panelists and readings our panelists have relied on to prep for and think about these subjects. And you can find all of us, I believe, on Twitter. So if you wanna keep the conversation going, please at basically any of us; we will be as responsive as we generally are on Twitter. But thank you again, all, for joining us, and I hope you have a really good rest of your afternoon. Bye. Thank you. Take care. Thank you.