Hi, welcome everybody. Thank you for joining us. My name is Cindy Cohn and I'm the Executive Director of the Electronic Frontier Foundation. And with me is one of our board members, Jonathan Zittrain. And we're here today to talk a little bit about what a positive future for tech might look like. I picked this topic, and Jonathan kindly agreed, because I feel like we are in a time when people are inundated with all the bad parts of tech and all the ways that tech is disempowering people. It's violating our privacy. We've built this surveillance business model that is tracking everybody. Facial recognition is very scary. There are all sorts of negative things going on, but we're not going to throw away our phones. We're not going to throw away our network. And we need to start thinking about how we get to a tech-positive future, because I firmly believe that we can't build a tech-positive future unless we can envision it. And I've been kind of joking that I want to move past the Black Mirror time and back to the Jetsons, as a way to reach all the people who care about tech passionately. You're listening to us over tech right now. There are certainly people who'd like to throw it all out the window, and I can understand why you might get there, but I don't think that's going to happen. So what are the ways we can build a tech-positive future? What are the pieces that we need to put into place? I think if we start thinking about it that way, it becomes really clear where the obstacles are and where we need to go. Jonathan was very game to join me in this conversation. Jonathan is also the head of the Berkman Klein Center at Harvard, a Harvard law professor, and well known for teaching you about law while making you think you're in a comedy club. So I'm really excited to have you here with us today. Thanks for having me. So let's start with my fundamental question: what do you think a tech-positive future looks like?
I think it would have both nouns and verbs in it. And by nouns, I think of — it's so funny how we have to struggle not to put it euphemistically — access to information, great content. It's weird how much our language has become this strange VC-speak. But underneath that, I think if you pry up the boards, there really are some great things available — things, nouns, that you would otherwise have to go to great lengths to get. So many of us, I think, have had the experience of being curious about something you heard about, or hearing a word you didn't understand, and being able to just punch it into a machine. And somehow you end up in a rabbit hole, and 45 minutes later: wow. And then you can summarize it with TIL — today I learned. It's really interesting what you can learn out there that previously would have required a trip to the library — which is still there and waiting for your visit — or consultation with an almanac, that kind of stuff. So that is in plain view, but not to be dismissed, and for a positive future to be able to maintain that kind of oracular ability seems great, with all the caveats that I know come from daring to use a word like oracular. And then verbs: the possibility of creating new relationships with people that you would never have a hope of encountering — either people like you, to reinforce stuff that you're interested in that maybe the people directly next to you aren't, or people who aren't like you, and a chance to see that the world is a lot different than just what's within my reach. That is one of the great promises of the web and the internet from years ago, and it's one that I think persists. And it's something that maybe is available to people who are in a position to go to a multi-year residential university at just the right tender time in their life to be open to new experiences. That's something, though, that should be available to everyone, and there really does remain the great promise of that.
And it's just, you know — again, I can feel the asterisks appearing in my mind as I say it; we know all the downsides of it too — but those seem to me like two key poles, noun and verb, of a positive future. I completely agree, and I think the third piece for me, on top of all of those, is control — privacy, security — that these are systems that serve you and serve people, rather than serving other goals. And it's okay with me to have a little mixture of other goals, but the balance needs to be in favor of us. So, you know, we want access to all the world's knowledge, not just the part of the world's knowledge that serves an advertiser's interest. And we want security, so that when we access others' knowledge or connect to other people, that's done in a zone where it's a little safer, so that people aren't put at risk. And to me a tech-positive future includes that piece of user control and protection — which of course articulates one of the obstacles that we have to overcome to get there. So to use the weird grammatical metaphor that I started with: it's like being able to choose where you fall in the sentence — whether you're subject or predicate. And if you're in the predicate, you should know that you are an object being acted upon when that is true, rather than the subject. Being able to think of it as an individually empowering, fulfilling experience would be key to that. And it's true that a lot of technologies over the years have been really good at giving the appearance of that, while there's a lot going on that makes the deal balance out in ways that don't respect what people either choose or would choose, whether or not they're presented with an actual choice. I think that's right. And I just feel like that piece really articulates the work that we need to do. But I completely agree with you.
I mean, I fell in love with the internet because it was the place where all the misfits — the people who are living somewhere they don't fit in at all — can see a world outside of that and learn things beyond what might be available within the limitations of physical space. And I feel like sometimes we become so used to this instantaneous communication with people all over the world, pretty much cost-free, that it doesn't seem like a new or cool thing anymore. And we can lose sight of the value of it — you know, resources like Wikipedia that give you this ability to connect; the way email works no matter what system you're on, no matter who you've pledged your fealty to, whether it's Apple or Google — your email will go through, right, because that protocol isn't controlled by anybody. Those kinds of things get hidden in the current world. And I think unless we pay attention to them, they may not stick. Well, I think you're right. Years ago, we thought the ways to get things done at scale were to do it publicly, like building a public road — there goes the information superhighway metaphor — or to do it privately, to have Elon Musk provide it, or some other arrangement that harnesses the forces of the markets. And I think the internet has long stood for an additional way that settles in nicely with the other two, in between the public and the private — you might call it the community, or the unowned. And a positive future, I think, would be one in which we make the most of the unowned space. And right now — again, it's such an interesting exercise to be making ourselves talk optimistically — there are a number of problems out there.
I think a lot of the problems today feel really tricky to solve, because there's not that much trust in the public — in particular, governments or regimes, if we want to call them that — nor is there particular trust in the private. Why is Facebook going to look after me? Why is that their job? And I think this extra space, which involves each of us taking on responsibilities and some kind of ownership, is an underexplored piece of potential solutions. It's not a magic wand, but it's a neat extra area that the internet has brought out for us. And it also quickly highlights the kind of work EFF does: such a big chunk of it is defending against impingement upon rights by governments. But there's also a realization that as power finds places to repose all over — not just in governments but in private companies — a source of impingement might be a private company, or a flash mob, or something else. Yeah, I think that's right, and I agree with you. I intentionally forced us to talk optimistically, because so much of what EFF does, so much of what I do, sometimes ends up being: I've seen the future, and it's really dangerous, and it's really scary. And I feel like we've won that battle — a lot of people who love the internet are really scared about the future now. And so now, as people who are trying to help build a better one, we've got to move to the next part of the conversation, or else we'll all just wallow there at the bottom, sad and scared. Maybe I can ask you a question, something I'm thinking about myself. A lot of EFF's work has been to preserve the right to browse and to interact anonymously, and to build tools that permit that to happen. In some ways I sense that when someone is doxxed online or otherwise subjected to harassment, part of the implicit justification from the people doing it — if they are people, which is not a given — is: well, you dared to share your name at all; that was your mistake.
And if there's any vision at all from somebody doing it, it might be of a world in which — kind of like the way magicians are conceived of — there's such power in a name that you would never share your true name. Is it a positive vision to imagine a world that comprises only pseudonyms and blockchain wallet numbers, and that we're all at that remove from each other? Or is it somehow good that there's quite quickly a human on the other side, with a name and a place, and we rely on social grace — and on the law, if it came down to it — to make sure that people aren't harassed, virtually or physically, simply because you know who they are? You know, my perspective on this has always been about choice and empowerment, right? There are situations in which I need to be Cindy Cohn — buying a house, say, or even other things at EFF. I'm the executive director; I need to not be anonymous. I need to be here as a human. Buying a house, by the way, is an interesting example. Yeah, no, I realized that when I said it. It's like, yeah, get a shell company. Yeah, exactly. And rich people do that, right? I mean, anonymity is available to people with resources and always has been. Getting it down to the rest of us mere mortals is part of it. And I just think it has to be somebody's choice to be able to do that. And when society enforces real names — look, we have a great example of a real-name policy. Facebook has a real-name policy. It really doesn't make Facebook a nicer place to be. It doesn't make it less susceptible to manipulation. It doesn't do any of these things. And so I think there's a bit of a fallacy there. And all the studies, including, you know, CIA- and government-funded studies about this, will tell you that forcing people to give their identities doesn't make us safer. It doesn't make us more civil. It certainly doesn't protect us against the kind of big scary things like terrorism.
So I think people are looking for an easy solution to this problem, and getting rid of anonymity sounds like a really easy one. But I think it, A, doesn't work, and B, won't do what they hope. But I think you asked a slightly different question. I did. I was going to say: well argued, counselor — why the government shouldn't force people to give their names. But my choice about whether to give a name is of course related to the environment I'm about to enter — will most people there use their names? I'm just wondering, would we like a world where, if you browse a corporate homepage or something, it's "meet our team," and, you know, our team is hotpants15 and Cosmo, and that would just be routine, whether it's United Airlines or a small startup or anything online? I mean, in that equilibrium, of course, I wouldn't be the first to say, well, I'm going to use my name. But what's the right equilibrium, if we can say? Well, I would think that the right equilibrium — I mean, again, I tend to feel uncomfortable telling other people what they ought to do. I do think in the context of corporate accountability, corporate responsibility, it's perfectly legit to say there needs to be a place for people to go where there's a real person, so they can get help. And, you know, maybe they're helped by an AI or a bot, but there's actually a there there in terms of accountability — if you want to engage in commerce, for instance, or other rationales where you might want to require that. But I think that's just a small piece of the world. And there's so much else of the world where you don't want that to be enforced.
And, you know, some of the problem with doxxing, to me, is that despite the fact that I've worked really hard, and EFF has worked really hard, to protect people's right to speak anonymously, so many of the services on offer require us to hand over this information, which is then just everywhere, right? And so your efforts to protect your anonymity are easily undermined. Well, it's of course, in a way, potentially the worst of both worlds: I'm handing it over because I'm a good doobie, so I obey the real-name policy on Facebook, which could make me a target, but then the person attacking is using various layers of anonymity. Yeah, but you know, the good news/bad news is that that anonymity is pretty fragile, even for the bad guys. There just aren't very many situations in which, if you had law enforcement or even private parties willing to try to figure out who it was, you couldn't. And sometimes that's bad news for me, because we help people all around the world. We help journalists. We help other people who are trying to get information out of very dangerous places, where lives are in danger. And their anonymity is so fragile. We just helped somebody in a foreign country who was getting information out. And the government that was mad at them brought a, I think, somewhat sham lawsuit in the state of New York, and got subpoenas issued to Google and a bunch of the other companies. One of the big tech companies said, no, this looks fishy to me. Another one didn't. The IP address goes out; the person's life was in danger. So, again, I feel like those situations are tremendously important. And we throw those protections away because someone's not nice to someone on the internet — or even in situations like doxxing, which are much more serious.
We have to be able to strike the balance better. Now, part of the story you just told was a story of a government up to potentially no good — in your legal term of art, somewhat "sham-y." And I'm thinking about that and wondering, as we're trying to envision a positive tech future: is it one, in the words of EFF co-founder John Perry Barlow — "Governments of the Industrial World, you weary giants of flesh and steel, I call on you in the name of modernity," something like that — "on behalf of the future, I ask you of the past to leave us alone"? And it's funny that the way in which we previously thought about the demand to leave us alone was just: don't regulate us. We're working things out here. It's not perfect, but these are going to be our solutions to our problems. We're building a new civilization of the Mind. Yes. And the interesting maneuver of the past couple of years by governments has been not to involve the regulatory apparatus so much as to just participate in that civilization of the Mind — as anonymous doobies with the great resources of a state and the implacable goals of a state, which may not even have to worry about balancing the books. And is the world that we are envisioning, whether it's anonymous or not, one in which I know if I'm talking to a state sock puppet? Yeah. I mean, I think it has to be. Those are the things we have to do, right? Because for all the reasons you say, it's not the same when you're a government as when you're just another person. And so I think there have to be ways that you understand, roughly, who you're interacting with, a lot of the time — but for governments especially. That scenario happened to involve a government, by the way.
But of course, we've seen this in civil cases all the time, where people bring a civil case and issue a subpoena to the Googles or Facebooks of the world and get the identity of somebody they're mad at, either directly or through IP addresses and things like that. But, you know, back to Barlow — who really was unmatched in his ability to bring beautiful imagery to packet-switched networking — this is the fundamental question about whether the internet would be better if governments just left it alone. Barlow never really thought that was going to happen. He was trying to create some space so there could be something else, another way to envision the world. Yeah. And he always said, I wanted to give freedom a chance, because I knew the forces of repression were going to come. And in the 1990s, that was actually a pretty reasonable thing to say. But Barlow didn't stay in the 1990s, and neither did the rest of us. And so "just create space and we'll sort it all out" was actually not a bad strategy in the '90s. I don't think it's what he would say today. It certainly wasn't what he said in the last few years. He got increasingly worried — he didn't like any big power having control over individuals. And so if that big power was corporate, he was just as concerned about that. But that wasn't a concern back when we had AOL and CompuServe and a zillion ISPs, whereas now most people in America have maybe one, if they're lucky, sometimes two. Well, and that then may offer another kind of plank for our what-does-our-positive-future-look-like. And it's one that's maybe more decentralized in the provision of services than it is centralized. And we've certainly seen a kind of re-centralization.
Jack Balkin calls this a de-professionalization, and then a re-professionalization, of the space — but with none of the normal checks and balances on the professions — with the handful of platforms that tend to be gatekeeping at the social or content level. And I guess we should remind ourselves that there are all sorts of ways to allow microblogging to happen, or to have sudden utterances tracked — call it Twitter — that you could do without a Twitter, or even a set of corporate Twitters. You could build it the way that email got built. Yeah, a collective hallucination. And I think it is absolutely the moment that we needed — maybe it's a little past the moment — to start talking about that. I'm excited: our friends at the Internet Archive are hosting the Decentralized Web Summit next week here in San Francisco, at the Mint. And there's going to be a track on policy along with a lot of building. The Archive are longtime clients and friends and close collaborators with us at EFF, recognizing this need and trying to build a space where people can come together and talk about a decentralized web. And lots of us EFFers will be there talking about various pieces of it, especially some of the legal barriers. There are big ones, like antitrust law not being up to the challenge of the way the world looks now. There are smaller ones, like the Computer Fraud and Abuse Act and the Digital Millennium Copyright Act and end-user license agreements that prevent interoperability and competition. And we have to think about all of those things. But I think the goal of making an online experience that isn't beholden to a few big corporations is really important. It's important for society as a whole.
For me, as somebody whose expertise is primarily in civil liberties — speech and privacy and control — this subset of a really good world is the part that EFF, with our mix of lawyers and technologists, really has expertise in, and they all turn on the same problem. It reminds me, too, of a very old case, eBay v. Bidder's Edge, in which somebody went to scrape eBay auctions and Yahoo auctions — remember Yahoo had auctions? Do you remember Yahoo? — and then produce an uber-auction site. Remember when Uber didn't mean Uber? Yeah, I know. And you could go there instead of just having it be eBay. And eBay was able to beat that back under state common law. And as a result, trying to build umbrella sites that could pull in little sites at the same time became much more difficult. Yeah, it's more difficult, and we've worked on scraping cases all the way through. Now there's a big one involving LinkedIn trying to block out somebody who was trying to build one. A few years ago we helped a little organization called Padmapper — and it's not just scraping to do the same thing; they were scraping to do something different. In the Padmapper case, this person was trying to add maps onto Craigslist apartment ads, right? Which Craigslist was kind of slow to do, for reasons. And so somebody said, I can add that in, and you can see where this apartment is. And, you know, they got sued — threatened out of existence. And so both the multiplicity of options in terms of the services you use, but also letting people build on, stand on the shoulders of giants and add extra things — both of these are tremendously important.
I just want to acknowledge the complication of that with something like the Cambridge Analytica story, which is: oh, Facebook is really centralized, but gosh, it ought to allow third-party apps and scrapers and such. And if it's not going to be a direct scrape, then a user could, with consent, click through and say, yes, grab my data, third party — and the third party can build all sorts of cool tools. And then: this is why we can't have nice things. Our desire for autonomy and privacy now runs up against our desire for a fully interoperable, let-the-data-flow kind of world. Yeah, it's complicated. We just published a piece a couple of days ago by one of our technologists about some ways that you can build interoperability and let this happen without running all the way into the problems of Cambridge Analytica. Right now what we have is these big corporate giants wanting the best of both worlds: they want to give our data to third parties when it serves them, and to make sure those parties can't have access to our data when they might compete. So we're in the worst of both worlds. There are hard problems along the way, but it's not like they're not sharing our data now. Well, I always thought it would be interesting if the solution turned out to be a form of — dare I say the word — digital rights management, with the user as the publisher, yes: being able to parcel out the data and even to yank it back, or, after you parcel it out, ask: it's 10 o'clock, where did it go? To have some form of audit trail. All the stuff that has traditionally been detested when it's some record company trying to make sure that every last ear has paid before it hears a song might be turned around for privacy. Well, I mean, there are lots of tools where, if users are in control of them, I'm much less worried about them than when they are not.
But we also have to face the reality that that world is a tough one to build — not that we don't try. I should say, too, I see we've been handed some questions from the virtual world. I also just wanted to remark: when we think of decentralizing the web, at just the very basic level, the idea of the web is a set of links — you click on something and you never know, once you click, in whose basement server your bits will then be landing. I think — and I've got some data shaping up to back it up — that more and more, this feeling of surfing the web is just going to Amazon, Amazon, Amazon. It's all Amazon Web Services. And that's a kind of too-big-to-fail that has regulatory implications and technical implications. Maciej Cegłowski — an extremely lyrical guy himself, up in the ranks of Barlow — at one point said that at Amazon's own rate, the sun will be a burnt cinder by the time Amazon ever flips a bit it shouldn't have flipped. And yet, if it goes down, boy does it go down. Yeah, for people who worry about how the innards of the internet are working, Amazon is a real single point of failure for a lot of things — something we do need to talk about a little bit. So I want to get to this great question. I think it's a good one. It says: do you see a path back to the choose-your-own-adventure-style web? Can it be done by the companies running the silos, or will it take pressure? Well, I'd be interested to hear more about a choose-your-own-adventure web. I think that's a quote from me. Is that right? Well, then you should unpack that. Whoever it is is very nicely quoting me back to myself. Yeah, I said in a recent interview that I thought, you know, in the 1990s we had a choose-your-own-adventure-style web, and now your adventure is chosen for you — specifically referring to the Facebook algorithms, and then Twitter's algorithms, but Facebook's primarily, that really decide what you see.
And the fact that people are increasingly recognizing that just being friends with somebody on Facebook doesn't mean that you actually get to see what they post — that Facebook is interposing something in between and picking the things that come up in your feed based on its interests, not necessarily yours. Well, and I should say, thinking again back to the '90s, I think our implicit sense was that when people go online, they have some mission in mind: I want to buy a plane ticket, I want to learn about something. They'd at least been seized by some reason to be there. Whereas now it's so suffusing — you go online and you're just like, eh, entertain me, show me some stuff. And it calls to mind Neil Postman writing about the evening news, which he was not a fan of. He didn't like television, basically. He thought it was a move from written to oral communication that carried all sorts of terrible things. What's the matter with kids today? They're spending all their time in front of screens. Exactly — they were TV screens. Yes. And he observed that with the evening news, they'd often try to segue from one story to another in the 22 minutes they had to tell you what was going on in the world. And sometimes they just gave up, and the announcer would pause for a second and say, "and now this," and then just move on to the next story. And it occurred to me: you look at a Twitter feed, or your own news feed on Facebook, and it's just "and now this, and now this, and now this." It's a cat. A friend of yours is dead. It's another cat. Putin is visiting. It's a very weird way that I think we take as normal — that's your point about it no longer being choose-your-own-adventure. And to be clear, it's not an adventure. It's just: help me pass the time.
There's a certain quiet desperation about it that we all share. It absolutely is a movement of power — agenda-setting power — from a user with a goal in mind to some entity with an algorithm that's going to decide what to show you, whether that's in your interest or in someone else's. Yeah. I mean, the quick answer to the question — will the companies do it, or will it take pressure — is that there's no question it will take pressure. It's pressure that we're already building, and there are lots of people joining in that pressure. But now, what are we asking them for? When we pressure them, what do we want? Well, ultimately, I want there to be more companies and more places doing this, so that there are alternatives. So: Facebook, you should design policies to allow sibling ventures, with interoperability, that have their own algorithms. Yeah — I look forward to the day when Facebook is just a node on the Mastodon network, right, as a way to frame this. There are decentralized systems, and Facebook could be one of them, but you could pick another. The other way is to put pressure on Facebook to give you more control over what you see, and more transparency into how it chooses what you see. We've written some posts about how we could tweak this. They already let you tweak some things — because of pressure by us and lots of other people, they give you a far more robust way to tweak the ads you see, and some of the other stuff you see. They don't give you so much beyond that — I mean, they let you say, oh, I want to see every post from this person, or no posts; they give you kind of marginal things. But I think we just need to keep pushing on that.
But on both sides — I don't think this happens out of benevolence. I would love for Facebook to suddenly decide that it was a public benefit corporation or a nonprofit and really turn itself into something that was doing that. I don't know that that's going to happen, but I think we can pressure them to get a lot better. At the same time, we're also working to make sure they're not so powerful — that it's not just them. I think we have to do both. Anybody who tries to present it as either/or is asking you to make a false choice. Yeah, but it's a real challenge to us to say what the public benefit would mean. Right. One vision you were talking about is a decentralized one, where it's left a little bit more to chance. There's not just one lever with one hand on it — let's say Mark Zuckerberg's — who is in turn in a position to be blandished or regulated by governments; rather, there are lots of levers, and some fragmentation, and a little bit of give between the steering wheel and the wheels of society's agenda-setting. And then maybe there's something — this is the idea that I've been working through with Jack Balkin, who coined this wonderful phrase, "information fiduciary" — about thinking: maybe if they're not going to be a public benefit corporation, they're at least going to acknowledge, given just how much agenda-setting power they have over a user who wanders in and says, all right, show me a feed, what it means to responsibly exercise that power for the user's benefit rather than for somebody else's. I love the term, because I think it really does reflect what people expect from a company that they are entrusting their data to. And it's true about Facebook.
It's true about lots of other companies that now have access to our data. Nobody goes to Facebook and says, I really want to go to Facebook because I want to make sure that Mark Zuckerberg can make a lot of money placing ads by understanding everything about me. You want to see pictures of your friends' kids. And so having that pendulum swing back over, such that their primary duty is to you — that doesn't mean there aren't business models. There are plenty of ways to place advertising without tracking everything. Billboards on the side of the freeway made a lot of money without actually knowing where you're going. So it's not like there are no business models here, but I'd love to see an opening up of the business models. And I think some of the data shows that the returns on targeted ads are tiny — tiny extra percentages of money that you get by knowing lots more data. And in the trade-off between those two for you as a user, that extra bit where they understand who you're sleeping with at night doesn't make them that much more money. So we ought to be able to say, wait a minute, you don't need this. And by the way, with location data, if you know where somebody is at night and where they are in the morning, it doesn't take very much data to figure out who they're sleeping with. I think we ought to be able to say, you know, that's a bridge too far. That's not you serving me anymore. That's the point at which "you serving me" switches over to "you serving your advertisers." So I don't think they're going to get there on their own. But I do think there are plenty of profitable business models without the part that is creepy. I think, too, that talking about the Facebook or Twitter feeds feels a little bit immersed in the present, or the recent past, rather than what might be around the corner. I'm curious how much it's going to be the case that five or ten years from now, it's like, yep, there's the Twitter feed, there's the Facebook feed.
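The point about location data takes only a few lines to make concrete. A minimal sketch, with entirely invented users, hours, and location buckets (none of this is from the conversation): flag any pair of users whose coarse location buckets overlap during night hours.

```python
from collections import defaultdict

# Synthetic pings for illustration: (user, hour_of_day, location_bucket).
# A bucket stands in for a coarse location, e.g. a cell-tower area.
pings = [
    ("alice", 23, "cell_7"), ("alice", 7, "cell_7"), ("alice", 13, "cell_2"),
    ("bob",   23, "cell_7"), ("bob",   7, "cell_7"), ("bob",   13, "cell_9"),
    ("carol", 23, "cell_4"), ("carol", 7, "cell_4"), ("carol", 13, "cell_2"),
]

def likely_cohabitants(pings, night_hours=frozenset({22, 23, 0, 1, 2, 3, 4, 5, 6, 7})):
    """Return pairs of users who share a location bucket during night hours."""
    night_cells = defaultdict(set)  # user -> set of night-time buckets
    for user, hour, cell in pings:
        if hour in night_hours:
            night_cells[user].add(cell)
    users = sorted(night_cells)
    pairs = []
    for i, a in enumerate(users):
        for b in users[i + 1:]:
            if night_cells[a] & night_cells[b]:  # any shared night location
                pairs.append((a, b))
    return pairs

print(likely_cohabitants(pings))  # → [('alice', 'bob')]
```

Even with hourly, tower-level granularity and no content at all, the night/morning co-occurrence alone separates the cohabiting pair from everyone else, which is the "you don't need this much data" point.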
I wonder. I have found myself particularly taken with these digital concierges, the Siris and the Google Assistants and the Cortanas and the Alexas. You had mentioned something about people's expectation when they come on: that they'll be treated respectfully in some way, that they won't be kind of abused. And Randall Munroe came up with a distinction between what he calls tool and friend. We used to use the digital space as a kind of tool. We come on, we type something into a search box, we get results, we're interacting with it. We understand it's a tool that might have limits, that might show us things we don't want. Friend is a different matter. And there's a kind of motion, with the use of chatbots and AI-like personae, to really get people into a mood where they're like, yes, I'm asking my friend Alexa for help with this. And the advertising is very much: here's a friend you'll welcome into your home. And, all right, there are certain duties of friendship. Sometimes friendship gets monetized. The Tupperware party is a phenomenon of many years ago, but the idea that you'd invite all your friends over and then try to force plastic upon them was an abuse, or a use, of friendship, and that's like a constant Tupperware party now online with Alexa or something. When Alexa says, oh, do you want to do X, I can't tell. Wait, are you getting a commission for this? This would be a great time, while these are still marginal curiosities, to put down some ground rules about the boundaries of friendship. Yeah, I think that is a great way to think about it. I do think that's how people think about these services. And, you know, the other piece of this is that I love tech. I want computers to help me. The answer for a lot of people is just: don't do it. Don't put one of those in your home. Don't do that kind of stuff.
And I don't have one in my home yet, because the deal isn't the deal I want. But I want a deal that works for me. I like computers. You want the Apple butler from the 1989 video? Or, you know, the Jetsons. I'm serious about the Jetsons. I know that makes me old, but it's just weird that the Jetsons is a skin on the Flintstones, so it's both a past and a future. Well, there are also, you know, the gender issues in the Jetsons. Don't get me started. There are lots of bad things about the Jetsons, too. We can update it. We can do it. But I've got another question. The question is: should the U.S. take an example from the E.U. data laws? I think there are lots of good things in the E.U. data laws, and some things I am not comfortable with, and some things that are going to create tensions. When we're talking about trying to build a world where there are a lot of competitors to Facebook, a lot of other options, some of the E.U. data protection laws are going to have to be interpreted in ways that we make sure don't make that harder. Your Cambridge Analytica example is a good one. If the answer to that is, well, once you have data, you can't do anything else with it except in some pretty extreme situations, and that isn't interpreted to make sure we also allow people to take their data and go, or share their data with something else, then we're going to end up enshrining some of these big companies. You can build a regulatory system in which you make sure Facebook is the last social network we will ever have. The E.U. data protection laws are certainly not aimed at doing that, and I think they are at this point informed enough that we can interpret them in the right way.
But if we don't participate, if we let the attitude of "I hate Facebook, therefore I want more rules on Facebook, therefore I want to make sure Facebook can't do certain things" go forward without paying attention to how those rules will impact the next one who wants to come along, we can do the wrong thing. So that's one of the things I worry about. The other thing I worry about is the right to be forgotten, and some of the things that could have the effect of making sure that, frankly, rich people can keep poor people from finding out what they've done in the past, which is a big worry about the way these things work. I mean, it's not lost on me that one of the early situations that led to the development of this was, you know, a European aristocrat who'd been, sure enough, parading around in Nazi uniforms and wanted to make sure that wasn't on the internet anymore. That's not the only reason. And, for those of you watching in Europe, we apologize for the long beep sound that just blotted that out. Oh, it's true. Exactly. So we do need to pay attention and make sure that access to information, which is one of our really clear values here, is balanced carefully with these rules. But this idea that if you collect people's data you have a responsibility to them for what you do with it, that it's not simply that once the data is given up the user loses all rights to whatever happens to it next, is enshrined in the European data laws in a way that obviously isn't yet in the American system. So there's a lot that's good in them, too. It's just, as with lots of things, it's not black and white. Yes. On your point about the right to be forgotten, I agree with you.
It is a response to a genuine problem whose implementation is absolutely terrible, especially given that it drops into Google's lap having to decide whether something meets this balancing test. It's not being publicly decided. And if Google decides that it stays, they could face huge penalties if they're wrong; if Google decides that it goes, they face no penalties, and it's gone, and nobody knows it's gone. That's, of course, a structural problem in the system. But it's also a neat moment to realize that the problem the right to be forgotten is meant to solve is that these search engines gobble up a ton of stuff and spit it back out again, and can greatly affect somebody's reputation. I mean, let's be clear: search engines work. People consult them for stuff. The centrality of there being only one major search engine, Bing's efforts notwithstanding, means that Google is oracularly declaring the sum of someone's life, because nobody clicks past the first ten links. And again, that's still Google as tool. Google is evolving toward friend, as are the other search engines, where you have the Knowledge Graph: the organic results on the left side of the page, and on the right Google's own attempt to bake a quick casserole of that person's life. And it was just yesterday or two days ago that Google declared somebody famous dead. Oh, I didn't hear about that. Yes, it said they had died. I forget who it was. Somebody's probably typing it in the comments right now. Yeah, yeah, tell us. But I had a very nice colleague for whom the Google Knowledge Graph, the little summary thing, said: yeah, he died in 2011. And he's, like, still alive. And, you know, should he just click on the feedback link at the bottom? Feedback: I am alive. And Google's like, thank you for playing. Don't call us, we'll call you. Where's your proof? Right.
That's the kind of stuff where the era is passing in which I was even sympathetic to Google's claim of, hey, we're just a window on the web, and the web has bad stuff, so don't blame us when we give you a window on the bad stuff. As it shifts from tool to friend, it is taking on some responsibility. Especially if the way you're getting the answer to the question is, again, by asking Alexa, right? That's crazy. Yeah, I think that's right. And figuring this out is why I love the data fiduciary idea. I know this is very lawyer-geeky, but if you're a lawyer and somebody's a fiduciary, that actually gives you a set of obligations that you owe. And not just lawyers; accountants and other people have these kinds of duties. It might not be the exact same list, but it's a neat way to frame the difference, which I like also as between tool and friend, of how we should think about this. And how do you switch from tool to friend? Well, one factor is: if you're the only one, then that obligation is different. Or if you're the overwhelming majority one. Those are the kinds of things that go to whether you're one or the other. So I think that's exactly right. Let's see. Let's go back to a sidebar from my original question. The other one is, you know, we're in a time now when free speech online has become... well, I actually think there's an autoimmune disorder going on right now, where people who want us to attack our own values are trying to frame questions such that free speech becomes something that we equate with Nazis, as opposed to, you know, Nelson Mandela, or any of the other values that we want. And I'm wondering if you're thinking about how we get to a place... you know, what does our world look like in terms of free speech?
You know, in our good-tech future, how do we get to that place, given that right now, and we of course just saw them at HOPE this weekend, there are active agents who are trying to frame the question as if, if you care about free speech, you have to put up with somebody harassing people? I don't think that's true, but I think they are winning in terms of getting us to attack our own values as the problem. Yeah. Well, there are so many different directions in which to take that question. For the online world, it's much harder than in a synchronous space where, you know, only one person speaks at a time. And when does that become a terms-of-service question, or a violation of a code of conduct? And again, in a voluntary collection of people who want to set their own agenda and decide for their own mission what level of speech works for them, maybe that group gets to decide. Online, there have traditionally been ways to say, oh, well, I'll just tune out speech I don't like. But I think there's a sense that the harm speech can cause isn't just a spammy, interrupting-my-workday harm, but rather: oh, this person is telling that person something that will bear on me or on society, and that person is being in some way taken advantage of. And my hope kind of gets back to the question of whether states should be able to play in our sandbox just wearing citizen hats. I think your answer was a tentative no, with all rights reserved about how to get there. That's at least what I took it to be. Fair enough. Yeah, yeah. And I would be interested in ways to establish identity and reputation that don't hinge on the provision of a name or a set of physical characteristics or something, except as they might bear on what you are trying to offer for the situation. If you want to be able to say online, by the way, I'm a veteran, and this is what I experienced in a theater overseas...
It might not be bad that there would be something you could offer, and this is where somebody online just shouted "blockchain," something you could offer that would say, yes, this person served. And maybe they want to say, I served in somebody's military; or, I served in the American military; or, I served from this year to this year. To be able to parcel out and validate facts about themselves, or to say things like, well, whatever else is true, I will tell you I have never been to Russia, and to be able to say that with some form of certitude, again without disclosing fundamental identity, might help communities form online. Breast cancer survivors; you name the organizing principle of a community. Or somebody could gain a reputation, by choice, for how they've behaved online: I've put in my time, maybe you should listen to this thing that doesn't sound great at first blush. That could give people the tools they need not to have to experience the speech of others, and have their own speech judged, in an "and now this" way, as just a kind of stateless utterance, just another piece of spaghetti landing on the wall. I don't know. It's a very elliptical way of thinking about free speech, but it's about giving people the tools to surround the words with some sense of humanity and community, and not letting trolls take up those mantles without having to work for it. I would be very curious. I know there are people working on versions of this, some of the self-sovereign identity ideas that are trying to get at this, and I'm curious to see how they develop. I admit that I have a hard time envisioning it, and especially envisioning it consistent with the kinds of needs for anonymity that people often have, because we know re-identification is so easy.
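[Editor's note: the selective-disclosure idea described here, proving one fact about yourself without revealing the rest, can be sketched in a few lines. This toy uses an HMAC as a stand-in for a real digital signature, which means the verifier must be the same party as the issuer; real self-sovereign identity systems use public-key signatures or zero-knowledge credentials instead. All names and claims are invented for illustration.]

```python
import hmac, hashlib, secrets

ISSUER_KEY = secrets.token_bytes(32)  # held only by the attesting authority

def attest(pseudonym: str, claim: str) -> str:
    """Issuer signs one (pseudonym, claim) pair. Claims are signed separately
    so the holder can later reveal any subset without exposing the rest."""
    msg = f"{pseudonym}|{claim}".encode()
    return hmac.new(ISSUER_KEY, msg, hashlib.sha256).hexdigest()

def verify(pseudonym: str, claim: str, tag: str) -> bool:
    expected = hmac.new(ISSUER_KEY, f"{pseudonym}|{claim}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

# The veteran obtains several independent attestations under a pseudonym...
creds = {c: attest("user_7f3a", c) for c in
         ["served: yes", "branch: army", "years: 2003-2007"]}

# ...and chooses to reveal exactly one of them to a forum.
shown = "served: yes"
print(verify("user_7f3a", shown, creds[shown]))           # True
print(verify("user_7f3a", "branch: navy", creds[shown]))  # False
```

The design point is that each fact carries its own proof, so disclosing "served: yes" says nothing about branch, years, or legal name.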
I think it will be interesting to see whether any of those kinds of ideas can gain traction, and what they do in this brave new world of interactions where it's hard to know, and where we've got agents provocateurs. We've got people who are really trying to use what we call trolling online, but somebody looking at what happened at HOPE was pointing out that this is what the Westboro Baptist Church does. They go out in front of a protest, walk right up to some line of free speech, and try to get people to react against them, and then they're the victim, who then presents a case that will have, say, the U.S. Supreme Court draw new boundaries on the First Amendment, or not. Which I think can be very, very dangerous, because it takes people who care, and whatever the politics are (you can imagine the politics on the other side as well), gets them to turn around and say the problem is speech. That's not what most of us are experiencing. EFF has been tracking problems with speech online for the 28 years of its existence and the 18 years I've been there, and I will tell you, people with less power are the ones who get silenced, not people with more power. There's a weird switch now, where some of the people with power are pretending they don't have power and challenging these things. But that's not how it works most of the time. Most of the people who are silenced by asking a corporation or a government to come in and silence somebody are people with less power, not more. But I think we have to figure out how to recognize this when it's happening and call it out, because very often, certainly in the context of HOPE but in other instances too, there's behavior going on. It's never just the speech. There's a behavioral thing, harassment. You don't have a free-speech right to harassment. We've long coexisted with these two ideas: one that we protect, and one that we don't allow.
So it's not as if the difference between those things isn't well established elsewhere, and I think people are trying to blur it now. But my worry is that there's a successful campaign, and I do think it's intentional, to convince people that free speech is the problem, rather than that these assholes are the problem. Sorry. They hate us for our freedoms. Yeah. Yeah. So anyway, it's a thing. So I'm told we have five minutes. If you've got questions, this is your last chance. And I guess the other thing I wanted to think about with you is that we've talked a little about free speech, but we haven't talked about intentional misinformation. I hear in your idea, that we give some kind of human context to people's speech online, something that might also help with misinformation and disinformation campaigns: ways to help people tell the difference between somebody who's honestly mistaken and something that is deliberately not honest, and making that easier to understand. "Someone is wrong on the internet" is always going to be true, but it feels like there's some weaponization going on around that, and a lot of confusion.
Well, in some ways this bears on the tool/friend distinction. When people are online, they're going to have a variety of goals or modes. When you're watching television, it's one thing if you know you're watching the news, another if you know you're watching a football game; you kind of know what you're doing. The "and now this" problem means you're switching channels every five seconds. And if I'm arguing over some link that was shared from the Denver Guardian, not a newspaper, but it looks like a newspaper, a Potemkin village of a website, truly a storefront, with two articles meant to look like there are a lot of others, but nobody ever clicks through, they only read the headline, and it's shared on Facebook, and Facebook drags in the headline and a big photo, and it says something false, then I think it is entirely possible that a lot of the people sharing it or defending it don't think it's true, or are indifferent to whether it's true. It's just a team exercise. If it's something against Hillary and they don't like Hillary, or something against Trump and they don't like Trump, then it's just something they're going to share, because it's fun to do. I keep envisioning: what if, right before the Super Bowl one year, they came out on the field and said, good news, everybody, there's been a negotiation breakthrough, we don't have to have a game tonight, the teams have come to an agreement over who should win this year, and they're going to share the trophy? People would be disappointed, because it was not a truth-seeking exercise; it was a game. And if you're playing a game, then this is just another move in the game. And for somebody earnestly to say "but the Denver Guardian doesn't exist" doesn't implicitly indicate what they're doing. Are they playing a game, or are they actually trying to figure out whom to vote for in the election? Right. And, you know, if one of the candidates is
a murderer, that would be useful information as I cast my vote. And in that case there are all sorts of telltales, I think, that could be given, some of which aren't about the content at all; they're just about the state of the source. The Denver Guardian didn't exist last week, just so you know. I mean, there was a time on eBay when sellers who had just signed on had a little sunglasses icon next to them, which I thought just meant they were cool, but it was meant to tell me that they were shady, that they weren't established yet. If they're selling, say, a flat-screen TV and they've got the shades, maybe you shouldn't. And I would love to see Alexa actually have a tone of voice that could indicate doubt. If I say, Alexa, what's two plus two, of course it says four. But there are some of the things Alexa and its kin have been led astray on. "Is Barack Obama planning a coup?" was one of the famous questions that Google Assistant, back in the day, answered yes to: according to debate.org, Barack Obama is planning a coup d'état. It couldn't pronounce "coup d'état," and the engineers were like, we have to fix the pronunciation, and I'm like, no, that's not the problem. You could see Google Assistant instead being like, well, here's a sandwich I found on the street; you could eat it, but... You know, being able to convey that texture, rather than on/off oracular certainty, could be a way of helping. And the last thing I would invoke: trust in institutions, in the American context but around many parts of the world as well, is at an all-time low. It's hard to find who we believe in anymore among religious institutions, political institutions, community institutions. Libraries are still way up there in trust, and for good reason. Now, maybe the minute they enter the fray that could shrink real fast. But it's just amazing to me to think that there are people who are information
specialists. They went to school to learn how to find answers to questions and have those answers stick. I would love to see a Facebook or a Twitter where, if you encounter something, you could press a button and say, huh, I'd like to follow up on that. I'm just curious whether another shoe drops. It sounds like Hillary has killed several people; I'd just like to know, are there more, are there fewer, did it turn out not to be true? And if you click on that, Facebook would say, all right, we'll get back to you. And if enough people are clicking with curiosity about it, there'd actually be a team of librarians on call, in situ where they are, in libraries across the land, often waiting for their next customer, their next patron. They'd work together asynchronously, at a distance, and say, all right, here's this thing claiming such and such, and they'd write up a little report, maybe a three-librarian panel, and you'd get to see their dialogue back and forth. And then the report would come back to the people who had asked. It's not forced on anyone, but you were curious; wouldn't you want to know? Here's what it turned out: it was the Russians. Meh, what are you gonna do? I think that could be a useful system for those who genuinely are curious and are triggered by a moment's outrage over something known to be a triggering thing, so that when the outrage comes fast and furious, it could help them out. And then at some point, instead of a three-librarian panel, make it a two-librarian panel with one high school student, and the high school student is getting graded on how well she participates in the panel, and then you're apprenticing people to the act of finding truth. There aren't many shortcuts to it, and I think we've reached as far as we can with FactCheck.org and "X Pinocchios" ratings; that's just part of the frame now. Yeah. Sorry, I went on a whole rant.
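[Editor's note: the curiosity-button workflow sketched here, enough clicks dispatch a librarian panel, and only the people who asked get the report, is simple enough to model. The class, names, threshold, and panel size below are all invented for illustration; a real system would handle panel rotation, deduplication, and scale.]

```python
from collections import defaultdict

CURIOSITY_THRESHOLD = 3   # clicks before librarians are dispatched (invented)
PANEL_SIZE = 3

class CuriosityDesk:
    """Toy model of the 'press a button, librarians follow up' proposal."""
    def __init__(self, librarians):
        self.librarians = list(librarians)
        self.askers = defaultdict(list)   # claim -> users awaiting a report
        self.reports = {}                 # claim -> finished panel report

    def click(self, user, claim):
        self.askers[claim].append(user)
        if len(self.askers[claim]) >= CURIOSITY_THRESHOLD and claim not in self.reports:
            panel = self.librarians[:PANEL_SIZE]   # round-robin in a real system
            self.reports[claim] = f"Reviewed by {', '.join(panel)}: unsupported."

    def report_for(self, user, claim):
        # Only users who asked get the follow-up; it's never forced on anyone.
        if user in self.askers[claim]:
            return self.reports.get(claim)
        return None

desk = CuriosityDesk(["lib_a", "lib_b", "lib_c", "lib_d"])
for u in ["u1", "u2", "u3"]:
    desk.click(u, "candidate X is a murderer")

print(desk.report_for("u1", "candidate X is a murderer"))
print(desk.report_for("u9", "candidate X is a murderer"))  # never asked -> None
```

The design choice worth noting is the opt-in delivery: the report flows back only to those who pressed the button, which matches the "not forced upon them" caveat in the conversation.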
That's okay. I love libraries, and I believe that our tech-positive future has to include more librarians. We love them. I love that that's part of our positive future. Thank you so much; I know we've run out of time. Thank you. But thanks, everybody. All right.