Welcome to The Breakdown. My name is Oumou. I'm a staff fellow on the Berkman Klein Center's Assembly Disinformation Program. Our episode today features our very own Jonathan Zittrain. Jonathan is the George Bemis Professor of International Law at Harvard Law School. He's also a professor at the Harvard Kennedy School, a professor of computer science at the School of Engineering and Applied Sciences, director of the Law School Library, and co-founder and director of the Berkman Klein Center for Internet and Society. Thank you for joining us today, Jonathan.

It's my pleasure. Thank you, Oumou.

Very good. So our Assembly program is wrapping up for the 2019-2020 year, and Jonathan is on as the faculty advisor for the Assembly program, and also as a co-founder and director of the Berkman Klein Center, where the Assembly program is based. So Jonathan, can you talk a little bit about yourself, a little bit about the Assembly program and how it came to be?

Sure. At one point, we had gotten word of one of our fellow universities getting, on rather abrupt notice, a $15 million grant to improve the state of cybersecurity. That's a lot of money. We were certainly thrilled for our peers, and then couldn't help but brainstorm: gosh, if we, unasked, had $15 million appear, which I won't say has happened yet, what would we do with it? And how would we deploy it in a way that did justice to the confidence of whoever would be entrusting us with that much money?

What emerged from that discussion was a sense that, in some ways, the reach of academia is limited because the only people at the core of academia are academics, just as the only people who write books are writers, by definition. But what if the experiences of people who weren't dispositionally inclined to sit down and write 250 manuscript pages also found their way into books in the first person, as narrative? Well, then you'd have people who weren't writers writing. And what would it mean to have people who weren't just academics in an environment true to the highest ideals of academia: solving problems, examining questions and our own assumptions about the answers to those questions? What if you could bring them together in our space, at first around cybersecurity, later around the ethics and governance of AI, and more recently around disinformation? What if you could bring them together around these really hard problems that transcend traditional disciplinary boundaries within academia, and that transcend the ability of any of the actors who are maybe most in a position to do something about them? It's kind of out of their lanes, too. Classically, do we want Facebook unilaterally deciding what's true and false? By Facebook's own account, even Facebook does not want to be doing that. And they're right. They shouldn't be. All right, well, then who? What? What relationships?

So that captures those sorts of questions: a problem that is big, possibly getting worse, having very significant impact, but that no one party or even group owns trying to solve. What would it mean to try to gather people around that and work on it? Our first efforts were generally on cybersecurity, and more specifically on what we call the going dark problem, as framed especially by law enforcement: that a bunch of stuff they used to be able to get if they could manage to get a warrant, like access to the contents of your cell phone, is now maybe beyond reach.
Because if you're not willing to cough up your password (a big if, to be sure, because if they've got you, maybe they can get the password out of you), but if you're unwilling to cough it up and they really want to get in there, even though they have the warrant, they don't know that password, and after ten tries it vanishes, that's seen as a problem. Our group, which included government officials, civil libertarians, academics, and human rights folks, had really good discussions about that, and ended up, in that case, putting out a report, Don't Panic, explaining why, while you can come up with an example of a mobile phone, or, as the district attorney of Manhattan put it, a whole room full of them, that you can't get into even though with your warrants you should be able to, there's also a whole sea change going on in the world, in which we have all these devices, like our webcams and our mobile phones, that could, with a warrant or other legal process, be made to turn on and surveil us all the time. And yeah, there's a bunch of that. So in a way it was saying to law enforcement, don't panic, and to the civil libertarians, maybe you should panic, because there's a bunch of other fronts on which to worry. That's just an example of the sorts of things our group came together to do in that instance. In the intervening years, it's taken up other issues as well. And most recently, as you know, we've taken up the problem of disinformation. How big is it? How bad is it? How would we measure it and know if it's getting better or worse? And whom, if anyone, would we trust with an intervention designed to do something about it?

And I should say quickly, the Assembly program as it's evolved now has roughly three pillars, three tracks. One involves our students at the university: as you have graduate students looking for thesis topics across multiple departments, or students like law students looking for meaningful clinical, applied, experiential work rather than just theoretical or doctrinal stuff, we come up with problems they can lend their talents to, and have them come together as a cohort to do independent work and meet faculty from other departments that they normally wouldn't have a chance to come across. So that's the Assembly Student Fellows.

We also have the Assembly Fellows, who are people from industry and outside academia, from nonprofits and NGOs, who are in the trenches, working day in and day out. That doesn't mean they're running a particular company, but they're the people within the engineering rooms of those companies trying to make a difference. By calling them together, having them spend some time on campus here full time and then scatter again, their companies give them a vote of confidence for their professional development, but also a vote of confidence in what you'd call, as a lawyer, pro bono work: having them work in the public interest with one another on solutions that might well require industry cooperation or standardization or interoperability. Bringing that group together along with the academics can maybe yield something interesting. That was the premise. And for several years now, our Assembly Fellows have bonded as a group, done multiple projects, and presented those projects, some of which persist today with their own lives independent of the Assembly program, thanks to their work.
And then the third pillar is what we call Assembly Forum. That's trying to get some of the senior officials, the senior executives or their representatives of companies, the people thinking at the corporate or governmental policy layer about what should be happening and who should be doing what, and get them talking with one another, setting the standard of trying to have insights or ideas that they wouldn't get in their own natural environment. Those are people who might well be thinking about this kind of stuff all the time, and trying to get them to see it from a new angle can be a nice sort of hurdle to set for ourselves. So those are the three pieces of Assembly.

So the Assembly Forum is the piece of our program that is for experts across sectors. Just thinking back over the course of the year, we covered a lot of ground. In the first discussion in October, we grappled with problem ownership and really tried to pin down definitions of the terms that are most commonly used in the space. And then as the year progressed we tackled issues around disclosure and impacts, like how we can know quantifiably that there was a causal link between a piece of false content online and how someone goes and behaves later. Are there any issues that we discussed over the course of the year on which you maybe experienced a perspective shift, had your mind changed, or maybe thought that you changed someone else's mind?

Huh, I wouldn't bet on that. But I certainly found my own thinking deepened and changed on some things. I came to an appreciation from our discussions, first, that you certainly can't just treat disinformation as an undifferentiated scourge, terrible across the board. Some of the slicing and dicing that academics are wont to do, and that we found some of the companies are doing too as they try to operationalize measuring and countering it where they want to wade in, really makes a difference: all right, what are we defining as misinformation? To some listeners, this may be a kind of new distinction, but everybody was new at one point.

The difference between misinformation and disinformation.

Absolutely. With misinformation being, oh, you just got it wrong, and disinformation being that you know it's wrong and you're trying to get other people to get it wrong, the latter being propaganda. And even that isn't sufficient, because you would think that, all right, if some government cooks up a piece of disinformation in a lab and releases it, that is the disinformation. But if somebody repeats it credulously, they really believe it themselves, they're engaging in misinformation with the disinformation they got. And it might well be that if you're a platform conveying or amplifying that speech, you would react to it differently if you know the actor intends to deceive versus the actor just being a credulous vehicle for it. So being more careful and precise, so that we can cut to action that more narrowly addresses the worst aspects of the problem, seems really useful, in a way that otherwise makes the problem feel so inchoate and overwhelming that it's hard to even start scooping out the ocean with your spoon.
And I think that in the particular instance of political misinformation and disinformation, there are some really interesting questions. Suppose you have a platform like Facebook, or a government intelligence agency charged with protecting the nation and looking for threats, and they see: here's another government, and yep, they are absolutely trying to salt these falsehoods. And whether or not they're even false, they're trying to make it look like whatever is being said, likely false, is coming from, say, fellow Americans. Now what? You would think, well, at least you should say what you see. If I'm on Facebook, I would prefer that if I saw something that was supposedly from a neighbor, and it turns out it's from somebody thousands of miles away getting paid by their government to trick me, I should know about that. But it's very complicated. One of the hypotheticals we entertained as a group was: all right, suppose the US government can say, absolutely, with great certainty, here is disinformation, it's coming from this other country, it's targeting this political candidate. Do you tell the candidate? If you tell the candidate, what do you tell them? By the way, another country has it in for you, that is all? Or do you say, here are the specific posts? And then do you tell them, by the way, it's classified, so you can't tell anyone else? Then why did you tell them? What are they supposed to do with it? And if you tell everybody, first, does that ruin your sources or your methods? And second, even if you could tell them without having to balance against that, are you maybe doing the work of the adversary? Because now you have people questioning whether everything they see is, in fact, foreign propaganda. Those are real questions. I'm not trying to have answers to them all, but thinking about how we will, when some of us know what's going on and are prepared to share it, or have an inkling and aren't certain and maybe want to share that lack of certainty, figure out the right way to do that, general versus specific, that advances the cause against disinformation? That seems to me a better-articulated question than I had going into it.