So my name is Kendra Albert. I'm a clinical instructor here at the Cyberlaw Clinic and a lecturer on law here at the law school, and I have the honor today of getting to sort of host a conversation with Jonathan, who's been a friend and a mentor for a long time, on: can tech be governed? Which is an easy question that I think we'll be able to dispose of quick within this hour span. So I'm gonna first introduce Jonathan, and then we're gonna talk for a little bit, especially given his history in the space and the long history of work that he's produced, about how he thinks about this problem now, in the year of our Lord 2019, the current trash fire. And then what I'm gonna ask you to do, so I'm giving you some warning so you can prepare for it, is I'm gonna ask you to talk with your neighbors. I know that's not traditional to the Berkman luncheon format, but there's so much knowledge and wisdom in this room and I'm excited to tap into it before we go back into a full group conversation. I should note, in case you are unfamiliar with the rote announcement or missed the very prominent sign outside, this talk is being recorded, and I believe it is being live streamed. I think it's not being live streamed, but it will find its way online. All right, so even if you're not held accountable for what you say immediately, you may be held accountable for it later. And I guess that goes for Jonathan and me as well, so here we go. But just to start off with your bio, since you have so many fancy titles that it'd be truly sad not to share them. 
Jonathan Zittrain is the George Bemis Professor of International Law at Harvard Law School and the Harvard Kennedy School of Government, Professor of Computer Science at the Harvard School of Engineering and Applied Sciences, Director of the Harvard Law School Library, and Faculty Director of the Berkman Klein Center for Internet & Society. He also serves as the Vice Dean for Library and Information Resources. I'm sure I missed some, but. I'm working on something at the dental school, but it's not come through yet. So his research interests are the ethics and governance of artificial intelligence, battles for control of digital property, and a whole bunch of other stuff which I actually won't list because it'll take too long. But, you know, we chatted beforehand and I'm not gonna do the thing where I force him to read from his own book that he wrote in 2008, you know, "Would you read into the record your previous statements on the topic?" But I am going to read it to y'all. So, you know, in 2008 you wrote this book called The Future of the Internet and How to Stop It. Spoiler: I'm not sure we stopped it. I'm working on a sequel right now called Well, We Tried. Yeah. And in it, and I'm gonna paraphrase your theory, you talk about the power of generativity and generative technologies, generative platforms, ones where users can build their own ways, trod their own paths through building their own things. And at the end of the book, in the conclusion, you wrote: the point at which a generative project, that's my insertion, is worth the effort of bad people to game it is a milestone of success. It is the token of movement from the primordial soup that begins the generative pattern to the mainstream impact that attracts the next round of problems. Well, the internet has succeeded, I think. It is worth the effort of bad people to game it. So here we are, meeting the problems. 
I wonder how your thinking on the value of generative systems has changed since you wrote that in 2008. Great question. You know, I think the paean I wrote in 2008 to generativity, a great word that I think might have been suggested by Julie Cohen in a workshop as I was otherwise arm-wavingly talking about how excited I and others were about the future of the internet. The generativity thing is about the idea that anybody could contribute to a technology. Now, of course, anybody, oh, do you really mean anybody, but gosh, compared to the status quo. And as you know, the book took some pains to talk about typical consumer-facing technologies and the way in which they were appliancized, which is kind of like: congratulations, here's your technology, enjoy, but only enjoy in the ways that we allow you to. And as technology gets more sophisticated, the theory went, that could mean that the appliancization and the control by the vendor, or whoever can influence the vendor, could become that much more comprehensive. It's one thing to be like, darn it, why can't I set my refrigerator to go below negative 10? Like, I have some specimens I really wanna keep cold, I feel my freedom impinged upon. That lack of affordance is magnified when the refrigerator can spy on you or can be hacked from afar. And that resulted for me in a lot of thinking and even some scholarship post-2008 about the internet of things and what it would mean. So hold on, I'm gonna do the part where I interrupt you, for the first time, now that we've got you started. Yeah, yeah. So what's the security harm of not being able to set it below 10 degrees? Is it just that your milk is, that some bad attacker can't deep-freeze your milk? So I thought at first you were saying, how unfree do you feel to be unfrozen? Like how unfree is it not to be able, and it is not that unfree. 
And we even have come to such expectations, initially just grounded in physics and later grounded in what vendors of products might have in store for us. Still grounded in physics. Still grounded in physics, somewhat, about what the products can and won't do. And so I guess the worry now comes when you think of a general-purpose PC and a general-purpose internet. The PC can be reconfigured to do anything at any time. The internet can communicate between any person and any other person at any time, so much so that, and I remember being amazed at this because it was in such plain sight but I didn't quite appreciate it at first, there's no main menu on the internet. Talk about what valuable real estate that would be if there were a main menu to the internet. And there isn't. The internet is like, yeah, you're on the internet. Don't look at me, look at whoever you wanna look at. I've connected you to whoever you wanna be connected to. It's that level of generosity, genericness about what you can do that I was so excited about, and that we didn't ask for from our appliances. It's like, a fridge is a fridge, and if you want something a lot colder, buy a freezer. If you want something even colder than that, I assume there's some industrial deep freezer you can get. But, and this is why, again, the internet of things is so relevant and remains so, as any given thing is able to be reconfigured into any other thing or set of functions, that is a lot of power up for grabs. And my question, looked at through the generative lens, and it's a somewhat simplistic one in hindsight, was: where will that power go? Will it redound to the benefit of the vendor, who's just gonna be able to, I don't know, start tracking your fridge door openings, start selling it to insurers? Mine cryptocurrency on your fridge processor. That has happened with my fridge. Wait, with your fridge? I mean, this is a problem. No, like with your literal fridge or your model of fridge? I fear I've said too much. 
But Jonathan's attack surface just went up as a thousand hackers listen to this video and find out what kind of fridge he has. Yes, I've redefined what a cold call is, in law school terms. Too soon, too soon, sorry. But even jumping a level of abstraction higher right now, so much to me of the current story of technology is that it is taking away from a basket of miscellany that we might call fortuity or randomness. Everybody always talks about the weather but nobody ever does anything about it. Sort of a clarion call about climate change right now, but an old Mark Twain quote. Taking out of fortuity: can't predict it, can't control it, gosh, you know, things go on the internet, who knows what happens? It's so organic, and that is both scary but liberating next to Walter Cronkite telling us what to think. Movement away from that fortuity, thanks to the strength of technology and its reach: wherever you turn there's a camera. Charlie Nesson deposited a microphone over there moments before things started. You never know when there's surveillance, or even more important. Sousveillance. Sousveillance, yes. Coming back at you, and in fact, how many of us are enabling it ourselves by the instruments we festoon ourselves with? So just to quickly finish that thought: we're taking, as humanity, stuff out of the random bucket and putting it into the it's-now-possible-to-learn, to-predict, and even to-control buckets. And then we ask, how shall we govern it? And the thing is, we haven't even figured out how to govern the stuff that we could govern before. And now there's that much more. And that's why, to me, and I'll probably stop talking, the question, put dryly and in law school terms, is one of intermediary liability, which is to say: some aggregative platform or vendor or entity is in a position now, thanks to these new tools and technologies, to learn about us and to affect us. What are their responsibilities if we act out? 
That is now a question that, after a 20-year interregnum of not visiting it, we are visiting, wow, really intensely. Sorry. You don't need to apologize. I feel like I'm being all over the place here. So I wonder, you sort of labeled the box randomness or fortuity, and I think this actually segues nicely into my next question, because I don't know if I would label it that way. I would label it the systemic distributional effects of the system before the technology. You're gonna need a bigger Sharpie. Yeah, it is a lot, we can abbreviate systemic distribution, anyway. So my question there is: you highlighted that we have the opportunity to sort of start over in some ways, but certainly in many contexts, the stuff that was in the box before has translated onto the new technology. I think your point about not reconsidering for 20 years is in some ways true, but, and you may know what I'm about to say next, there are plenty of people who've been suggesting that it is in fact these very systemic distributional effects that came before that force us to reconsider how we hold accountable these digital platforms that have generative effects. So first, on what to label the box, the box of fortuity: in Monopoly we would call it chance. Do people still play Monopoly? Okay, is anybody playing Monopoly right now? It's not a very good game, and it was secretly about socialism. Oh no. Until Parker Brothers seized it. Yes, it was The Landlord's Game, but we digress. But this is actually a fortuitously, or not, good example, right? The chance deck, if you're playing the game, it's like, I don't know what I'm gonna get. If it's chance, it's usually not great. Bank error not in your favor. But somebody made the deck. So it's not like the game appeared out of nowhere, and I think those two concepts exist at once. 
That stuff that any given person or entity might think of as previously being in this thing we call fortuity is really, I guess what I mean by it is, it felt more immutable. It's not something I can affect, it's just something under which I exist or labor or suffer. And part of the optimism among some quarters early on was: cool, now we can rewrite the game, right? That was so much of the thought of the distributed generative internet, including on content too: anybody can blog. That was Global Voices. That was indeed often the spirit of our center, I think: let's not accept things as they are. Let's build and change. But of course, who's at the table building is a huge question, and the question you just left us with was, gosh, over the past 20 years it's not like there hasn't been anybody waving a flag here and there. If I had to track in the conventional wisdom, and only in the conventional wisdom, the trajectory of thinking around these topics, I loosely have two categories, and maybe a third around the corner, and I'll just really quickly mention that. The first category I would describe is what I'd call the rights framework. And I should say, I don't know if it's just a disclosure or a confession, that I'm a board member of the Electronic Frontier Foundation. EFF was among the leaders of the rights framework. I see at least one EFF t-shirt in the room right now. The rights framework said the biggest thing to worry about online, and I'm just paraphrasing, is that our buzz will be harshed. Who is "our"? Another question. But this is cool, there's all sorts of new stuff we can do, and some of the biggest dangers are governments fearing that stuff they thought they could control is about to be taken and placed into the fortuity box, and they're gonna fight against this. That's the spirit of the crypto-anarchist manifesto, of Barlow's Declaration of the Independence of Cyberspace, and we should talk about Barlow. 
And we need to preserve the freedom of the space by looking at things from a rights perspective, atomized for any individual: what can you do? What levers can you pull online, or as a builder of code, a computer science person? That was the rights framework, and that was the spirit behind what has become just a handle for a bunch of these issues now, so-called CDA 230. I don't know if we wanna get completely into that, but just to say: the idea in the American legal framework was that Congress would, as basically a footnote, a peripheral item of a larger law actually meant to regulate the internet for the purpose of keeping material that was harmful to minors, pornography, away from them, say also, however: you shouldn't think that if you're an intermediary and you edit stuff, that will suddenly mean that by having dared to edit and come in and take stuff out, suddenly you're responsible for all the stuff you're editing from other people. That's roughly what 230 was saying. That has been seen as a great element of freedom, of allowing stuff to be built without worrying that you're gonna get sued out of existence because one commenter did something awful to somebody else. It's also become basically a license to build something, to see the cloud arise from all of its awful uses, and be like, not my problem. And that, starting I'd say around 2010, has led to a second framework that uses a completely different vocabulary for assessing the state of the internet. Instead of thinking about it in terms of rights, which is still a powerful language, it's talking about what I'd call public health. Is this hurting people? And if it's hurting people, what would make it hurt people less? And if that could be done, who could do it? And if they're refusing to do it, ought they to be encouraged or required to do something to hurt people less? That's a totally different framework from the rights framework. 
The rights framework would say: don't have the intermediaries, whoever they might be, be the net police. The other would say: don't let those who build stuff and start the dominoes going, and "walk away from it" is too simple, who profit from it in an ongoing way, not have to take responsibility for what they're doing. Especially in an era where, thanks to, say, good AI, they can't just protest that the internet is too damn big. There's so many posts on Facebook, we can only hire so many people around the world to look at them. It's like, yes, but you could train an AI model. What could possibly go wrong, is what the rights people will say. But this is the debate that's kind of joined, poorly, because the values and the vocabulary are not yet well mapped. There's no API to allow communication, not only between people, I think, but within our own heads about it in the conventional wisdom. So, I will leave off my third thing. No, we'll get back to it, don't worry. So, I mean, I wonder about that mapping of the rights framework onto a harm framework, because it actually strikes me that there are some pretty big tie-ins to critical race theory and legal theory. I'm thinking particularly of Words That Wound and the work of folks like Kimberlé Crenshaw and Mari Matsuda on how do we take these traditional First Amendment rights frameworks and start reframing them to more adequately consider the harm that is being caused. And so I wonder how you engage with scholars in other traditions, or with critical race theorists directly, around the places where we've already seen this tension erupt. Because I think you're right that there is a rights framework and a sort of public-health-y harm framework, although I'm not sure the public health people would use "public health" in the same way. 
Well, sometimes literally public health, when it's anti-vaxxer stuff going through, and shouldn't there be some responsibility not to surface it on a search for "should I vaccinate my child?" But this almost gets to the question. Notice I've been saying "in the conventional wisdom," in the kind of canon, and I think that nicely joins the question of who defines the canon, and what is the canon? And I think over the past 20 years, and I should maybe only speak about cyber law as a field, it's not like there haven't been people writing from all different angles and methodologies and viewpoints about it, but there's kind of been a cyber law canon that almost boils things down to, like, e-commerce law and what you should know. Or, I mean, notice we've been talking about all this stuff and have yet to really mention a case. I'm actually kind of surprised we mentioned a law. And... I try. Yeah, right. But in the conventional framework, I think there is a tendency, and I surely share it too, to grasp for the familiar, which is to say what's near you, and to reinforce it. And that's why, thinking about a research center and its priorities, it's like, gosh, I'm about to, this is where it's like, just don't finish your sentence. Those are the best sentences. I was gonna say there's of course a rich debate around science and engineering and objectivity, but I imagine there would be people among us who would make the case that if you're gonna learn physics, there's carts that go down hills and all that, and then it's like, let's send you to the history of science department and you can have a frank exchange of views. The debate on what a cart is and what a hill is and why we're asking about carts and hills. And yet, at the end of the day, the bridge falls or not. And again, you were even saying at some point physics kicks in and there's such a thing as physics. Some would say. Or would they? 
I won't tell the historians of science. Right. But in this... I think it's probably actually STS. In this field, I think, given that so much of the environment in which we exist, that is constructed by and mediated by the technology, is built by people, even though there's no one person who's like, yes, I built that. Unless, in our era of concentrated power in software, it's like, well, actually it's Mark Zuckerberg and like four other people, and here they are. All right, that's something to talk about on platform regulation. But the fact that it is built by people creates at least a clearer and more obvious way, I think to a larger group of people, of saying to them: this stuff doesn't have to be the way it is. And in fact, part of when I found my own excitement rising, even in an era where there's less to be excited about, or at least, juxtaposed to the excitement, a lot to be mortified about, has been to ask not just, here's a phenomenon, how do we regulate it? How do we ban it, what parts of it do we allow or not? And again, who is "we" here that could credibly be doing that? But rather: what if it acted entirely differently? And I gotta say, for me that has meant maybe a years-long immersion of my own bandwidth into thinking about how you'd construct stuff differently, and in particular the differences between centralized and distributed. Now, centralized and distributed is still a network architectural question. It can apply in lots of different areas. I don't know that that's still engaging with critical race theorists, but it is possibly bringing to the table, not just, again, how do we assess this and do we like it or not, but: what would we build? How would it look different? Both in the technology and institutionally, the configurations, because it might be that the technology could support new institutional configurations, at a time when it's not just that the tech is letting us down; it feels like everything is letting us down right now. 
And a lot of the questions of internet governance reflect larger questions of governance with a capital G. I think that's right. And I think that you already raised this question, but I wanna close maybe our one-on-one discussion with it, which is: who is the "us," right? Because I think one of the major critiques of even my own characterization at the beginning, that now is a trash fire, is that for many, many people, it's always been a trash fire. And I sort of looked back on the history of cyber law, and I think I remember being surprised myself, as someone who entered the field in roughly 2011, finding that folks have been writing about race and gender online for literally as long as being online had existed, but that work doesn't feel like it had really penetrated much of the canon, right, as you were saying, of cyber law until relatively recently, with Safiya Noble's work and Ruha Benjamin's work, who's gonna be coming and speaking in two weeks, which I encourage everyone to come to. Jerry Kang, 20 years ago, yes. So I wonder if you can talk about who you think tech is governed for right now, and how that informs what you do going forward. Well, at the risk of generalizations. I just invited it. Fair enough. Tech is produced for who can pay for it. And if there's another area that somebody wanting to be integrative around internet and society should be thinking about, it's actually the microeconomics of the space. The, perhaps even by design, boring and Byzantine ways in which the act of looking at something triggers, as I put it in a piece that has yet to be published. Spoilers. More computational effort to do something with that click than the Apollo command module had. Again, it's taken out of the fortuity basket. It's like, you used to just be able to look at that. 
And again, by "you," I mean, let's see, it's probably somebody's mobile phone, or maybe I've said too much. They're here. And that microeconomic story is a really important one, because if we're talking about, and have yet to resolve, again, who "we" is and what we want the space to look like, it's really hard to just make it so. There was a time, what, 2005 or around that era, when it was like Wikipedia was the point of a spear that was going to reconfigure how people interact with each other, how knowledge is generated. And then it turned out it was just like an arrowhead. It's like, where's the rest of the spear? Wikipedia works in practice, but not in theory. And then the next thing was like, and you know what, maybe Wikipedia doesn't work so well anyway. At which point it's like, now what do we do? And I have not myself seen a more open and welcoming time for people to contribute to this field. I have not seen a time of less certainty about what the canon of the field is. Among my colleagues, I have not seen them as puzzled as they are now, and I count myself among them. And that is, in its way, inspirational. It's a moment, at least in the academy, but I think also in the public at large, of some deep-seated ambivalence about what we're doing, and to be able to make something of that moment and to integrate mastery of multiple fields, including the microeconomics I was just talking about, with the critical race theory, with the network theory, with the people who can build stuff and say, let's see if it takes off, because it's still possible to build pretty much anything you want and put it online and see what happens. Let's see what we can build together. It's certainly my highest hope for a research center like ours. 
So maybe one more thing I should say on that front, which is a kind of aim that was general and present but feels more specific and urgent in the wake of the situation going on with MIT, is having a constellation of centers that are, in the words of David Weinberger, small pieces loosely joined, something that our center has been working on, a network of centers around the world, so that you don't have all your marbles in one basket, and, as much as you try to integrate under one roof as many views as possible, there should be more roofs. Rooves? I don't know. I mean, I think that your point about the interdisciplinarity of these problems, and the way in which traditionally the cyber law canon has not necessarily been super receptive to that interdisciplinarity, is a great point. And I do think that you're right that the many centers feel like a way to mitigate some of the potential harms of bad actors at any one particular center. I do think that, and I'm gonna speak for myself and not for you and not for Berkman or Harvard Law School or anybody else, really, I think there was a sort of reckless techno-, like, people have used the term techno-optimism and I think that's fair. Actually, when I went back and read the conclusion of your book, as you may remember, it opens with a discussion. I regret the subsection entitled "reckless techno-optimism"; that was reckless. The section I quoted from opens with a discussion of Nicholas Negroponte and the sort of generative power of the laptop purchase. Although I think there's some skepticism. There is some skepticism, I would give you that. I think that for me, what I take away from that is that those questions of harm, of the public health model, mean that just a rights-based model is never going to be enough, because you're always trading off the rights against something. 
And I think there's a sort of way in which an early techno-optimist perspective was: we're not just gonna change everything, we're gonna change everything and there aren't gonna be any drawbacks, right? That there weren't gonna be costs associated, and that's, wow, there goes my phone. And that seems like one of the striking things that we're dealing with now. Well, I should say, certainly in my thinking around generativity, there's one footnote in the book that I might be proudest of. I know I'm an academic when I say that. There are actually endnotes. It's the "kid" endnote that, yes, is the note I'm most proud of. As I'm extolling the virtues of generativity, and isn't it cool that anybody can do anything and nobody can really stop them, there's a footnote to, I think, a New Yorker piece called "The Kid Who Built a Nuclear Reactor in His Shed," about, I think, a 12-year-old kid who built a nuclear reactor in his shed. And it was kind of an endnote to a paragraph that was like: is there such a thing as too much generativity? And that's even taking into account, of course, the generative model, which is that it yields catastrophic success. Bad actors show up, for which my solution was: we need a generative defense, rather than expecting somebody from on high to help us. But separately, before the bad actors show up, the question is just: when is there too much generativity? And as the power of the movement of bits has grown and has become so much more integrated with the physical world, it's starting to move towards the nuclear. And I just want to acknowledge that. Maybe what you should end on, though, is... You keep trying to end it and I keep... All right, so here's the thing I'm gonna say, though, about risk-taking. Because risk-taking is the kind of thing that, on an innovation checklist, or even a how-to-make-an-institution-or-polity-or-anything-thrive checklist, is: take risks. 
And I think, I won't speak for all scholars, I couldn't possibly, but I'll speak for myself in a scholarly mode: taking risks means not just writing a new piece on your existing theory that nails down, yeah, one more piece of it, or a case study further to my generativity, or, which we haven't talked about but could, information fiduciaries and loyalty by companies backed up by law, but rather: are you willing to study and spend time with and write in areas where, honestly, you're gonna be a student again? And when you deploy all of those fancy titles as the very first star footnote to an article indicating the authorial affiliations, and then say stuff that's gonna be quite literally sophomoric, that's a form of risk-taking that at once I can see wanting to encourage, get out of our comfort zones, and at the same time, when is risk-taking recklessness? Especially when that translates to: let's do this project, and this project carries with it some real risks. It's like, something, something, something Iran, something, something, something. All right, well, they're on an export control list, and there's all sorts of... and so, being mindful about that. Probably not only trying to be most in touch with one's own compass, but getting radar pings back, to totally mix my metaphors, from the compasses of others, and acknowledging when you need to make a course correction. Thank you. So, you know, we've covered a lot of ground, and there is a lot more to cover, and I'm conscious that we have, I think, roughly 20 minutes left together, so now I'm gonna turn to the audience participation, and not the part where somebody puts up their hand and asks a four-minute question that's actually a comment. Love y'all, I know the community, I'm just saying. Present company excepted, of course. I myself give in-depth comments that are supposed to be questions. Everybody's present company, yeah. 
So what I'm gonna ask you to do is turn to a person or a couple of people next to you, and first I'm gonna ask you to introduce yourself, and then I'm gonna ask you to either take up the core question of what was advertised on the tin of the talk, which I'm not sure we gave you, which is: can tech be governed? Although I think in our own way we have answered it, maybe with Betteridge's Law, which is to say: no. No, I think the answer is it has to be. We must assume it can be and work towards it, while having the humility not to think that we're just running an ant farm here. Okay, well, we can talk about that too. Fair enough, yes. So you can either take up the question of can tech be governed, which is a big one, or any of the smaller questions that we've embedded, which is like: what fields feel the most relevant to bring into these discussions going forward? Which of these problems feel most tackleable from an interdisciplinary lens? Or just raise other questions that came up. So I'm gonna actually give y'all five minutes to do that, and then I'm gonna try to get us back together for a full group conversation, and in the spirit of Berkman, Wikipedia, and formerly the bumblebee, although people now know how it flies, I'm gonna hope that despite not knowing whether this is gonna work, it will, and it will result in good conversations, and I'll see you back here in five minutes. I can tell that there are lots of amazing conversations going on, but I'm just gonna continue to speak into this microphone to interrupt you until some of you, Jon Penney, I'm talking to you, stop speaking. 
So one of the great things about the fact that this is the beginning of the year and this is our kickoff event is we actually have lots of time to continue these conversations, but first I'm gonna be nosy and want to know a little bit about what you were saying in your conversations. And so I think I pre-seeded some, I won't say volunteers, I told some people that I thought they were gonna have interesting thoughts and that I would enjoy hearing them speak, and then I'll move to maybe a slightly more actual volunteer model. This is academia, so volunteering is kind of how we do things. I'm gonna go over here first and ask if there's a group from... so, you are not required to have a question; I'd just love to hear what struck you about your conversation or any interesting things that came out of it. Hi, Jesse Daniels. It was all very interesting and we had a good group, a lot of governance in the group, and several people raised the issue of black women being attacked on Twitter as a case study: how do you govern given that, and how do you govern and put black women, as those who are being harmed, at the center? And one of the other questions was about the imbalance between the resources of the corporations that are running these platforms and civil society, who's trying to do some of the intermediary work of governance. So that was where we were. Do you want to just... I know that one of the things you wrote a lot about early on, Jonathan, was the IETF and its rough consensus, running code model. I'm wondering, given the group's provocation around variable resources, if you want to talk a little bit about how you see that changing, that process that was very democratic in a traditional sense, meaning it was mostly white dudes. Well, it kind of gets back to the distributed and centralized point.
If we were still in an era in which the biggest architectural decisions about the digital space were being made, say, through the auspices of something called the Internet Engineering Task Force... and what I love, by the way, about our community is there are gonna be people here who are totally still into the IETF and are part of it. It doesn't have members, but it has people who participate. And there are gonna be people who would be like, IET what? Back in the day, that was the group that helped work on and came to consensus, rough consensus, on internet protocols, the basic unowned protocols that anybody would be entitled to build into their software and hardware so that the stuff could interoperate. And of course, what those protocols permitted would have a huge impact, as we like to say when you wanna sound highfalutin, all the way up the stack, to the applications and to the content and to the users. And a decision down here about, okay, is there gonna be an identity bit, put bluntly?
Well, under a rights perspective you can think of the whole problem as: I gotta carry my internet license with me when I'm on the internet. That doesn't sound great. And then when you think about accountability for harms, well, "I don't know, it was bits that did it" does not sound like a satisfying answer to the problem of abuse. But again, it's a moving target, because while we still operate through protocols blessed by the IETF and adopted by vendors and others building software, now it's, all right, what's happening on Twitter? And Twitter is: I'm using an app, and what I see in Twitter is what Twitter says I'll see. And I gotta say, from the point of view of a research center, many of whose alums are working at Twitter in different departments, some of my best friends are at Twitter, right, absolutely. And how to interact with that corporate sector? There have been times when they'd say, all right, we're ready to give you a million bucks, and you know some other folks that could use a million bucks coming from Twitter, and then Twitter can feel better about what it's doing. I don't mean specifically Twitter, of course; I mean the entire corporate internet sector. They're willing to do that, but then it's like, well, do we want that money? Does that affect policy recommendations? We these days tend not to take that money. Okay, well then how do you interact with them? Well, ideally as peers across the table, and as ones who can, in the true internet spirit, where the way to get online is to find anybody already online and just share their access. That is literally how all internet access works, right? There's not some central internet switching station that puts us all online; it's all by getting online with somebody already online, including ISPs. You could bring people to the table that way, but are they gonna share data with us? How do we know the scope of the problem? Now, we could do the Pew survey approach or the
ethnographic approach, and hear from people harmed, but you'd wanna complement that: well, what do you see from the, what's the right word, the air traffic control tower, the prison tower, that is Twitter central, looking down on all the uses with a unique view that only they have? And in this current environment, getting them to share data is, both inappropriately and appropriately depending, really hard, nay, impossible to do. There were tentative arrangements with some academics to study this stuff, but again, now we all put on our privacy hats: you shared data with whom now? Or you put on your GDPR hat, and if you're Europe, you're like, you processed what now? That turns out to mean, from a corporate risk perspective, no. When we say risk taking, that's not what we mean; safer not to work with the academics, or anybody for that matter. That's a real problem, and I don't have a solution for it, but I find myself still working really hard, for the benefit of our center, the research we could do, and the students here and others who want to be able to work on real data, on how to make that happen. To me that is one of the big, almost library problems of our time. I thank you for that, and I want to actually come back to what the group was talking about before, which is, bless you, sorry, the unique experience of, and centering the voices of, black women who are harassed on Twitter. I think that sometimes there can be a tendency, and I've seen this in myself, to look at marginalized groups as canaries in the coal mine: oh, they saw it first, right, and then they can predict the outcomes. And I do think there is a benefit to that, which is that it often does actually require people to engage substantively with the experiences of marginalized folks online, especially of women of color and black women. But you know that the end of the story about the canary in the coal mine is not a positive one, right? Like, I'm pretty sure the canary dies. I love how the canary has, like, a tag on
it that says, the future is here, it's just not evenly distributed, right? You know, the canary in the coal mine: the future is here, it's not evenly distributed, and particularly apt with tweets and Twitter, the canary. So what I want to say there is that I think, in our desire as researchers and as a center that does do interdisciplinary work and takes disparate pieces and puts them together, it's so important not to think about that as, oh, what can we gain from this person, or what can we gain from this experience, to speak to everyone else, but rather to take seriously the idea that each individual person's experience, that the canary, like, the canary has as much right to continue... to live? This metaphor is shitty, I'm gonna stop using it. So I just swore and I didn't ask if that was okay. Oh well. It speaks to me about the importance of synthesizing experience and statistics. There's a real obsession with big data these days and what we can learn thanks to new tools and thanks to the data sets, including learning for the sake of better understanding the parameters of what's really going on online. But that alone, in the absence of actual experience, of being able to hear from all manner of others online, including literally each of us: we're using Twitter, what do we get back from it? That just seems to be really vital, and a reminder, I'll just say again personally, about how to temper the joy of, great, a new data set of harms, this is so cool. It's like, wait a minute. It is productive and useful, but gosh, just stop for a minute, and yeah. Well, speaking of hearing from folks, ooh, the timing, time gets away from us as it always does. I'm gonna take one last comment and then I think we're gonna wrap up, so I'm gonna take it from this group over here, since I so unkindly called out John Penny already. Go ahead. Hi, thanks for the wonderful talk. We discussed this question from a comparative perspective and in the global context, because I
come from China and I'm a visiting scholar here. I just told Joe and my friend that my research project is the Chinese social credit system, where the Chinese government uses big data analytic tools and algorithmic technologies, collects information and data from citizens, and gives you a scoring result, so you will have a credit score. So they asked me, when we talk about whether technology can be governed, in China, can it be governed or not? I think this is a very big and challenging question. From China's context, it's a little bit different from the Western part, because technology plays a much larger role in China's development; it has some relationship with the prosperity of the country. When you give more meaning to technology in this sense, it makes this governance issue more complicated and more challenging. So I think if we talk about whether technology can be governed, we should first figure out what the barriers to this question are. In maybe the Asian context, or China's context, there are several barriers. The first is the knowledge gap, and the second is the awareness of the citizens. Recently there is an optimistic trend: because of social media, including Western social media and news, Chinese internet users have much more awareness of privacy than before, so now the agencies who deal with this issue are issuing more regulations than before, trying to protect the privacy of citizens. And there is also the pacing problem, because the legislature is always chasing after these challenging issues. So that's basically what we discussed. Yeah, thank you. And it sort of also seems to raise the question, you know, if we're talking about can tech be governed: who watches the watchers? If you are using the technology to govern, then you
have a whole other set of questions associated with that. So I see we're almost at time, and I want to offer, Jonathan, if you have any concluding thoughts. Yeah, I mean, it's both inspired by the last comment and maybe kind of a nice statement of a piece of a research agenda for which I'd certainly welcome help, which is: as the technology gets more powerful, do we accept that it's going to turn the dial up on control, and then it's just a fight over how to govern it so that the control is responsible and the right outcomes happen, if we can agree on what the right outcomes are, et cetera, et cetera? Or is it somehow this kind of Canutean... can we just try to push some stuff... I have no idea what Canutean means. Oh, King Canute. No? Oh, it's... well, we're not going to get into that. Okay, there's no fighting city hall, or the waves, is it? Well, actually, let's change the technology to somehow try to put stuff back into the bag. No one, this is, gosh, this is now going to be a terrible reference, but here we go: no one should have the Ark of the Covenant; it should be put into a warehouse, never to be seen again. That's Star Wars, right? Kendra was trying to get me to say... actually, I was mostly just doing it for the look on his face, which was sheer horror. But right, it's the end of the movie, it was the end of Raiders of the Lost Ark: this power is too great. It's the ring, it's the One Ring. Can we just put some of this crap back into Mount Doom? And I think the answer may be, I don't know the answer, but I think it may be: good luck with that. Once you reveal there can be a One Ring, and you've actually forged one, someone, something, is going to want it. And so there's an institutional design question: how do you distribute that power, you know, or not have one ring, have many? Well, that didn't work either. But versus, is this too much power for anybody to have, given what we know, being mindful of history, about how power accretes? And there is a lot of power in this
institution, in this space, spent well or not, a lot of debate around that, but if you're in this room you are part of it or proximate to it. And I mean that both as the kind of warning that it sounds like, and that I'm trying to take to heart, and as the opportunity and responsibility it represents for us to learn what we can and express what we can. And through our corner of this university, the Berkman Klein Center, there will be a science fair coming up where you can learn about the ridiculously broad kaleidoscope of projects taking on so many different pieces of this puzzle, and have a chance to see where you might want to fit into it. I really invite you to do it; this center contains multitudes, and I hope you'll be among them. Thank you. Thank you, everyone. Do you want to actually announce the time? Oh, thank you. The open house is September 24th at 5 p.m., I think somewhere around here. Hopefully we'll see you there. Milstein East ABC.