Hi everybody, I'm Sarah, as Daniel said. Thank you for being here. I'm very excited today to introduce Ian Bogost. Ian is the Ivan Allen Distinguished Chair in Media Studies and a Professor of Interactive Computing at the Georgia Institute of Technology. He also holds an appointment in the Scheller College of Business. He will be in dialogue with Jeffrey Schnapp, who is the faculty director of metaLAB, a co-director of the Berkman Klein Center for Internet & Society, and the Pescosolido Chair in Romance and Comparative Literatures here at Harvard. Today, Ian will be talking about a pessimist's guide to the future of technology. As we were just discussing, pessimism is clearly popular based on the turnout today. A vote of confidence. And rightfully so: in this age of celebration of technology, it's really important to have the critical in dialogue with the celebration and embrace of technology that we have here at the Berkman Klein Center and our extended community. Ian is also the author or co-author of ten books. He is the co-founder of Persuasive Games; he's a game designer and scholar. He is also a contributing writer at The Atlantic, a co-editor of the Platform Studies series published by the MIT Press, and a co-editor of the Object Lessons series published by Bloomsbury and The Atlantic. Jeffrey is a cultural historian whose interests range from antiquity to the present. He is also a pioneer in the digital humanities, and his work ranges from books to curatorial practice and beyond. The emphasis of Ian's talk today will be on autonomous vehicles as a test case, and the dialogue should be really interesting because Jeffrey just finished teaching a course on robots in the built environment here at Harvard. Almost like we planned it.
I also learned recently that Ian has an unusual perspective on what constitutes a sandwich, which might include a head of lettuce, and this might be interesting to those in the Berkman community because we've been talking about sandwiches recently. Yeah, yeah, everything's a sandwich. Exactly. Let's just get it over with. So with that, thank you for being here, and we'll make sure we save some time at the end for questions. Great. Okay, I am so happy to be here. I just flew in, so you welcome me with this nice rainy, rainy weather, which I'll forgive you for. When I was a first-year undergraduate philosophy student, I had this very stern and severe Scottish instructor, and he was kind of against everything, it seemed. And we were talking about Kant's moral philosophy, of course, the categorical imperative, and the idea of the categorical imperative is that, according to Kant, one should act on a maxim only if one can imagine it becoming a universal law. This is sort of the one-liner in Kant's moral philosophy, and philosophers are kind of trolls, kind of the original academic trolls, right? Nothing makes them happy. They're very grouchy about everything. It's all sneers and barbs and daggers in the side. And so this instructor had this counterpoint, this sort of reductio ad absurdum to Kant's moral philosophy, which apparently I've not forgotten and will never forget, which was: okay, well, if you should act on every maxim as if it should become a universal law, then this should work for every maxim, according to Kant. Anything that you can suppose should be testable for its moral quality against this premise. So what about: I will play tennis in order to exercise and keep myself physically fit. I will play tennis in the mornings at 10 o'clock a.m.
So if you run that scenario and you say, well, imagine if everyone played tennis at 10 o'clock in the morning in order to keep physically fit, then it breaks down, because everyone would crowd the tennis courts and no one could play tennis at all. Therefore, you must not play tennis at 10 o'clock a.m., which of course seems preposterous. So this is one example of a way of thinking about these ideas that doesn't seem quite right but is insightful in the sense that it shows that there are these reversals, these kinds of edge cases: at their negative ends or at their extremes, things change. They change form, and so something that's thinkable in a reasonable way at the center, once it moves to the edges, and once the center moves to that edge, then it alters somewhat; it changes. Now, Kant is maybe not the best tool to think about this, but Marshall McLuhan and his son Eric, in this book that no one reads, unfortunately, called Laws of Media, have this interesting media philosophy of these four laws, which you see here: enhancement, retrieval, reversal, and obsolescence. We don't have time to go into all of it, but what's interesting about it for the present conversation is that for the McLuhans, this idea of reversal is kind of like a property of media. It's not something that happens later, when things go wrong. It's an intrinsic property; all four of these laws are kind of active on media objects, and of course for McLuhan everything is kind of a media object. There's the electric light bulb, famously, and so forth. And they run these scenarios in this book, these tetrads. This is the cigarette, which enhances calm and retrieves group security; you can all go together and smoke. And the reversal, the thing that happens when the cigarette is pushed to its extremes or its limits, is that it becomes this addiction, your nervousness: you're no longer calm, because now you want the cigarette that you can't have. There are dozens of these in this book. This is the Xerox machine.
I guess the interesting thing about the Xerox, the reversal scenario for the McLuhans, is that everybody becomes a publisher, which is something we used to talk about in a very positive way and now we're not quite so sure about any longer. Or maybe this example, this is the car. There are all sorts of interesting and bizarre things going on here, and we again won't take the time to unpack them all, but the knight in shining armor is the retrieved medium, this idea that there's something from the past that comes to the surface in the present. So I guess, with this, you can now get out of any situation; certainly this is what car-share services are like now. And this is obvious: the car, when pushed to its limits, when everyone has a car and everyone is in their car at once, then you get traffic, which is the opposite of mobility. And it's interesting that it's not freedom and mobility that the McLuhans identified as the enhancement but privacy; but that's okay, this is just a tool, and it makes perfect sense. So I give you this example in particular not only because we're gonna talk about autonomous vehicles a little bit, or I'm gonna riff on that a little bit, but also because you can see how it just makes sense that it's not that there's something wrong when you get traffic; there's something intrinsic to the design object that is the automobile in its urban context that is traffic. That's part of what the car is, and that's the insight that the McLuhans had in this tool.
Now, the pessimism business was a little bit of a, I don't know, I mean, I am, I think, a natural pessimist, but we're in this moment today with technology where I think we're finally shifting into a mode where it's possible to be critical without getting sneered at. If we kind of look back at the, I don't know, the optimistic aspirationalism that we've been using to encounter technology in the broadest sense, and we look back on those moments of the recent past or even the distant past, we can see how we actually knew how things were going to turn out; we just weren't paying them heed. So, you know, we kind of knew 25 years ago that the notion of identity and anonymity online was trouble. We knew that, and we just made jokes about how that's funny, let's move on, right? But that turned out to be an intrinsic part, for good and for ill, right? Or this is from about, well, it's 2006; you can see, back then we were celebrating how blogging was going to do away with these gatekeepers that were keeping people out of sharing and spreading ideas, and that was just terrific. And, you know, okay, well, what happens? Just think about it for like 10 seconds. What happens when any idea can be shared and can't be distinguished from any other idea? Well, you have no quality control and no ability to discern which ideas are even, not just desirable, but even true. We knew this, and we just kind of went headlong into it thinking, well, this is great, nothing can go wrong. Or with these devices, which I've previously called the cigarette of the 21st century, these guys. The relationship, I feel mine buzzing in my pocket literally right now, as I'm talking.
The relationship we have with our smartphones universally: we knew, when we had pagers and then the BlackBerry, which was this evolution of the pager into email and so forth, that that role of the important person, the doctor, or then the executive or the governmental worker, or what have you, who had to be connected to what was happening, that when that universalized, we would all be working, essentially laboring, all the time, which is what we're doing. We're not always laboring for our workplaces; often it's for wealthy technology companies or for our own personal brands or what have you. So, you know, and now we're kind of going, oh shit. Even those who were involved in creating these infrastructures are kind of admitting now, yeah, we weren't thinking even a little bit about the implications of what we were making. And that's a nice conclusion to come to after you've made a boatload of money and it might not matter so much anymore. So I've been thinking about this whole question, this whole sort of set of ideas, in the context of autonomous vehicles, and I picked them partly because I'm legitimately interested and partly because they are so new that we've not yet made these errors with them. We haven't committed in any way, either at the design level or the urban-planning level or at a personal-use level. We have some runway, some blue sky to work with. But when you look at the way even now that we're talking about this future, it's either as this sort of like wonderful, finally we'll be able to rid ourselves of these awful machines that we despise, and it'll just be easy to get anywhere you wanna go, you won't have to maintain or park a car, that's one of those scenarios, or another one of these sort of, I'm really looking forward to self-loathing cars. This is a good comic, you know.
Or: the last step in the milestone you can't see is cars capable of arguing about the trolley problem on Facebook. This is funny, but it's also a signal that it's a kind of cursory take on what these futures of autonomous cars might look like. So I've been thinking about this for a little while. We don't have to just talk about autonomous vehicles, but I think it's an interesting test case, and I've been running through a number of sort of likely scenarios in this kind of McLuhanish way: if we take this thing and we imagine pushing it to its extreme so that it really is universal, what happens, what takes place? And I'm just gonna run through some notes of some of the things that seem likely to me at universal scale, once it's fully rolled out. One of those is that all of the trolley-problem business, like is the car gonna run down pedestrians, is a very, very temporary problem that has to do with this transition mode between human-driven cars and pedestrians and bicycles and so forth and fully autonomous cars. Once you get vehicles that are fully autonomous, and once you roll them out completely, then one of the likely things that will happen is that the way that roads operate will also change. You can pack autonomous vehicles much closer together. You have that sort of Minority Report vision, if you remember, of the tracks of Lexus-branded vehicles swapping places with one another at very high speeds. These cars can coordinate with one another, and so it's not so much that it will become undesirable, or that it will no longer be a technical problem that people or other humans driving traditional vehicles might be at risk, but that it will no longer be feasible for them to even participate in that mode of conveyance.
To the point that it strikes me as likely, not just possible but likely, that especially major arterials will become the sort of new freeways, inaccessible not just to human drivers but as rights-of-way whatsoever for pedestrians and for cyclists. And they'll do that because it will no longer be safe to interact with the way that the autonomous cars are behaving. Another thing that will change: I was recently in Tempe, where Uber is running one of their test markets for these autonomous cars, and they have people in them still, but even so, you realize that when you're on the road with them, your relationship with the driver in the vehicle, as a pedestrian, as a cyclist, as another driver, is very important. You kind of know, okay, I have a sense of what you might do, even if I can't see your eyes, because I know the possibility space of things that people do. But we don't understand what computers do anymore most of the time, and the programmers of these systems often don't understand how they work when we get into these deep AI/ML kinds of systems, and that's exactly what autonomous cars are. So even if you are in the same space as one of these vehicles, you'll have no idea what its capacities are and what it might do, so you can no longer read those apparatuses; they're no longer legible. So if you run that scenario out, maybe it would just be better to take this thing that we've had for as long as we've maintained public roads, which is the idea of the public right of way: that all of us can go out to the street and use the street, and it's maintained and owned by a governmental entity, by a municipality, by a county, by a state, what have you, and they're responsible for it, and as a result, everyone has the capability of using it.
Maybe that doesn't make so much sense anymore. And in fact, it's become very expensive to maintain American infrastructure, as we all know, and it's kind of falling apart. And so when you have wealthy technology companies that are absolutely gonna roll out autonomous vehicles as car services, kind of Uber-style car services, not as conveyances that you would buy, own, and garage yourself, then, just in the same way that Amazon is extracting these unbelievable bribes from municipalities that want to host its new headquarters, you know, maybe it's best just to lease off those spaces to Google, to Tesla, to Uber, to whoever those players are, in order that they can manage them and upgrade them to smart roads so that they can make them even more efficient. It'll start with the largest streets, but then it'll certainly bleed into smaller ones, and maybe there'll be times when you can't use your own road. Like, imagine you walk out of your house and you no longer have access, or at least not direct public access, to that space. You can even imagine a sort of blockchain-driven, smart-contracts kind of system where you've got your phone in your pocket, right, and you wanna cross the street. This is like Philip K. Dick stuff, right? You would just wanna cross the street, and it's fine for you to cross the street as long as there's no vehicle in the area; you'll just be charged a small fee invisibly when you enter, because it's private property now, or at least it's leased off in such a way that it's construed as private property. Once that takes place, you know, you don't need things like traffic lights, for example, because those are managing human-driven vehicles, and these autonomous fleets are much more efficient. Just yank out the traffic lights, and they will invisibly coordinate their behavior with one another.
Well, when you take those sorts of things out, one of the things that comes along with them is the wayfinding devices: street signs, street names. Those are all put up for our benefit as human drivers, cyclists, pedestrians, and so forth. And they're quite unsightly when you think about it. Who likes to look at traffic lights or street signs? So maybe let's just remove those, as a kind of urban renewal program that could be underwritten by a company like Google. And, you know, they would have a secondary interest in doing so, because, as it happens, as you might remember, Google provides mapping services to all of us. Now, we don't use paper maps anymore, and in fact there's a long history of obfuscating public space with maps that are false or slightly inaccurate in order that you can control what people know. The Soviet Union offers a number of examples of this that I don't have time to go into right now. But if that's the case, then, you know, the idea that we have public access, or sort of general access, to maps, that might also begin to dissipate. You know, maybe there's a service-level kind of subscription to a kind of radius from yourself that you can see, or maybe it's just in the interest of these new public-private partnerships between municipalities and Uber and Google and so forth to just eliminate citizen use of maps, because all it does is cause trouble; people go places they shouldn't. I'm not even talking about the obvious amplification of the history of redlining and other sorts of geographic disparities; we're already seeing impacts in the way that ordinary car services work in terms of access. We could go on down this road. And there's a bunch of other interesting scenarios. Parking: a lot of folks have started talking about the delight that will come from the removal of parking lots, which are their own blight of paved-over space.
And, you know, that's certainly likely, but it's already happening with flat surface-level parking lots, especially in dense urban centers. Those are being bought up and turned into tall, you know, luxury office and condo towers, mostly, right? They're not mixed-use spaces, really. Now you have expensive condos and office space for companies that want to move back into the centers of cities after having spent decades on their edges. The parking lots, though, that exist infrastructurally, that sit at the bottom levels underground beneath large buildings, it's not like those are gonna go away. What might happen to those? They might become staging areas for these fleet cars. That's one thing that's been proposed. But another thing that strikes me is that there's so much more space there than you would ever need to stage an autonomous car service, so, you know, how might you repurpose parking decks and underground parking structures? They could become new housing, because housing is very expensive, and we have all of these workers who want to live in the center of cities, now, if there are still jobs for them after automation. But even if there aren't, we have some signals that this is already happening. Just on the way up, I read this article: in San Francisco, there are about 1,000 new apartments being created out of old boiler rooms and basements. They're about 200 square feet, which is perfectly acceptable; a hotel room isn't much bigger than that. At a bargain price of only $2,400 a month, which in San Francisco is on-market. You can imagine creating these sort of underground slums for workers. And this would be a benefit, really. You wouldn't have to go out to the suburbs, because the suburbs are likely to become completely inaccessible, I think. As we see more folks move in and densify urban cores, the cars won't even be necessary anymore. We'll have new pedestrian and bike corridors.
And if you think about the way that Amazon has redeveloped Seattle, the sort of South Lake Union area of Seattle, that sort of thing seems increasingly likely. So if you're wealthy enough to live in the city center, you probably won't be bothered by autonomous cars at all. And maybe we'll see a shift into these autonomous buses that ship people back out to the suburbs and the exurbs. And then once you're out there, if you don't have a car anymore, you're completely screwed. What are you gonna do? So you'll be kind of under house arrest in those spaces. Or maybe these kinds of illegal human-driven taxi services will crop up. And that's not even to mention what happens to folks in rural areas once they can't get access to electric or internal-combustion-engine vehicles, if those are taken offline. I also think about garages, about all of the in-town, not suburban but kind of urban, single-family garages that exist all over America, which would no longer really be necessary. You're not gonna own these vehicles. And so if you're lucky enough already to own a property like that, then you've got a kind of built-in Airbnb. So it's like this kind of mass conversion. And there will obviously be a kind of classist relationship with the people who are renting out these converted garage spaces. Not to mention the fact that it only amplifies existing wealth inequality, as we've built so much of our wealth in America for the everyperson around property ownership. Anyway, there are dozens of these kinds of scenarios that we could spin out. I may be right or wrong. It doesn't really matter in some ways.
It's rather that if we shift from thinking about the technology and the near-term problems to these kinds of medium- to long-term scenarios that assume adoption at a universal scale, and then run them just to ask questions about them, then that sort of scenario has some relationship to science fiction, some relationship to RAND-style scenario planning, some relationship to other kinds of futurism, but I think it's still distinct from them, because it's asking questions about what the current technology is going to be when it flips its bit, when it reverses. Because that could still be changed as we're working in the present. So those are some thoughts. That's sort of what I came with. Yeah, no, I mean, I think that's a great launchpad for this conversation. And I guess I'd be interested in starting out the discussion part of this. Ian and I agreed early on that we very much wanna involve the whole audience here as part of this conversation. But I thought, Ian, maybe as a prompt, since you introduced Kant to the conversation right from the get-go. Sorry for that. And your own training has this really rich and interesting sort of crossover between philosophy, theory, critique, and practice and making. Maybe we can talk a little bit about pessimism as a stance, because of course, in these various tetrads that you wonderfully brought up from the McLuhans' media theory, pessimism itself is often not pessimistic. Rather, it is an intervention in an emerging set of debates, of concerns, of forces that run in different directions. I mean, certainly for Nietzsche's pessimism, and for a whole strand of philosophical critique, pessimism is the corrective. And in the case of autonomy, and I think you wonderfully spun out some of the potential ramifications of an autonomization of the world, of course the question in the word autonomy that someone coming from a philosophical background would immediately introduce is: autonomy for whom?
Like, who gets to be autonomous, in the service of what values? I mean, all of these various scenarios that you described, from these worker colonies, or maybe encampments of disenfranchised populations in the exurbs, they all raise this question. Who gets to be the driver, in the place of the driver, so to speak, whether it's at the level of social forces or the architects of cities, or who owns the public spaces, what is a public space? So I guess I'm just curious, given the extraordinary range of your own work, how you see this kind of critical intervention shaping that future conversation about the design of cities, because we're just at the beginning of that. I mean, I think, as you suggested, this is a little bit different from some of the other cases that you started with. So one thought I have about it is that when we think about the interaction between technologists and philosophers, there's a sort of smarmy conversation we have about that interaction, right? Like, oh yeah, the humanities are still important, we'll bring in these philosophers to help the leaders. It's like, oh, really? That's all we can muster? This sort of smarmy appeal to ethics? Which isn't to say that it's a bad thing to think about the moral implications, but I actually think it's a mistake. It's almost a category error to take these kinds of scenarios as just moral implications. They are in some ways metaphysical implications, right? They're like ontological implications. We made this thing, whether it was blogs, the internet, smartphones, autonomous vehicles, whatever it is. And everyone has the best intentions, or at least something that's not the worst intentions. They had some good intentions. And then things got away from us, right? It took on a life of its own.
And the best conclusion we seem to come to, once those outcomes are unexpected, is: oh well, this just once again proves that technology is neither positive nor negative nor neutral, right? Okay, great. And then we're just left with the results. And then we just kind of move on to the next thing, as though nothing happened. I'm just gonna wash my hands of it. So, you know, one way of getting at that answer, to me, is that design is the space that sits between technology and philosophy. And unfortunately, design has also been sort of troubled in recent years, as the design-thinking nonsense has taken over all conversations. You know, what does it mean? Well, it basically means speculative finance, right? Design thinking is kind of speculative finance, like technology is kind of speculative finance. The philosophers haven't yet gotten around to casting their work as speculative finance. That'll come, though. Maybe it probably won't come. Anyway, so design is this space where we muster abstraction and make it concrete, and then it gets pushed out into the world through implementation. So I'm interested in that community, or that mode of thinking, as one where you could begin asking questions, instead of about use or about outcomes or about these sort of moral or social implications, all of these kind of smarmy little frames that we draw around things, which frankly the folks who are making these technologies aren't that interested in hearing, and transform that into questions about the essence of these products or services or objects. So, I mean, just to jump in, though: in the case of, like, your work on game design, for example, the use of interactive game platforms as spaces of critique or critical engagement of some form, would you see an analogous extension out into the sphere of sort of getting under the hood to make or tweak or hack technologies?
Well, when I started working in games, I did all this work with kind of games and politics and education. And I had this whole argument, sort of like a 150,000-word-long argument, to build a whole game studio around the idea that we could take the way that things behave, these systems of behavior, these complex systems of behavior in the world, and because we have the capacity with software systems like games to depict those systems systemically, in representational form, we would be able to understand them, critique them, maybe make alterations or claims about them more easily. And that works in theory, on paper. What I didn't think about at the time, 10-plus years ago, 15 years ago, when I was working on this, is that those media objects and that whole design philosophy exist in the media ecosystem with everything else. So if you zoom back, and you imagine, okay, it's not just that we're making these representations of how things work rather than depicting and describing them, but then we are also trying to alter the media landscape such that people are looking for, understanding, and conversing about those kinds of systems. That's what would be necessary. And of course, that's not what happened at all. We don't talk about, like, software models; you don't wake up in the morning and open your phone and look at the latest software-model depiction of the current state of, like, climate or politics. You read text, you look at images, you watch videos, you listen to audio. It's just the 20th century, the 20th century forever. And so the two lessons I would draw, in answer to your question, are that on the one hand it's the same interest, right? This idea that there's sort of deep structure in things. Essence is very unpopular, right? No one likes to talk about it.
They like to talk about transformation and change and becoming, but no, there's something about essence, about deep structure, that seems endemic to grasping something, which is one of the reasons why the McLuhans are of interest to me. But then also that I made that very category error that I'm talking about here today, which is that I thought that this was a design problem that was unrelated to other design problems in the media, or even not design problems, just sort of trends and flows. And now, I mean, I really do believe that that opportunity, that timeline, has been snipped. We don't know what it would be like to go down it any longer. Great, well, I'm gonna open up the floor here for people to jump in. Daniel, are you going around with mics? Yeah, so if you would like to join the conversation, just raise your hand and Daniel will come over. Great talk, by the way. I've got a couple of questions; I'm gonna ask one of them, and then you can tell me where it goes. First, I mean, is it pessimism or is it creative destruction? I mean, that's an economic term as against a philosophical one, right, economic philosophy. Yeah, well, I mean, so the economic position is that it doesn't matter what happens so long as there continues to be musterable productivity, so long as there can be an economic machine that continues running. Whereas pessimism says things are bad and they're getting worse. So if the way that you measure goodness is through economic value, then so long as economic value continues to increase, and so long as it increases for the agents for whom you think it's important that it increase, then you're fine. It's all good. There's only optimism. And you know, it's arguable that this position is the strongest one, right? That even the optimism/pessimism dyad is just a foil for a true interest in continued economic productivity for a selectively smaller and smaller group.
And you know, I don't think we can just dismiss that idea and say, well, obviously we don't wanna go down that road, because in fact, that's the road we've been on for a long time. But the interesting thing about the pessimist, as a sort of figure to embody, right, is that it's like putting on a hat: I'm just gonna ask, what's the worst case? You know, what's the worst possible scenario? Not because you're some sort of masochist, or really a pessimist in the pessimist's sense, everything is going to hell, but rather because posing that question, even from the vantage point of economic development, right, would allow you to see possible scenarios that you would otherwise miss. The interesting thing about the way that technology has been proceeding, even on the economic register, is that without asking any questions whatsoever, it seems to be working out, right? Like, through accident rather than this sort of creative-destruction stuff, mostly through dumb luck, and then the kind of amplification of those scenarios, especially as internet-based services have globalized, and those re-amplify the difficulty of finding new answers, of intervening in these systems. So I'll give you one example, which is probably on people's minds lately, which is this net neutrality conversation, right? I mean, I don't even know if I'm gonna touch this subject in this room. It's dangerous. Well, look, common carriage makes sense; it makes sense for broadband and wireless data to be treated as common carriage. But at the same time, the internet is kind of garbage, and something that might change it in any way is worth at least talking about, at least talking about, right? But you can't even really do that. You can't even say, well, let's just step back, and then you get yelled at on Twitter or whatever by the throngs of, I don't even know what their position is.
Like, there's lefties, there's these sort of centrist libertarians who are the same, it's kind of all over the map. So we've worked ourselves into a corner with a lot of these questions, where we can't even really pose interesting questions about them. And one of the reasons we can't is because we're stuck within the very systems we're supposedly preserving, by means of taking on an obvious position: we wanna preserve at all costs net neutrality, the sanctity of the internet, which we also believe is garbage, and which we now have convincing evidence has had real negative implications for civic life and so forth. So yeah, I think that's the interesting thing about wearing the pessimist hat. It's a license to say, okay, what is awful, or what might go wrong? And let me at least think about that for five minutes, even if then I'm gonna shed it, take it off and come back to reality.

And just one follow-up thought experiment, right? I'm in Buffalo, New York, where I work, and we had an analogous event about 150 years ago: we built the Erie Canal, which was a huge evolutionary leap, a canal that would let boats go from Buffalo to New York City. But then, just as they finished developing this high-tech infrastructure project, trains came in and killed the canal. Just when the trains got done, the highways came in and killed the trains. We can see the same thing, and now it happens a lot faster. It happens with cable TV and network TV, right? We see that evolutionary process. And I wonder, if we were sitting in a room back then, would we be pessimists fighting that process with that same view? Is it just history repeating itself with a new set of technologies? That's my other question.

Hello. Sarah had mentioned going in that I would really enjoy the talk, and she was right. And this is all I think about all the time.
Something that really struck me earlier was when you were talking about the blog example, where there's just so much promise, and how much excitement there is around it. I'm a technologist; I've worked at large tech companies, and when I think of every product launch, it's just people on stage talking about how cool it is that at any point in time you can tell someone it takes 15 minutes to get home. And you don't really think about the kind of data that takes, or what that means for someone's privacy, et cetera. You mentioned also that when people build stuff they have good intentions. And if anything, when I'm around here I actually often hear the narrative that the Bay Area and Silicon Valley are only profit hungry, that everybody cares about money, that they actually don't have good intentions, that they're evil. So my question is, as I think about how to really shift the tides of my field: is it that people are profit hungry and evil, and that's the real narrative, or is everyone actually just way too optimistic and only wants good things, and that's the blind spot? Which side do you think it really is, and where am I wrong? And then how do we change it?

The short answer I would give you is that I think the vast majority of people are blind optimists. They're not power- or wealth-hungry extractionists or something. There are some of those. And one of the interesting features of the tech elite is that it's a particularly odious kind of power and wealth hunger, not because it's different from other kinds of business or from finance, which I think is the ultimate reference point, but because it's dishonest about the power and wealth hunger. You talk to a hedgie and they're not gonna be like, I'm trying to change the world, right? They're straight up about it. Whereas you talk to a tech VC or CEO and they will feed you that line, whether it's true or false.
And you look at the behavior of a company like Uber, and it's a pretty good example, though it's not all companies. But then the folks who are basically the line workers, they really are, well, first they're just trying to make a living. These are basically middle-class jobs at this point in the sectors where tech is flourishing. And they also have the best intentions. They do. So they're on the ground, and there's a certain amount of power that they have. But I also think that that whole modality of optimism, whether it's truthful or false, right? We're just drunk on it. Like nothing can go wrong. And now that we have evidence that actually things kind of can go wrong, that we weren't just kidding ourselves about that, there's an opening to say, okay: can we stop, at the pragmatic level, in the daily, weekly processes of building these products and services, and start asking, so what happens? We're gonna roll out this little test of this product. What happens when everyone in the world is using it? What does that look like? And then do we want to backtrack from it at a design level?

You could also introduce, well, people talk about regulation and other forms of external control as being important, and they are. I think that's another missing bit to this. But we've also kind of gone off the rails with regulatory management of everything, so it's a pipe dream to think that that will suddenly come online. Although it is interesting that the one thing that seems to have revitalized itself in the Trump era is corporate antitrust, which in the eight years of Obama, the kind of cool-dad social media president, none of that was happening. So in other words, I think that a purely regulatory answer is probably not gonna come about. And so unless we get inside the ordinary, everyday worker, we have no hope of averting the disasters of the future.
But just to jump in on your question: isn't one of the expressions of pessimism also better design practice, a more widely informed one?

Yeah, and also a slower design practice. I mean, this business of speed is not just a matter of the increasing speed of change in business and culture; it's also the speed of product and service development and deployment. We've celebrated that for a long time, and it allows us to do these experiments and make these changes, and we feel like we're not hurting anyone in so doing. But it's clear that actually, no, we are hurting people in so doing. And how do you dampen that? One of my hobby horses, a little bit orthogonal to this talk but still relevant, is that the folks in computing call themselves software engineers, but they've never adopted the orientation of civil service that the engineering professions did, through professional engineering certification, but also through just a kind of professional ethos, which is not that different from the way journalists think about their work. So it doesn't all have to come from outside or from tight regulatory control; slowing things down might also help.

We have another question over here.

Hi, thank you for that glorious talk. And I really do think it was glorious, but I want to challenge its label as pessimism, because what I hear is an optimism that there will still be a civilization that will be making progress, at least for somebody or some small group. And you mentioned Philip K. Dick. When I think autonomous vehicles in the current structure, I go full Philip K. Dick and think of fleets or packs of abandoned autonomous vehicles wandering the abandoned hulks of cities as the rest of us go all Mad Max or The Walking Dead, trying to reinvent how you make bullets or something. Right.
So I guess my question is, why are you such an optimist?

No, you're totally right. The pessimism sales pitch was just a lie to get you to come.

Actually, I had a similar question, which is that your net neutrality example made me think that you could frame being pro-net-neutrality as a lack of pessimism about what neutrality means, or a lack of optimism about what deregulation could lead to. So I'm wondering why you choose to frame it as pessimism rather than skepticism, just challenging your beliefs, whatever they are. And I wonder, is that because you think that in the realm of technology we have an inherent bias towards being more willing to believe our positive self-deception than our negative self-deception?

That's a good question. I don't know, I have to think about that, and I will think about it many times in the near future. My gut reaction is that for many years, pessimism was off the table. The moment you started making critical comments about contemporary technology, you were either a Luddite, or you were just an obstructionist, or you were blinkered, you didn't understand. And maybe the only good thing that's happened in the last year or two is that that preconception has been stripped away. Now, okay, maybe we ought to be more critical. But being critical, skepticism, is too modulated, too modest and moderate, and we need a counterpoint to the extreme optimism that we've suffered under for so long. So maybe if we go full pessimism for a while, knowing that it's extreme, that it's too much, then we can find some reasonable space in the middle. And this is maybe not that different from any sort of polarity we might be experiencing today in politics and social issues, where the moment you try to modulate in the middle, you actually end up just being pulled to whatever extreme is acting in the most extreme way.
So like it or not, we have to respond to that, maybe excessively.

You mentioned at one point snipping off timelines. It feels to me like we're right now living in an edge-effect time. How do you get out of that?

Yeah, one of the amazing things about the arrow of time is that we don't know what the alternatives might be. Traditionally there's science fiction, speculative fiction, or the speculative design concepts that borrow from that premise but apply it to built objects or the built environment. One of the interesting things about those traditions is that they ask questions about what could be, but typically it's allegorical; it's actually about the present. Whereas it could also be about lost presents: there's historical fiction, or other ways of thinking about lost presents from alternate futures of our actual past, right? And then there are the alternate futures of our actual present, which is not what science fiction traditionally does. And so if you muster those objects, those traditions or trends, whatever modes, as tools deliberately, in a way that doesn't throw them into the cultural abyss of sci-fi, which is a problem, or simply turn them back into these allegories of the present couched as the future, then I think that's one possible tactic. It's certainly not the only one, and it's probably insufficient, but it's one that I think about a lot: if we can just open our eyes to this string-theoretical multiplicity of all the possible futures that we right now sit at the intersection of, and think of them as possible actual futures, then we could design toward them, rather than just, we'll just do whatever, you know? Whatever happens is fine, because we did it, and then we meant to, and then you tell the story of how you really meant to. That kind of planning, it'll look like planning at that point, right?
Just a follow-up. I read alternate history a lot; I think about it. Kim Stanley Robinson has some great climate future histories. I can think that way, but how do we get the whole electorate to think that way? That's our big problem right now: it doesn't.

Yeah, the whole electorate is probably not a good target market for much of anything. I mean, think about where change happens. It doesn't happen from the will of the people, even though they often get to vote, at least in theory, on these things. It happens at nodes of power and influence. And so if we can change those, then we might actually have more influence on that collective than by going to them at the grassroots.

Following up on that a little bit, Sarah Watson. I've been thinking a lot about the trajectory of how these pessimistic or critical conversations have been happening, but also how they've changed over time. And I'm wondering, even over the last two years, thinking about the worst-case scenario: plenty of people have talked about, like, the worst case of Facebook, and yet nothing happened, or nothing was possible to happen, until a real worst-case scenario actually happened, Russian interference being one of these.

Yeah, we were talking about exactly this thing in the last election cycle.

Right, but it took the worst-case thing actually happening for anything to change, for people at scale to actually care and to actually respond. So to that end, where is that line, and what is the effect, or how do you think about influence, when it takes that kind of worst-case example?

Yeah, so what are the possibilities? We're idiots. We were unpersuasive. It was not important.
It was too seductive, and no one could see the alternatives because they couldn't feel them; they were abstract. It's possible that people are just very bad at future planning, so even when it seemed plausible, that plausibility didn't seem near enough in time to be actionable. And I'm sure there are dozens of other possible cases we could run, but now, like, that's done. And these sort of small trim-tab adjustments that Facebook, for example, is making are probably not that important. So giving up on that and moving on to something else is one possible answer. I guess what I'm trying to say is that we have to start acting incredibly tactically. And that is not something that, for example, the political left in America, or the sort of technology-friendly counter-cyber-libertarian community, is very good at doing. It's just all idealism, you know? So moving back into tactics, the very pragmatic realpolitik of this, might be one answer.

Well, that gets to the question of audience, right? To which audiences are you actually forming these interventions or reframings or whatever?

Right, right. Even if, let's say, we embrace just buying the solutions, right? We need a sort of Koch brothers for the left or something. And there are plenty of billionaires who are sympathetic to this, but they are not going about their influence, using their money for influence, in the same way. It's not as aggressive. I don't know how you convince folks like that to do so. Instead they buy media companies, right, and have these kind of hobby newspapers or magazines or something like that.

I'm curious, Ian: in the context of answering Sarah's question, you brought up the notion of persuasion as a kind of core problem.
Of course persuasion is the object of rhetoric, which is the most venerable theory of communication in, certainly, the Western cultural tradition. And in your gaming work, persuasion has also been a key issue. I guess I'm wondering, with respect to the question Sarah was asking also, the prior question: where and how does persuasion happen and become efficacious in a media space, a media ecology, like the one we inhabit today?

I mean, the good rhetorician, whether a Burkean or otherwise, has some understanding of and respect for their audience, and acknowledges that audience. And that may be the biggest missing bit, if I had to pick one. There are reasons for it, but it's not meeting the audience halfway; it's meeting that audience almost all the way, maybe even more than all the way, in order to then make an appeal of some kind. In some ways these systems reinforce the bad habits that draw us further and further away. One of the things we talk about a lot at the Atlantic, and in media in general these days, is the problem of the coastal media elite: the fake news media environment that Trump and others have successfully antagonized, which was and remains an actual problem. You live in New York or D.C. or San Francisco or Los Angeles or wherever it is that media gets made, and then occasionally you drop-ship a couple folks into Ohio to do some sort of, it's almost like a colonialist affair, right? Look at the strange behavior of middle America, right? The natives. So people are reacting negatively to that for a reason, and it's continuing. The New York Times is particularly expert at this, even in light of everything. So that's just one example, but I think maybe that's the most important bit. And I don't feel like I'm good at this yet.
And so I feel sensitive about calling it out as a bad habit, but maybe that's the big one. What are people encountering, experiencing? One reason we missed the Facebook stuff is that people love Facebook. They love it. People love Google too; it allows them to do things that feel magical and that give them immediate and enormous value.

Well, yeah, if we ran these scenarios on the immediate past, we could probably, in hindsight, come up with some likely scenarios that might have averted certain kinds of effects that we might construe as negative and that others might not. But I don't know if that's where we wanna spend our time. It's an interesting affair; maybe someone should be involved in thinking through that as a way to move us into the present. I'm not trying to be ahistorical here, even though it's hard to even call this history, we're talking about two years ago.

Exactly. What counts as history?

But the urgency, because of this speed business, the urgency of the near future suggests that maybe we don't need to answer that question.

Where's the mic? Wait, is this on? Yeah, it's on. Moira Weigel, thanks so much for your talk. I wanted to ask, following up on this stuff about politics and tactics: what is the significance of that split you alluded to between VCs and management on the one hand and engineers and rank-and-file tech workers on the other? Because it seems to me that, both as a result of the election and the tech CEOs' meeting with Trump afterward, and because of the increased material pressures of living in a place like the Bay Area, the reality that for the most part tech workers are labor and not capital has become clearer. The idea that most tech workers will never be VCs.

Yeah, well, they're not capital owners, even though they might appear to be because they have stock options or whatever.
And so I wanted to ask, in terms of politics and tactics and strategy, in terms of that civic responsibility, or resisting the Philip K. Dick future: what do you see as the significance of that split?

Yeah, I guess the observation I have is that when outsiders make this claim, whether it's journalists or scholars, or even folks who are inside the tech world but not necessarily in what I'm calling a line-worker capacity, they don't seem credible for some reason, right? And also, that crop of labor is very inaccessible. They make themselves inaccessible, and they're very tightly controlled by their organizations. It would actually be really hard to do an on-the-ground investigative report of worker life at Google, and there are all sorts of reasons for that. So when we do see little bits and pieces of it, it's usually through financial or business news. Just this week, in fact, there was an interesting little bit of exhaust emitted from the world you're drawing attention to, where Uber workers who hold options are trying to unload them in order to turn them liquid. But because it's all going through SoftBank or something, there's only the option to do a certain amount of the deal, and you also have to be registered in the right way to be able to transact, and there are all these strange secondary markets for financial instruments these days. So that's where it reaches the surface: it's all about money. And how do you sell that? Okay, you're living on $250,000 a year in San Francisco and you have all these stock options, and you want me to empathize with that? And this is back to the business of audience, actually.
So for the tech laborer to appear as a laborer, like any other kind of laborer, something will have to bring them together as a group and create a kind of equivalence between their plight and the plight of quote-unquote ordinary people, right? I don't know how I would go about doing that, but again, it's a tactical question rather than a question of ideals.

I hear these stories all the time. People talk to me privately about them constantly. And there is a group, at least in San Francisco, with a chapter here, that helped with the Facebook cafeteria workers unionizing and things like that.

Yeah, but it's always those kinds of workers, right? And when they go to the contract workers, they all get fired, I think. People know about the Filipino workers scrubbing data, and people know about the cafeteria workers. We've seen good stuff. And that's because of this New York Times effect too: that's the story that appears to be bringing the plight of the downtrodden to light. But it's just unseemly to say, well, there are these six-figure-earning knowledge workers who are also the downtrodden. That story is just not going to fly. How do we tell that story in a way that will?

One last question. This may go back a little bit before you were born, but in '67 a Harvard economist, John Kenneth Galbraith, published a book called The New Industrial State. It's a bit of a simplification, but his main thesis was that perhaps markets aren't such good allocators of capital, and you should have some kind of local deity that says, basically, where capital will go, what will be developed. Galbraith was around 6'6", very tall. He looked around and couldn't find any local deity but himself, so it worked out. I'm an MIT economist, so I'm not going to say that. It's true.
So, okay, my question is: given your observations, and thinking even of plowshares, what we talked about when nuclear power was invented, how it turned swords into plowshares, something for good, how do we proceed to get the good out of technology and stop the evils?

I did this piece for the Atlantic a few months back about this kind of unassuming New Zealander who lives mostly on a boat and is in and out of network access. He's disconnected from the normal infrastructure, not to mention that down in that part of the world they're already disconnected; it's already difficult to get data of any kind in Australia and New Zealand. One of the great observations he made to me is that there's a reason rsync was invented in Australia: the infrastructure of connectivity, even fiber, just wasn't reliable enough. Anyway, he has been reconstructing the same kinds of social media tools, the same kinds of consumer-facing tools that we use at a global scale, at this distributed local scale. It was a really interesting set of examples, because it suggested that maybe the problem isn't in the product design itself but in the idea that all information should be globalized and all access should be globalized. And if you stop and think for a second about the encounters you have on a day-to-day basis that are bad, where these atrocities start to bubble up, it's often because you want to be working at, maybe not a local level, maybe a local level, but at least not at the global level, and you just cannot anymore. Everything is immediately globalized. So this isn't a sufficient answer to your question, but it's one example of an intervention that, okay, is still experimental, is hardly widespread, but that I can imagine folks adopting; you start to see it happen. And there are places where there are examples at scale that are good and bad.
I mean, there's this lovely slash awful social media service called Nextdoor, which is mostly people complaining, mostly people demonstrating that they are in fact racists, or complaining about dog poo, all that stuff. But as someone who's very involved in local land-use politics in my community, which is a hard sell to anyone, through an example like that, which is globalized, you sign up for this service, but then it gets localized down to your neighborhood or the nearby area, you start to see more productive, positive, or at least functional outcomes take place. Even though the stories people like to tell are about how racist everyone on Nextdoor is, or there's a hilarious parody account on Twitter showing all the ridiculous things people say, now you can in fact borrow a planer from someone down the block, or talk about whatever local school issue is going on. And that sort of small-scale intervention, it seems like people have almost given up on it. Like, why even bother? But the "all politics is local" aphorism is aphoristic for a reason. So I think, in summary, if we had a bunch of these experiments that are not aspiring toward singular answers for everyone, that are not all billion-plus-dollar companies taking over some entire sector, that would be a good start.

Since you were focusing on a kind of new model of localism, what about the pace issue? Are there ways of slowing down, or of creating a framework that allows for a different way of modeling, of building platforms?

Yeah. I mean, I think locality is one way into slowness, actually.
Just consider the quantity of stuff you have to deal with: think about all the things that happen in the world every day, and now think about all the things that happen within your extended global community every day, and then zoom all the way down to your block or your floor or whatever, and there are just far fewer things that happen at the local scale. And one of the reasons that many ordinary people appreciate and enjoy a platform like Facebook is that, unlike us, unlike people who are in a room like this, they are mostly using it as a local conversation tool with a small group of people, and then extending that to a kind of global phenomenon, but that's not the global phenomenon that everyone encounters. So the idea that we will slow down, that we'll just pull the plug on this, I mean, you can imagine these sort of experimental, Oulipian constraints applied to the services that we use, but that's just an indulgence of the elites to even ponder, right? So we have to get at them sideways through other means. At the same time, regulation is one of the things that slows companies down. There was this great piece recently about how Uber is in essence a regulatory arbitrage company, and if your core business is regulatory arbitrage, then the more confusion you can throw at the apparatus while you work out the rest, the better. So of course enforcing regulation at the local and national level would be another way of going about it. But those answers, again, they're all super weird, very tactical, very boring. They're not the kinds of things that ring of "I found the answer," the kind we're used to hearing from tech and that we give an audience to.

Are we good? All right, well, thank you very much, everybody. Thank you, Ian, especially.