I'm Torie Bosch, and I'm the editor of Future Tense, which is a long-standing partnership of New America, Arizona State University, and Slate magazine. What we do is look at the intersection of technology, policy, and society. We have a channel on Slate (hats off to the Slate team here) at Slate.com/future-tense, where we cover the future, and historically we have live events. Of course, they've mostly been online for the past two years, so this is one of our very first in-person events, and it's really wonderful to be back here at New America.

We're here tonight to talk about a book I edited for Princeton University Press called You Are Not Expected to Understand This: How 26 Lines of Code Changed the World. It's made up of 26 essays by technologists, historians, and journalists about specific events in programming history. So we have essays on the first police beat algorithm, on the code that tells your Roomba how to navigate your living room, the first computer virus, the JPEG, and much more. And the title comes from a famous comment left in the Unix source code back in 1975. We're talking about code pretty broadly here in the book. There are lines of code in the illustrations in the book, but we're really talking about the bigger picture that each line represents. The big idea is to help both experts and non-experts think through how technology is made by humans who are sometimes brilliant and sometimes biased and messy, and sometimes just really hungry to get to lunch and get their work done.

We're really lucky tonight to have three contributors with us. Unfortunately, Elena Botella is sick. Elena, if you're watching, we hope you feel better soon, but we have three really great speakers as well. So the four of us are going to talk for about 30 minutes or so, and then we'll open it up to Q&A both here and online afterward. Please, if you're in person, stick around, have another drink, maybe buy a copy of the book if you're so inclined, and keep the conversation going.

So I'm going to introduce our contributors in chronological order of their chapters. First we have Arthur Daemmrich, who is the director of the Smithsonian Institution's Lemelson Center for the Study of Invention and Innovation. And starting in early 2023, he'll be the director of Arizona State University's Consortium for Science, Policy & Outcomes. So welcome to ASU. His chapter of You Are Not Expected is called "Spacewar: Collaborative Coding and the Rise of Gaming Culture." Then we have Charles Duan, who is a postdoctoral fellow at Cornell and a senior policy fellow at American University's Program on Information Justice and Intellectual Property. His chapter is called "A Failure to Interoperate: The Lost Mars Climate Orbiter." And finally, we have Will Oremus, who is a technology writer for the Washington Post and my former Slate colleague. His chapter is "The Curse of the Awesome Button." I think we should start by having each of you tell us a little bit about your chapter and the story behind it. So let's start with you, Arthur.

My chapter tells the story of Spacewar, which has kind of become lore, become famous. There's actually a lot of mythology about it online. We had the opportunity to interview the seven surviving members of the eight people who coded the game, to hear their firsthand accounts, and we used that as the basis for this.
So the game was coded on a PDP-1, this minicomputer that DEC, outside Boston, had donated, though minicomputer wasn't the term at the time. It was intentionally not called a computer, because computers at the time were room-sized machines that cost a fortune, and DEC was trying to market this to business as something useful for payroll, for accounting, for routine operations. It was selling for $120,000 at the time, which would be a little over $1.1 million today. So not a cheap piece of equipment.

There's this loose, quasi-pre-hacker culture that has formed around a model railroad club. And railroading, of course, is about zeros and ones, about switches, about routing electricity, about doing some of the things that then influenced computing. This group, and it is all men, some of them students at MIT, others just in the area working at MIT or at Harvard, discovers this piece of equipment and makes a deal with the faculty member who's in charge of it: if we write you a compiler, you'll let us do other stuff on it. And over a weekend they write the first compiler for the PDP-1, and then they start tinkering with it, writing various programs. And they were passionate about goofy science fiction movies of the time; they would go down to South Boston to the movie theaters. And so they come up with a space game. It's two spaceships that shoot at each other. And they put a sun in the middle. And then the next person says, you know, the sun ought to exert gravity, so he adds that code. And then another one says, well, you ought to be able to escape someone shooting you, so he adds hyperspace. And bit by bit, not all at the same time, they code up this pretty remarkable game that, again, becomes legendary. DEC eventually starts shipping it with each of the PDP-1s, so when one is installed at your corporation, they run it to show you that the machine works.

Now, why is that interesting? Well, in part because the way they coded the game really pushed this computer to the very limit. And gaming has done that throughout the history of computing; it has been crucial to pushing computers to the limit. It's also been a market maker. People have bought personal computers because of the games available for them. Let's be honest, most people didn't buy a personal computer in the '80s and '90s to do spreadsheets. Corporations did, but not at home. So the chapter kind of tells that story.

And then there's an interesting second and third life to the game itself, to the code. There's an effort to make it into a commercial game: in California they set up a PDP-1 with some stations and try to get people to pay money. That isn't very economically feasible. And then it influences one of the early arcade games that Atari produces, which is basically an unplayable game. It's a real mess. It then also has a life on the Atari 2600 at home; there's a Space War game. And then more recently, of course, through the diligence of a number of really avid coder-historians, there are some really remarkable emulators online, where you get a remarkably accurate version, given the constraints. I could go on, but I'll stop there.

And you've played the game, right?

I've played the game. I've played it against my 15-year-old daughter, who can crush me in just about any other video game, but not this one. Although it didn't hold her attention quite as long as the Nintendo does.
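To make that "the sun ought to exert gravity" step concrete, here is a minimal sketch of the kind of per-frame, inverse-square gravity update a game loop like Spacewar's performs. This is illustrative Python, not the original PDP-1 assembly, and the constants and names are invented:

```python
import math

G_SUN = 5000.0   # gravitational strength of the central star (hypothetical value)
DT = 1.0 / 30.0  # one frame of simulated time

def gravity_step(x, y, vx, vy):
    """Advance one ship a single frame under the pull of a sun at the origin."""
    r2 = x * x + y * y          # squared distance to the sun
    r = math.sqrt(r2)
    ax = -G_SUN * x / (r2 * r)  # inverse-square acceleration,
    ay = -G_SUN * y / (r2 * r)  # directed toward the origin
    vx += ax * DT
    vy += ay * DT
    return x + vx * DT, y + vy * DT, vx, vy
```

Part of what pushing the machine to its limits meant was making an update like this, plus all the drawing, fit inside every frame for both ships.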
And Charles, tell us about Mars.

Yeah, so I don't think it's quite as exciting as Spacewar, but it does involve space, so hopefully that's a little bit of something. I came across this story many years ago. I think a lot of people have heard the story, where just a multiplication error ends up sending a spacecraft that was meant to go survey Mars awry; the spacecraft ends up being lost, costing millions of dollars in lost space exploration funds. But I was curious what exactly happened with that. And so I started looking through a lot of the reports that were generated after the crash and some of the articles that were written by the engineers. And it turned out to be a very interesting story.

What ended up happening was that NASA was basically repurposing a program that it used to estimate the position of its spacecraft. You know, when the spacecraft is going through space, there are no street signs out there, so you have to figure out where the spacecraft is basically by adding up all the forces on the spacecraft. This would be sort of like trying to figure out where your car is by counting up how many times you pressed the gas pedal. But NASA was actually really good at this. In order to do that, though, they had to get input from the spacecraft on all of the forces that were acting on it, particularly a couple of little thrusters that helped to push the spacecraft in the right direction. And they contracted that part of the software out to, I think, Lockheed Martin. So Lockheed Martin wrote this little program that collected the information from the spacecraft and turned it into a data file that then fed into NASA's program. And NASA's program expected metric units, except the Lockheed Martin program produced imperial. As a result, everything was off by a factor of about four and a half, meaning that by the time they got to Mars, the spacecraft was off by, I want to say, millionths of a percent. It was a very, very small amount. But given the amount of distance that it traveled, and given that you're trying to figure out where the spacecraft is by just adding up all the forces, it was enough for the spacecraft, instead of being in orbit above Mars, to be on Mars, and as a result it probably crashed into the Martian surface. And so I think it was just an interesting exploration of what happens when you have that sort of collaborative coding that potentially doesn't go the way that you would hope for it to.
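The gas-pedal analogy describes dead reckoning: you estimate position by integrating every reported force over time, so a small systematic error in the force data compounds silently. A minimal sketch with invented numbers (this is not NASA's navigation software; 4.448 is just the pound-force-to-newton conversion that was missed):

```python
LBF_TO_N = 4.448  # newtons per pound-force

def dead_reckon(forces_newtons, mass_kg, dt_s):
    """Integrate force readings (assumed to be newtons) into a 1-D position."""
    velocity = 0.0
    position = 0.0
    for f in forces_newtons:
        velocity += (f / mass_kg) * dt_s  # a = F/m accumulates into velocity
        position += velocity * dt_s       # velocity accumulates into position
    return position

# Hypothetical thruster readings, reported in pound-force...
readings_lbf = [0.9] * 1000
# ...but integrated as if they were newtons:
as_computed = dead_reckon(readings_lbf, mass_kg=600.0, dt_s=1.0)
actual = dead_reckon([f * LBF_TO_N for f in readings_lbf], mass_kg=600.0, dt_s=1.0)
print(f"computed: {as_computed:,.0f} m  actual: {actual:,.0f} m")
# Every individual reading is tiny, but the two accumulated positions
# differ by the full factor of ~4.45.
```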
Now, Will, tell us about the awesome button.

Yeah, my chapter is the only one of these three that doesn't involve any spacecraft or rockets. But when Facebook was just getting started, and it was mostly just on college campuses, they noticed that when somebody posted something, you know, having a party, or just aced that test, or man, I was so wasted last night, people would comment like, oh, cool, or great, or awesome. And if it was a post that really appealed to people, it would just get comment after comment after comment, 30, 40 people saying great, good, awesome. And Facebook at that time was defining itself in opposition to MySpace. MySpace was the incumbent, the dominant social network; it was maximalist, very cluttered, everything, everywhere, all the time. And Facebook prided itself on being sort of clean and functional and user-friendly.

And so some of the designers at Facebook were like, I don't like seeing all these comments, it feels inefficient. It offended their design sensibility, and probably their logical brains as well. So they decided to try to come up with some easier way for people to express approval of a post. They started a project called "props"; they wanted a way to give props. In retrospect, the like button feels obvious, or inevitable. At the time, it did not. It was not clear: would you even make it a button? Buttons weren't common on the internet in 2007. If you did make it a button, would it just be a picture? Would it have words? Would it be upvotes and downvotes? Would it be yes and no? So my chapter is about the process of figuring out what that button should look like, about the thinking that went into it, and also about the thinking that didn't go into it, which was: what might happen if this becomes the universal currency of content across the global internet? So it's partly about the unforeseen consequences, too.

So one of the goals for this book is both to help people who don't think about these things for a living understand the thinking that goes into them among those who do, but also to help people who do this for a living think bigger-picture about the work they do and where they might be able to think about it a little bit differently. So if there's a moral to your story, what would the moral be, Arthur?

Wow, a moral, that's great. No, I like morality tales about technology. I would say one moral, which we tend to resist wanting to draw, is that in a world in which there is no intellectual property at stake and next to no financial reward feasible, it is still possible for a group of enthusiastic people to organize themselves. To have some degree of hierarchy: Steve Russell kind of becomes the point person, writes the initial code, but they're also checking with him, is this okay, as they go. But also that you can actually have a disaggregated team working asynchronously and producing something pretty remarkable. So out of the Spacewar coding project, which is not an organized project, we get one of the very first video game controllers, with a button and a couple of levers, which will eventually become the joystick. We get Expensive Planetarium, a star map. We get Expensive Typewriter, a way to write code as text, as opposed to pure machine-language coding. So I would guess that's one piece. How often you could replicate such a thing, how often you could pull together the right set of people, whether you would get a much more complex piece of software like the ones we use today out of it, that's a little less clear.

Charles?

Yeah, I think that's actually such an interesting lesson. It carries through a lot of the stories in the book, the role of this sort of open innovation, this very academic approach, which of course contrasts very much with the story that I have, where we have a big government agency and a big company doing the coding. The interesting thing about the story, I found, at least, was: this is a book about 26 lines of code, so where was the line of code that went wrong? On the one hand, NASA says, well, it was Lockheed Martin's fault.
They're the ones who were supposed to have multiplied by 4.45. The contract said that they were supposed to use metric; they didn't use metric. On the other hand, Lockheed Martin says, no, no, no, it was NASA who made the mistake, because NASA told us, here are a couple of sample files, and the sample files they sent in order to test the program were apparently written in imperial units. So they said that everything was right. So in a sense, Lockheed Martin's code worked the way that Lockheed Martin said it should, so they were correct. NASA wrote its code the way it expected the code to be, so it was also correct. The mistake is somewhere in between. It's in a sort of space of communication that occurs between two sets of programs, and that's not part of a computer program per se, but it is a form of code, a way in which computers talk with each other. It's a specification of what we now call interoperability between computer programs.

At the time that I was writing this, the Oracle versus Google litigation over the ability to write compatible versions of Java was on the table. We had a lot of questions about social media companies letting other social media companies come in and interoperate with their messaging platforms and such. So the idea that we should really be looking at that sort of in-between space as an important part of the coding ecosystem, one that really defines what the environment is and what the technology and policy look like, seemed like an important lesson to me, and so I thought that was an interesting thing that I was able to draw out of that story.
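One way to make "the mistake is somewhere in between" concrete: the data file the two programs share is itself a kind of code, and it can carry its assumptions. Here is a hedged sketch of a reader that checks a declared unit before trusting the data; the JSON layout is invented for illustration and is not the actual Mars Climate Orbiter file format:

```python
import json

EXPECTED_UNITS = "N*s"  # what the downstream trajectory code assumes

def load_impulse_file(path):
    """Load thruster impulse data, failing loudly on a units mismatch."""
    with open(path) as f:
        data = json.load(f)
    declared = data.get("units")
    if declared != EXPECTED_UNITS:
        raise ValueError(
            f"impulse file declares units {declared!r}, "
            f"but this program expects {EXPECTED_UNITS!r}"
        )
    return data["impulses"]
```

Neither program is wrong in isolation; the check belongs to the specification both sides share, which is exactly the in-between space the chapter describes.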
Well, one way to think about the moral of the story of the like button. Well, let me back up. I talked to some of the people who were involved in designing and coding and implementing the like button for Facebook, and all of them said that at the time they were building it, they had no idea that it would go on to become such an influential tool, and that if they had known that, they might have thought about it differently. One of the people who designed it, a woman named Leah Pearlman, said that she thinks the mistake was putting a counter on it. If it had just been a like, that would have been one thing, but when you can make the number go up, it gamifies the whole system and trains people to try to get more likes, and in many ways I think the digital media world that we inhabit today was shaped by the like button with the counter. So I thought that was a fair point. Another one said they didn't regret it, because they couldn't have imagined doing it differently; they couldn't have known at the time how it would turn out.

If I were to impose a potential moral, I guess: Kant said that we should universalize our actions, that we should imagine what would happen if everybody did the thing we're proposing to do in the same circumstance. Can we envision that world? Would it even be possible? Would it be desirable? So maybe coders who are working on a tool, even if they're at a little startup that has three product managers, which is how many Facebook had in 2007, in a little storefront in Palo Alto, maybe they could think: what if this little thing I'm designing were to become huge and blow up? What might be the downsides? That's asking a lot. And there's a caveat to that moral, the reason I think it's not quite that clean.

I go back to Ezra Callahan, who was the internal communications manager at Facebook at the time, and I talked to him about it. He was the one who said, you know, I feel sort of bad about some of the effects the like button has had, but I don't regret it, because I couldn't have done it differently. And he said, even if we hadn't built a like button, somebody else would have, and they probably would have outcompeted us, and the like button would have become ubiquitous anyway. I thought that was a really interesting and sort of dark point. Even if you were to take this as the moral and apply it, maybe that would just mean your business doesn't win, and the one that doesn't apply it beats you.

And the chapter is called "The Curse of the Awesome Button" because that was the original name for it, right?

Yeah, so their project was codenamed props, and the first implementation, and there's a tie-in with the collaboration stuff here, because the first implementation was at a hackathon. The way Facebook worked in those days was that periodically people would put a bunch of ideas on a board, and then the ones that got the most votes would be the subject of a hackathon. The most likes, if you will. I don't know if they had downvotes. And so the props project was the subject of a hackathon, and the concept that won the hackathon was a button that said "awesome." It was in keeping with the ethos of Facebook at the time. I mean, this is back when there was the poke; there were all sorts of weird little things about it. So they ended up building the awesome button according to this concept that won the hackathon. And they were pretty excited about it. They thought it was going to get approved, and they sent it to Mark Zuckerberg. And he surprised them by saying no. He didn't like it. He was worried about several aspects of it. One of them was that it would cannibalize other forms of engagement: if people could just press like, then why would they craft a thoughtful reply? Why would they share something? But he also didn't like the name "awesome"; he thought it should be "like." And the project got tabled. It sort of lost its momentum, and it kind of languished for a while. People were like, I don't want to work on that, it's just going to get shot down again. So it didn't really get taken up again for another year or two, and it was finally implemented, I think, in 2009. In between, the project was considered cursed. They called it the curse of the awesome button, because nobody could figure out how to do it in a way that would get approval from the top, or that would work in all the ways it needed to work.

So this idea that if we hadn't done it, someone else would have: I guess it raises for me this question of responsibility. Is it sort of denying your role in it to say it would have just happened anyway? And I guess this goes to your point about collaboration as well. In a collaborative coding environment, or in any kind of innovative collaborative environment, are there ways in which people start to work together in directions that then maybe can't be course-corrected?
I mean, I don't know if you've thought at all about how collaboration can end up obscuring responsibility, that sort of thing.

Yeah, that's a pretty profound philosophical question in how we do technology development. But absolutely. We've certainly seen that greater diversity on teams generally leads to more innovation: innovation in the sense of a better understanding of the eventual consumer, a better understanding of the breadth of a market. Certainly the Spacewar group was a narrow band. It was white men, and the users of the game for a very long time were white men, because it's a PDP that's going into either corporate or university settings, in a time in which we know women were actively coding in the '50s and into the '60s. But these minicomputers in the computer lab become a very male domain. So I think that's something to point to: where do we draw the boundaries of what we consider to be the in-group for collaboration, and the problems that can, for lack of a better word, engender?

And there are other chapters in the book, most notably, I think, Joy Lisi Rankin's on Dartmouth and BASIC, and then Claire Evans's on COBOL and Jean Sammet, that talk a lot about the history of women in computing, and how it was a much more egalitarian sort of system until these spaces, specifically at elite universities like MIT and Dartmouth, started to dominate and to overcome the preexisting structures of more equality, or at least more access.

So, Charles, you talked a little bit about interoperability and social media at the time you were writing the chapter. It's something that I think is a lot more on people's minds right now, as Twitter is in its very strange moment. I'm curious whether there's any discussion going on now about interoperability and the next phase of the internet, as we're starting to see, as the Atlantic put it, the end of these major social networks and more atomization of the internet.

Yeah, there's sort of pressure coming from two directions that I've at least been following. The first is that there's a lot of concern about these big platforms. When Elon took over Twitter, there was a lot of talk about moving over to Mastodon, which is a more distributed sort of microblogging platform that allows people to have different servers that they're working off of. All of that, of course, depends on the ability of these different servers to talk with each other. So the idea of a distributed network, in which you don't have a single platform that controls everything, much like what the internet looked like back in the '80s, '90s, early 2000s, is becoming more popular as people get concerned about this. At the same time, at the top, legislators are starting to worry about a number of things with regard to some of these larger platforms: their content moderation policies, whether or not they have too much influence over advertising. And so there have been a number of efforts to look at different ways of trying to deal with that in terms of regulation or policy.
But the idea that maybe we should let some of the smaller platforms come in and be able to communicate with these large networks of friends, or message users on Facebook or some of the other platforms, has become a very attractive idea, because it lets these smaller platforms at least have a foothold in the market. So I think it's going to be an interesting conversation. Now, what's been going on on the technical side, of course, is that we went from this era of very free innovation, where you had the IETF just saying, here are the standards for the internet, anybody can use them, anybody can build a website, anybody can follow these specifications, to a world in which a lot of the standards organizations are much more dominated by the bigger companies, which really want to keep things to themselves. So, as a technical matter, that era of easy compatibility and open collaboration has changed a lot. Whether these political changes push that technical environment in a different way is something we'll have to see.

The web is such a beautiful thing, because anybody can build a website, and they don't need the permission of any platform to build it, and all these different browsers can access the websites. And Mozilla is a force for trying to keep that alive. It is interesting that the interoperability stuff has also become an issue in the antitrust scrutiny of the big tech platforms. I run into something at home that's not as momentous as sending a spacecraft to Mars, but I have a Sonos smart speaker, and it can run either Google or Alexa, but it doesn't run them that well. So I have a Sonos, and then I have a little Google speaker, and they both run Google, and they're supposed to talk to each other, but they always get confused as to which one I'm talking to: the wrong alarm will go off on the wrong one. It's because Google doesn't care to help Sonos interoperate with its technology. And Sonos, actually, I think there's a lawsuit, don't quote me on that, some kind of kerfuffle around the interoperability there. But it's just an example of how, even in our daily lives, the failure to interoperate has an impact.

Yeah, my favorite example is: have you ever tried to open a non-Microsoft Word document in Microsoft Word? Why does this thing look all weird? That's sort of the classic example of what can happen when interoperability doesn't work. In fact, one of the things I found when I was researching the Mars Climate Orbiter is that for a while there was a file format problem, and some poor clerk over at NASA actually had to hand-rewrite all of the data files to send them over, to basically translate them between the two computers. So yeah, it can be an annoyance.

For four months, right? Every day for four months, or something insane like that? I can just imagine that every day your job was to take this WordPerfect file and retype it as a Microsoft Word document. That was this poor guy's job.

Yeah, you know, we often take for granted that coding is going to be this sort of easy thing that's going to be open to a lot of people. And I think the environment has changed a lot in that way.
At one point, that was the way you generated and transmitted knowledge: rewriting old texts in medieval monasteries. But I'm curious about the absence of testing. The Spacewar story is all about incremental change. They keep changing the code, running the program. Does it run, or have we now overtaxed the machine? How can we tinker with it? How can we make this work? It sounds like they did one test run and then they went with it. So is this a failure of modeling, or is it really that if you're going to launch a spacecraft, you can't fully test it?

Yeah, so I think that, at least from what I read, they had some sample files. And they ran the sample files, but it seems like the sample files did not actually match what NASA wanted. So there may have been some sort of disconnect there. It's not exactly clear; everybody's blaming everybody in this sort of situation, unfortunately. And I think part of the problem was that it was such a small error that it would have been hard for them to see it numerically. In terms of the overall calculations, if they were only to look at the final results, they would see numbers that were not very large. But I think one of the questions is: when you have code that is so easy to write, but also so easy to write wrong, what do you do to make sure that things are going correctly?

You know, we were talking about the ethical issues. I've been lucky enough to be on a couple of National Science Foundation-funded projects, and one of the things they do is ask to have lawyers and ethicists work with the computer programmers to see if they can identify any issues with these new technologies that they're building. I think it's a really interesting opportunity for collaboration. It's also really hard, because, as a lawyer, I'm seeing these technologies in the frame of mind of what I know, and I can't imagine what people are going to use these sorts of things for. Like with the like button: we didn't know that that counter would end up creating all of these sorts of problems. So there is that difficulty there.

Right. I mean, also, you can't get bogged down every single day in the great ramifications of this tiny thing you're working on, because most of the time it will be nothing. It's just that periodically it will be something. In another example of this, we have a chapter from Ethan Zuckerman, a tech advocate, who, in a story he's probably really tired of telling at this point, but he told it one more time for us, coded the first pop-up ad. Which then completely changed the internet, made the experience worse for all of us who remember the internet in the '90s. I had my Backstreet Boys fan site on GeoCities, and the pop-up ads just never stopped. But he was just trying to do what his boss told him to do, and then it ended up infecting the entire internet in ways he's still grappling with. So you don't know exactly what could end up having those huge consequences.

Now, we're going to move to questions in just a moment. But one other question I had for you is, of course, again, going back to Twitter, because, as Will knows better than anyone, it's the tech story at the moment, or one of several tech stories at the moment.
You know, when Elon Musk came in, there was this brief edict that all the engineers had to print out all of the code they had worked on in the past 30 to 60 days, to be reviewed by Musk and his team. And then that was overturned, and everyone had to shred their code. But I'm curious about that first impulse, the idea of looking at code as a proxy for productivity. Does that seem like an effective way to think about it?

Yeah, this is really funny. I mean, the Musk Twitter saga is like watching what you know will be a future movie unfold day by day. We can all imagine the scene in the movie where the 50 Tesla engineers show up at the Twitter offices and start making everybody print out their code, and everybody lines up at the printer getting these sheaves of code and then walking into the offices. And then later they have to shred it all, because they're like, oh, I guess that didn't work out so well. But yeah, it was really funny. Musk, I guess, had the idea that he wants to keep the people who are the hardcore coders, who are really churning out the code. He's tweeted this meme where there's the one construction worker digging in the hole and the ten standing around; he thinks that's what things are like at tech companies like Twitter. And so he wants to get rid of the ten people standing around and just keep the one who's in the hole digging. And looking at how much code they had written that was in the Twitter code base was his proxy. I think most people who work in software development would probably tell you that's a terrible proxy. At the most basic level, elegant code has fewer lines rather than more lines. But I don't know, what do you all think about that?

I didn't get the printing part. Couldn't you just read it on a screen? But yeah, I used to work as a programmer, and we didn't do this at my company, but a lot of companies back in the mid-2000s, even 2010, would use, what was it, SLOC, source lines of code. It was this measure of how many lines of code you wrote. And people quickly realized that if they just put more line breaks in their code, they would get more lines of code. So you ended up with this odd gaming of the metric. So yeah, that really seems like a throwback to a practice that I'm pretty sure everybody has discarded by now.

I'm imagining the Twitter engineers changing the font to 16-point, double-spacing it. Like a high school term paper. Yeah, add a dozen subroutines.

I'm struck by that in part because Musk, of course, actually coded at times and created a video game when he was young, and in theory knows a thing or two, or in practice does. So I'm astonished, because absolutely, there's a legacy, certainly in video games but in other programs as well, where for decades you really had to optimize and shrink the code as much as possible, because you're pushing the computer to the very limits of what it's going to be able to do.
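The line-break point is easy to demonstrate. Here is a toy version of a naive SLOC counter and two formattings of the same one-line program; this is illustrative only, not any company's actual metric:

```python
def sloc(source):
    """Naive 'source lines of code': count non-blank, non-comment lines."""
    return sum(
        1 for line in source.splitlines()
        if line.strip() and not line.strip().startswith("#")
    )

dense = "total = sum(x * x for x in range(10))"
padded = (
    "total = sum(\n"
    "    x * x\n"
    "    for x\n"
    "    in range(10)\n"
    ")"
)
print(sloc(dense), sloc(padded))  # 1 vs. 5: same program, five times the 'output'
```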
Well, I think we are at question time, which feels crazy to me. If you have a question, we've got a microphone at the back. Since this is also online, if you could go there so people online can hear you, that would be wonderful. Thank you so much. And people online, if you have a question, we are monitoring the Zoom, so you can submit it and your questions will be asked.

There's some story, I'm forgetting which engineer it was, maybe it was Linus Torvalds, who deleted 40,000 lines of code and replaced them with 200 much cleaner, clearer lines of code, and wrote down, you know, contribution: deleted 40,000 lines of code. So the question I had was first directed at something Will was talking about with the Mars probe, but I think it actually ties over. One of the things everybody should understand about software is that there's not a single programmer who writes perfect code that is correct as it comes off their fingertips, not a single programmer, right? And a lot of what software work is about is: how do you test the software and ensure that it's correct? And that typically means not just running it on the one sample file you came up with when you first thought about it, but asking, how do you get realistic data? And certainly these days, anybody thinking, oh, we're going to take two pieces of software that were developed independently and put them together and not test that combination, that would never fly, to my mind. You always do integration tests. But this actual problem of testing software that might work well and do what it was intended to do independently, and then testing the combination, is actually a much bigger problem. When you talk about the awesome button, one of the problems is that when they thought about the system, they were just thinking about the software system; they weren't thinking about the system of the people, and the impact it would have on people when you gave them the like button, right? One of the things about Spacewar: a lot of game companies talk about dogfooding software, which is where the developers use the software themselves as soon as it's possible to use the new version. The fact that those people were playing Spacewar incessantly probably really helped them get all the bugs out. But so, particularly for Charles, getting back to the question: what was the status of NASA doing integration testing? Were they really not doing any integration testing at that point in their software journey?

Yeah, so I'm not going to defend them, because, I mean, obviously they have plenty to answer for. At least from what I saw, they did not seem to have that sort of testing. I think that was part of the recommendations from the task force, that they needed to do that sort of testing; my recollection is that that's what came out of their post-mortem task force. And this was obviously some years ago, at a time when that sort of testing activity wasn't quite as valued. You know, when we were talking about the Elon Musk code thing, one of the things I was surprised by is that he didn't ask how many tests people had written, because that is probably more indicative of how good the code is than what he did ask. So yeah, I think you're exactly right that that really is something where they dropped the ball.
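To pin down the questioner's distinction, here is a minimal sketch of an integration test: two hypothetical stand-in modules, each internally consistent, plus a round-trip check that only fails when they are exercised together. None of this is the actual NASA or Lockheed Martin code:

```python
LBF_TO_N = 4.448  # newtons per pound-force

def producer_report(impulse_lbf):
    """Stand-in for contractor A: reports a thruster impulse in pound-force."""
    return {"impulse": impulse_lbf}   # passes its own unit tests

def consumer_read(report):
    """Stand-in for contractor B: reads the value, assuming newton-seconds."""
    return report["impulse"]          # also passes its own unit tests

def test_units_round_trip():
    """Integration test: push one known physical value through both modules."""
    result = consumer_read(producer_report(1.0))  # one lbf-s of real impulse
    assert abs(result - LBF_TO_N) < 1e-3, \
        "producer and consumer disagree about units"
```

Each module is correct against its own spec; the combined test is the only place the disagreement can surface.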
And for Will: was there ever any thought, maybe now there is, but at the time at Facebook, were people thinking about how is this going to change the users? How is this going to change user behavior?

There was, to some degree. So again, there was that concern that by offering just one button to express approval, people would be lazy and hit that button and then not actually have human exchanges. Interestingly, it was testing that saved the idea of the like button. One of their data scientists put out a test where they had some portion of users with access to the like button, and this is extremely common practice now. They looked at the results, and they found that actually the number of comments did not go down when people were given access to the like button. In fact, engagement went up. And when they brought that to Mark Zuckerberg, he said, all right, let's go for it.

That's another thing a lot of companies do now. They do something called A/B testing, where some percentage of their users are randomly selected to get a proposed new version of the software, and then they collect data on how that does.

That's right. And Facebook is really aggressive about that. At least when I visited their offices a few years back, they had a system where you could first deploy, with very little permission, very little bureaucracy in the way: you could deploy a test to an infinitesimal fraction of users. And when you have, whatever it is, a billion, two billion users, you can still get several thousand people. And if you break the site for several thousand people, it's not the end of the world, right? So there was that level. And then if it passed certain benchmarks at that level, you could do a 1 percent test, and then a 10 percent test, and then they would roll it out in Ireland. So it was a whole cascading process. Of course, A/B testing has raised some ethical issues as well, right? Because of the sort of experimentation without consent. Absolutely. Yeah.
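The cascading rollout described here is commonly implemented by hashing a stable user ID into a bucket, so a user stays in the same arm as the fraction grows from a sliver toward 100 percent. A generic sketch; the thresholds and helper names are illustrative, not Facebook's actual system:

```python
import hashlib

def rollout_bucket(user_id: str) -> float:
    """Map a user ID deterministically to a number in [0, 1)."""
    digest = hashlib.sha256(user_id.encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def in_experiment(user_id: str, fraction: float) -> bool:
    """True if this user falls inside the current rollout fraction."""
    return rollout_bucket(user_id) < fraction

# Staged rollout: whoever is enrolled at 0.01% stays enrolled at 1%, 10%, ...
for fraction in (0.0001, 0.01, 0.10, 1.0):
    print(fraction, in_experiment("user-42", fraction))
```

Because the bucket is deterministic, early cohorts remain enrolled as the rollout widens, which keeps measurements comparable across stages.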
Thanks. Hey, everyone. This is more of a question than a comment. But Torie, what is the most "someone just needed to get to lunch" story from the book?

Oh, I mean, it probably has to do with the pop-up ad, which is the most hands-on, I-was-the-person-doing-this, I-just-needed-to-get-my-job-done story. You know, we needed to find a way to present more ads, because at the time that was the only way to really monetize on the internet. Now it's still one of the best ways, but then it really was the only way. How can we show even more ads to people? I know, let's force them to open up a new window. And yeah, that was a just-try-to-get-it-done-and-deal-with-it-later kind of situation.

Josh Levin: Hello. Thank you guys, this was great. Will, so you would have us believe that Facebook was just this naive company, that they really only had three product managers? The fact that this thing would completely rocket the company to the moon and have everybody addicted to it, that wasn't something that was on their minds?

That was absolutely Mark Zuckerberg's goal. From everybody I talked to, he absolutely had world domination in mind from the very beginning. He idolized Bill Gates growing up, which is funny, because Bill Gates was sort of seen as the villain back then, Steve Jobs was the hero, but Zuck identified with Gates. So yeah, that was always Zuckerberg's goal. But no, for the designers and engineers who were working on it, I think they really did believe. I think they really were naive. Maybe I'm just buying a line from them, but they really thought that connecting people was a good in itself. There's a part in my chapter where I talk about how at Google and Facebook, the idea was that you can do good and make money at the same time, there's no conflict, it's all win-win. And I think some of them really believed it.

I just had one other thing, which is that you mentioned that they considered upvoting and downvoting. And I remember, from around that era, you had Digg and Reddit, and that was huge. I would have thought back then that that would have been the thing that would come to dominate the way the like button has. So did anybody tell you, or have you thought about, why upvoting and downvoting, which seemed like it would become the dominant thing, has now kind of receded?

Yeah, and it goes back to the same theme. The people I talked to who worked on it said it was about positivity. They wanted Facebook to be a happy place where people were encouraging each other. Some of them, again, probably idealistically and naively; in Zuckerberg's case, I think, because he knew that that was what would keep people coming back. If you get positive validation when you post, you're going to post more. If you get negative validation when you post, you're going to be deterred, or you're going to feel bad, you're going to feel angry. And so it was absolutely intentional that they only had a thumbs up and not a thumbs down. If we think through some of the ramifications of that over the years, it means that if half of people like a post and half of people hate it, the people who hate it have no way to register that. It's just the people who like it. And so very divisive posts get amplified, without the chance of other people saying, no, don't amplify that.

So I think this is actually a really interesting point that comes out of the book you've edited, which is that in the history of computing and the history of this field, there's been this push-pull tug of war: is it about hardware? Is it about software? And now we need to tell integrative hardware-software stories. What you're now doing is pushing the way we think and write and talk about this into saying, well, yes, hardware and software, but also the human behavioral engineering that's happening, whether very explicitly or tacitly.

Yeah. And that's one of the big goals of the book: to help people realize that technology is not necessarily inevitable, right? That every bit of technology is the result of a human decision. One more question.

Thanks. I'm Bobby, and I'm with the Mozilla Foundation, actually, so I was really excited when Mozilla came up. Yeah, open source. But my question is: I'm part of this program called Trustworthy AI. So when I encountered the book, I really was thinking about transparency, right? And I was really curious: what is meaningful transparency and actionable transparency for you, in the context of the different case studies that you explore? And how do you see transparency, and transparency to whom, as part of the work? Is it users? Is it policymakers? Is it advocacy groups? I'm curious about that notion of transparency.

Well, one thing that comes to mind for me is just that we tried to have a line of code to represent every chapter. And for some of these, it's actually hard to get, right?
A lot of these companies don't want to release their code, for very obvious reasons. I mean, it was a bit of a coup that we were able to get the Roomba code that we used for that chapter, right? And that lack of transparency makes it difficult for everybody to take a look under the hood and understand how these things operate, which, as you know better than anyone, has all sorts of implications for things like equality. If we can't see what's in the algorithm that determines how long someone is sentenced to prison, we don't know what the factors are. And those factors come from the existing structure. As Charlton McIlwain's chapter on the police beat algorithm chronicles, this is a problem that goes back to the 1960s with law enforcement and technology. So my first response is that we just don't know what we don't know, with this lack of transparency out there. But I'm curious what all of you think, too.

Yeah, you were talking about how we had, at one time, this era in which people would just share code with each other and would kind of know how everything worked. And that's obviously changed a lot, given the commercial stakes and intellectual property concerns that have come up. Of course, there are limits. I thought the Heartbleed story was actually really interesting in that respect, because that dealt with a bug that was in open-source software. But just because there wasn't the architecture for maintaining the code, for a kind of code review, there ended up being this very serious bug that implicated just about every web server out there. So code transparency is part of it, but then also building the institutions that make sure that the code is working the way you expect it to, and is being taken care of, I think that's another aspect of what you're talking about.

I think we have time for one more question.

Hi. So, Will, you mentioned how the thinking behind not wanting to have the like button was that it might cannibalize other kinds of engagement. I'm wondering if you, or anyone else on the panel, knows or has a guess about the thinking behind, was it Instagram? I think they briefly got rid of the counter. You could still like a post or whatever, but it wouldn't tell you how many people had liked it, and then it seemed like they went back and changed it, and people sort of didn't like that. Do you know, or have a sense of, what the thinking was behind getting rid of the counter? Was it also guided by thinking about engagement, or was it more the moral or ethical reasons, like, you know, that's not the important part, it's not about the number, it's about the engagement? Do you have a sense of any of that decision-making?

Yeah, I actually don't know the latest status of it; I haven't kept up with it. But the guy who was put in charge of Instagram after the founders were forced out, Adam Mosseri, is a guy I know, and in my experience he's pretty thoughtful about the social impacts of technology.
And he was attuned to some of these debates that started to surface between maybe 2017 and 2020 about the pernicious impacts of gamifying social media, and in particular on Instagram, where young people are using their Instagram posts as their public-facing identity and are able to compare how many likes their selfie got with their friend's selfie. And that can be a hard and stressful thing. So, you know, I don't want to paint them as too idealistic. I think they saw a way to address that issue without losing any of the data that they have. They still count, right? They can still tell how many likes a post gets on Instagram, because they use that to decide how many other people to show it to with the algorithm. And the person who posts can still see their own numbers; they just can't see other people's. So again, I don't know the status of that experiment.

The one other thing I'll say, and then I'll shut up for a second: there's a guy I've profiled who's an artist and professor in Illinois named Ben Grosser. And he did this experiment called the Demetricator. It's a browser plugin that you can use; he has one for Facebook, one for Instagram, one for Twitter, and it takes all the numbers away. He was showing what that feels like. And I used a Demetricated version of Twitter and Facebook for a while, while I was writing about him. And the best analogy I could come up with is that it was like watching a '60s or '70s sitcom with the laugh track taken out. You didn't know what was supposed to be funny, exactly, and it was disorienting and confusing. Which is not to say it's a bad idea in the long run, but I can see how the gamification of social media is part of what keeps us coming back, and it's how we've all learned to engage with it. So it would be a significant change.

It's funny you talk about gamification, right? So the video game arena, of course, was built initially, for years, for decades, all around leaderboards, around scoring. Of course, there are so many games now, all kinds of different kinds of games today. So there's certainly a way for people to find a game they love where you're exploring a universe or, you know, maybe fighting and killing things, but you're not scoring points per se; there's not really a leaderboard in the same way. So yeah, it'll be interesting to see if eventually social media can evolve into, I don't know, more of an open-world terrain.

I guess the other thing is also that, in conjunction with all the metrics and the numbers that you see on social media, there's a whole industry built around trying to manipulate those numbers, right? And some of it is somewhat less legitimate than we would like. And so I imagine that a lot of these efforts are the platforms trying to deal with that problem of people trying to inflate their numbers, or hiring people, or buying likes and such.

Well, I could keep talking about this for a really long time, but I think we have to leave it there. Please do stick around, have a drink, keep speaking with our wonderful panelists. Thank you all so much for coming. Thank you so much to everyone at New America, ASU, and Slate for supporting the book and Future Tense. And thank you all for your contributions to the book and your contributions tonight. If you're here, please consider purchasing the book, or if you're online, please consider purchasing it. And congrats, Torie.
Congrats, Torie, yeah.