So let me invite our first two panelists to come up now. Jennifer Golbeck is an associate professor in the College of Information Studies at the University of Maryland, College Park. She also directs UMD's Human-Computer Interaction Lab, come on up and sit down, guys, and studies how people use social media and ways to improve their interactions. Ian Bogost is the Ivan Allen College Distinguished Chair in Media Studies and Professor of Interactive Computing at the Georgia Institute of Technology, a founding partner at Persuasive Games, LLC, and a contributing editor at The Atlantic. So welcome, both of you. All right, so our topic is: what should we know about algorithms? What should we know about algorithms, Jen?

Oh, you know, I talk to people a lot about algorithms. What I work on as a computer scientist is building algorithms that can take the digital traces you leave behind, whether it's from your Fitbit or, especially, social media, and use them to find out secret things about you that you haven't volunteered to share, because all kinds of things about you come through in those patterns of behavior, especially when you take them in the context of hundreds of thousands or millions of other people. So when I go talk about this, the thing I tell people is that I'm not worried about algorithms taking over humanity, because they kind of suck at a lot of things, right? They're really not that good at a lot of what they do, but there are things they're good at. The example I like to give is Amazon's recommender system. You all run into this on Netflix or Amazon, where they recommend stuff to you. Those algorithms are actually very similar to a lot of the sophisticated artificial intelligence we see now; it's the same underneath. And if you think about it, most of the time the results are completely unsurprising.
You bought the Stephen King novel; here's ten other Stephen King novels. Sometimes they're totally wrong, and you're just like, why would you ever think to recommend that to me? And then sometimes we get this sort of serendipity that you mentioned, these great answers. My favorite example is that I had bought the Zombie Survival Guide, which is exactly what the title suggests, an outdoor survival guide, but for zombies. And yeah, I read it very quickly. The next day I go back, and Amazon is like, oh, since you bought the Zombie Survival Guide, you might also like... and it has other books by the same author, World War Z, which was made into a Brad Pitt movie you maybe saw, some other zombie books, a couple zombie movies, and then this camping axe with a pull-out 13-inch knife in the handle. And I was like, that's exactly what I need, right? The book was telling me this. And then I was like, okay, it's probably not something I need, but I bought it anyway. It was just such a great example: I never would have gone looking for it, but it was such a cool thing to recommend.

So I think the thing to know about algorithms is that that's generally what they do. They usually tell us stuff that's not super surprising, or that we kind of could have figured out on our own, but sometimes they give us great insights, and sometimes they're wrong. And just like you don't watch, in order, everything that Netflix recommends, or buy, in order, everything that Amazon suggests, the thing we really need to keep in mind with a lot of algorithms today is that they're gonna tell us stuff, but we absolutely have to have intelligent humans taking that as one piece of input to make decisions, not just handing control over to the algorithms and letting them make decisions on their own, because they're gonna be wrong a lot of the time, or they're not gonna do things as well as a human would.
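The "customers who bought X also bought Y" behavior Jen describes is often built on something as plain as item co-occurrence counting. A minimal sketch in Python, with made-up purchase histories standing in for real data (actual recommender systems layer much more on top of this):

```python
from collections import Counter

# Hypothetical purchase histories: user -> set of items bought.
purchases = {
    "alice": {"zombie_survival_guide", "world_war_z", "camping_axe"},
    "bob":   {"zombie_survival_guide", "world_war_z"},
    "carol": {"zombie_survival_guide", "camping_axe", "stephen_king_it"},
    "dave":  {"stephen_king_it", "stephen_king_shining"},
}

def recommend(item, purchases, n=3):
    """Rank items by how often they appear in baskets alongside `item`."""
    co_counts = Counter()
    for basket in purchases.values():
        if item in basket:
            for other in basket - {item}:
                co_counts[other] += 1
    return [other for other, _ in co_counts.most_common(n)]

# The axe and World War Z each co-occur twice with the guide, so they rank
# ahead of items that co-occur only once; order between the ties can vary.
print(recommend("zombie_survival_guide", purchases))
```

The unsurprising recommendations and the serendipitous axe both fall out of the same counting: items frequently bought together float to the top, whether or not a human would have thought to pair them.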
Ian, what do you think?

Yeah, I've become really interested in the rhetorical register of this word, algorithm, how we use it. I did this piece for The Atlantic earlier this year called The Cathedral of Computation, in which I said, anytime you see the word algorithm, especially in print and the media, try replacing it with God and ask if the sentence still works. It usually does. There's this anxiety we have: oh, Google has tweaked its algorithm; what are the algorithms doing to us? How are they making decisions on our behalf? In what way are we pledging fealty to these algorithms? So there's a techno-theocratic register to the concept of the algorithm, and there's this mystical notion about it too. I think one of the reasons we love algorithm instead of computation or software, because software is really what we're talking about, is that when we say algorithm, we invest fairly ordinary experiences and services with this kind of orientalist mysticism. And this idea of the poetry of computation is interesting, because I think it helps us get under the skin of the rhetorical nature of the word algorithm, and not just the word, but how we use it. When you think about that idea, it should kind of terrify you: okay, we're gonna run our lives, our airplanes and our automobiles and our businesses, on poetry, on these poetic moments? Not because we distrust poetry, or because poetry isn't good at what it does, but because what poetry does explicitly is defamiliarize language, take ordinary speech and show us something about that speech, reconfigure the words we normally use in a different way. And this aesthetics of the algorithm that's common in computer science, of elegance, of simplicity, of tightness, of order, of structure, of rationalism, all of those features are fantasies to some extent. These are messy, disastrously complicated computational and non-computational systems. Amazon has a logistics operation and warehousing and all these factory workers and warehouse workers it's abusing, and so forth. And all of that stuff we'd like to cover over. When we're able to simplify it, to point to this mystical, god-like figure and say, oh, the algorithm is in charge, then we feel better about that gesture. So maybe one way of thinking about algorithm is as a kind of synecdoche, that rhetorical trope where you take a part and use it to refer to the whole, the way we talk about Washington instead of the federal government. When we do that, we black-box all this other stuff, and we pretend we can point to Washington and that that sufficiently describes the way the federal government does or does not function, which of course it doesn't; it allows us to simplify the abstract. So yeah, the technical aspects of algorithms have become, culturally speaking, much less interesting to me than the rhetorical functions, how we see this term and this concept weaving its way into our perceptions, into the media, into ordinary people's conceptions of the things they do: my Fitbit knows something about me, so I'm gonna use it. I think those are somewhat underserved perspectives. And as we engage with these systems more, they become more and more important for everyday individuals, no longer just technical experts or somebody designing or flying an airplane. We're all dependent on algorithms in many ways now, in many new ways we weren't even, say, ten years ago.
I'm really interested in this notion of defamiliarization that you both brought up in different ways. In part this is about black-boxing things and abstracting things; in part it's also about, well, the unintended consequences, you might say. You were talking about the digital traces we leave online, which is a topic of great interest for me as well. One thing I think about is all the copies of ourselves, or the versions of ourselves, that are created, the profiles that are aggregated by different companies and then potentially sold, and how little access we have to them. Is that something you think about as well, Jen? Is the conversation moving forward about that? Are people learning how to read these digital versions of ourselves more effectively, or is this a morass we're just beginning to work through?

Yeah, I want to say that we're getting more sophisticated about it, but if you actually look at it, I'm not sure that we are. There are so many facets to this, but a couple I think are interesting. One, I like to start with that Netflix and Amazon example because it's a way we're all interacting with this technology. If we talk about it in the abstract, it sounds like these terrifying black boxes that are maybe so much smarter than us that we don't even know how to handle them, except we totally do, because we use them on Amazon and Netflix all the time, right? And that's exactly the same thing as the scary AI that Stephen Hawking says is gonna ruin humanity. We actually know really well how to deal with it when it's presented in that way.
On the other hand, if we look at the kind of virtual versions of ourselves, I think we can look at our own virtual versions and understand and process those. When I talk about the kind of algorithms I make, I get a lot of pushback: well, the version of myself that I have on Twitter, that's a really professional version, that's not how I actually am, so maybe it's not gonna find the right things out about me. And maybe that's true, and maybe not, sometimes, depending.

Are you saying that the axe did not feature prominently in your Twitter persona?

Actually, you probably could totally find the axe looking at my Twitter persona. I talk a lot about zombies online. But we can say that for ourselves, right? Then look at how we treat other people online, these digital versions, especially when people get themselves in trouble. The one bad thing somebody does online becomes the entirety of that person as we view them. Algorithms can see beyond that, but we as humans often can't: this person put out a tweet that seems racist, and then she starts getting death threats and gets fired from her job, and all of these bad things happen, because the one bad thing you did gets shared widely, and the record of it becomes the representation of you as a person to the internet. So we have all these digital traces, but it's really hard for us as humans to process them. Here's one more example. We're doing a project now looking at people on Twitter who have admitted they got a DUI, looking at what sorts of things they say and whether you can check if they're changing their ways. I had my students presenting this week: here are the people we found who said they had DUIs, and here's this guy who got a DUI, and then the student was like, actually, he seems like a really good guy.
Here's the stuff with the baseball team he volunteers for, and here's the things with his kids. And I was like, oh, we have to be morally ambiguous. We can't just hate him because he got a DUI and admitted it; there's all this other good stuff. We're so used to seeing these digital traces and making our own inferences, like, because this is there, that's a bad person, or that's a good person. Actually, we're all very complicated people; we all do bad things and good things, but we're not great at judging it when we have a full record of people. And I think that's a problem that comes with all this: we don't forget, things don't fade, everything is there, and we have a hard time dealing with that. Algorithms can deal with it a little bit better, or we can program them to, but as humans we have a hard time handling it.

We also take computers to have access to truth in a way that we don't take poetry to, for example. To come back to this poetry business: if the purpose of poetry is to defamiliarize words, then the purpose of algorithms is to defamiliarize computers. They show us how computers work, and they don't work, kind of, or they work badly, or they work in this very wonky, strange way. You see it when you go to Amazon: you ordered some button-cell batteries because you needed two of them, and then it's like, oh, perhaps you'd like these other button-cell batteries. And, you know, I see what you're doing, I see the caricature you've built of me, and ha ha, that's interesting. But then we flip that on its head and we're like, oh, actually this is truth: Amazon knows something about me, Google knows something about me, that's true, and therefore I can know something about you by seeing the way that Twitter or Facebook or whatever is representing you to me. We tend not to do that with poetry. If someone pulled out your book of high school poems,
well, yeah, that's a caricature of you at a particular moment; we'd laugh at it and then put it aside, and understand that you as an individual are more than that set of words, right?

So if I can give you a quick example on that: my dissertation work was on computing trust between people online. If we didn't know each other, could I guess how much you'd trust me? I was presenting this, and this is like 2004, 2005, so early in the social media space, and I was giving this talk: we tell if our algorithms are good because you'll say how much you trust me, then I'll compute it, and I'll compare what the algorithm said to what you did. And I would get these answers from these older computer scientists who were like, well, if the algorithm says that on a scale of one to ten you should trust me at a three, but you said a seven, maybe you're wrong. The algorithm says a three, so that's probably right, as opposed to all of our personal history of interactions letting you make this very human judgment. It's super interesting to think, well, what does the computer think about me, but not so interesting to think, I absolutely trust the computer to make decisions about me.

Yeah, I think that battle of trust is really interesting. The space of human agency, the space of shared agency where we're sort of collaborating with computational systems, and the space where we just trust the computer to do something, those are all moving around in really interesting ways. For example, I now find myself frequently blindly obeying the instructions of Google directions about which way I should drive home, and then sometimes questioning my pathetic slavishness to this system that obviously doesn't get it right all the time, and then pausing, because of who I am, wondering to what extent I'm just a guinea
pig for them, that they're continuing to test whether this isn't actually the fastest route, that I'm in test group B to see whether that road is a good road.

So this poses a question, I think, that also comes out of The Cathedral of Computation, Ian: that we need to learn how to see these systems. Seeing is one metaphor; I also tend to think of it in terms of literacy, learning how to read these systems. So how do we begin to read the cultural face of computation?

Yeah, it's a great question, and an important problem. The common answer, let's start there, is this sort of everyone-learns-to-code nonsense that's been making the rounds. And it's not, I mean, I call it nonsense just to set the stage, but it's not a bad idea. Why not? It seems reasonable to be exposed to how computers work; you learn some music, you learn some computing, great. But really the reason to do that is not so that you can become a programmer, but so you can see how broken computers really are. You put your hands on these monstrosities, and just like anything, they don't work the way you expect: there's this library that's out of date, and some random person was updating it but now they're not anymore, and it interfaces with this system whose API, who knows how it works anymore. Once you see the catastrophic messiness of actual working computer systems, it's not that you trust them less, that now we can unseat their revolt against humanity, nothing like that, but rather it brings them down to earth again. But in addition to that, the way we talk about these systems, and the fact that we talk about them, that we talk about them more, is also important. That moment with Amazon is a moment of literacy, a moment of you as an ordinary
person recognizing, okay, I see the way that Amazon is thinking that it has knowledge, and then working with that, thinking about it, talking about it. That kind of literacy is just as important, maybe even more important, because it's right there on the surface and we can read it. And then I think there's a third kind of literacy that's important to culture, which is the way we discuss these subjects in the media. It really does matter. The more we present the algorithm as this kind of god when we write about it, especially for a general audience, and the more we don't do our jobs of explaining what's really going on, how a particular subsystem of a computational element of a very, very large organization, with all sorts of things happening, actually behaves, the more we do a disservice to the public in that respect.

You know, I agree with everything you said, and I think this literacy of just being able to understand what we know and what we don't is so critical, because when I talk about this artificial intelligence that I do, it's completely unsatisfying, whether I'm writing about it or talking to people, to say: what we did is we took all this data, we put it in this black box, and we basically have no idea what goes on in there. It spits out the right answer, and we know it will do that in predictable ways, but we can't tell you what it's doing on the inside. We've spent a couple decades researching that, and we can't.
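The black-box situation Jen describes can be made concrete with a toy: a model we judge only by whether it predicts held-out people correctly, not by reading its internals. A minimal sketch in Python, with entirely synthetic data, a made-up hidden rule, and a bare-bones perceptron standing in for the real models (not what her lab actually uses):

```python
import random

random.seed(0)

# Synthetic "digital traces": 20 binary features per person, plus a hidden,
# made-up rule that plays the role of the pattern the model must rediscover.
def make_person():
    x = [random.randint(0, 1) for _ in range(20)]
    label = 1 if x[3] + x[7] + x[12] >= 2 else 0  # the "secret" in the data
    return x, label

people = [make_person() for _ in range(400)]
train, held_out = people[:300], people[300:]

# A bare-bones perceptron: our stand-in for the black box.
w, b = [0.0] * 20, 0.0

def predict(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

for _ in range(50):  # epochs of the classic perceptron update rule
    for x, y in train:
        err = y - predict(x)
        for i in range(20):
            w[i] += 0.1 * err * x[i]
        b += 0.1 * err

# The only honest evaluation: does it predict people it has never seen?
accuracy = sum(predict(x) == y for x, y in held_out) / len(held_out)
print(accuracy)  # high, even though w is just 20 numbers, not an explanation
```

The point of the toy is the evaluation step: we can verify the box behaves predictably on new data, while the weights themselves tell no human-readable story about why.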
That's a completely unexciting article. So what we do is we say, we put your stuff in this black box and it spit out this answer, and look, here's some stuff we computed along the way that sounds like insight, so you feel like you're getting a story. The example I use most: we take your Facebook likes and put them in this black box, and it can predict how smart you are. That's not too satisfying, so we say, yeah, and if you look at it, here are the things you like that are indicative of high intelligence: liking science, and thunderstorms, and curly fries. And everyone goes, curly fries? Then when I talk about it, especially with market researchers, people get really angry: how can you know that's gonna be true, and it's gonna change. And I'm like, I'm just telling you that for the story. We don't use that, we don't care about it, it's not part of the computational picture, but it allows us to tell a story that makes it feel like there's something human going on in there. And that is a struggle for me, because you wanna tell this story, here's what these algorithms do, and it's unpredictable and crazy, but you can't tell a story with just: black box spits out answer.

Yeah, we can reframe that story. I don't know that this is the best example, but what you're illustrating is a kind of derivative of the information, and I don't know that that's the way to talk to the everyperson about it. But it doesn't have to be reframed as computation; there are other touch points we have in the world. You know how there's infrastructure, there are all these highways, and you didn't build them, but they were here before you? There are certain computational systems that are there before us, and we come to them and we actually have no idea how they work, literally no idea. So the work of explaining how computational systems work, explanation that doesn't rely on this appeal to
mysticism, I think is super important.

Yeah, I think this question of storytelling is really important, excuse me, not only because this is all an elaborate ploy for me to do research on my book project about algorithms, but also because humans are storytelling animals, and storytelling is essentially a process of exclusion, right? It's selecting the telling example that may or may not represent the broader history, but you have to find the examples in order to tell a story, because humans aren't going to sit down and read the phone book; we're not going to sit down and read the database. And so my question is, how do we grapple with storytelling? Is storytelling a fundamentally different way of knowing than what we might think of as computational knowledge? Because computational approaches are a process of inclusion, right? We want to include as much data as possible, to make the data set as rich as possible, so the solution will be more complete. Is that a totally alien way of knowing? Are there ways to bridge that divide?

I mean, it's so hard, right? For the computers, you absolutely want to give them everything. And when you're talking about what the computers do, generally, when you're working with this huge amount of data, which is the exciting thing now, you're ending up not with logical insights but with statistical insights. Any human can look at the connections that are formed and go, that doesn't make any sense to me, except that it tends to work most of the time. And so we want to tell a story that says, here are some statistical insights, and let me tell you a few, but that doesn't really give a picture. It's hard to give a picture of how statistics work, of how little patterns emerge as important from this big mass of data. It's a story I try to tell all the time, but people, I have found, latch on to the specific examples and have a hard time grasping the bigger thing. And I think, in terms of computer literacy, that is so much more important than being able to program. Programming is great, and you will see what a mess it is, but being able to grasp that this is a statistical insight, and that the individual example doesn't matter, that's the thing I would like us to be able to do better.

Yeah, computers are more like marionettes, or like table saws or something, than they are like stories. They're machines that produce things: you design the machine such that you can then design things for the machine. You have your table saw and you make a bunch of jigs so you can get the right cut; or you build this puppet, and then you have to manipulate it in this perverse way that you can't really even explain, in order that it produces an effect that appears to give life to the creature. It's a different way of thinking, in the sense that whether it's a story, or an outcome, or a business result, or whatever it is that the particular computational system is doing, it's not doing it deliberately, and it's not doing it in a singular way. It's a system that's been designed to produce many similar kinds of outcomes, and that's kind of a weird way of thinking about behaving in the world, especially since we ordinarily think and talk in specifics, in stories, in examples, in individuals. That's also still how we write about everything, including computation. You see this when you look at computational arts, the aesthetics of computation: if you look at Twitter bots or generative text or any kind of generative art, the results are terrible when compared with handcrafted storytelling or humor on Twitter or what have you. What's remarkable about them is not their individual utterances or individual effects, but that there is some system producing many of them, and when you look at it holistically you can
appreciate it in a different way, kind of getting at that aesthetic sensibility. I mean, we talk about ethics a lot when it comes to industry and to computing; we don't talk about aesthetics enough. One other way into this literacy problem is through aesthetics: understanding how computers produce results on the artistic register. Even if we kind of hate those results, we can recognize them as art and say, actually, something just like that is happening inside of Facebook or inside of Google.

I think that notion of aesthetics is really important, because it's one of the ways we can confront very inhuman or very alien ideas and systems and methodologies without necessarily having the language to articulate what it is. Aesthetics can be a nonverbal way of engaging with these questions. So I think there's a connection between aesthetics and what you referred to as illusion before as well. And my question for you both now is: are the illusions necessary? We could talk about it as a kind of faith, and maybe it's a bankrupt faith or a misplaced faith, but is that something we have to have? Is that the only way humans are going to interact with these systems?
No, it's a starting point. It's the thing you do when you don't have better options, and then you realize, oh, this is insufficient, but it was a good starting point, and you recognize the intrinsic flaws of the illusion and seek more knowledge and deeper understanding, and then you realize, oh, this has sort of been demystified now. And you can do this historically; maybe that's one concrete thing we can do: go back and unpack any historical computing system, and see the bizarre reasons why it was constructed the way it was, what it did, how it influenced later systems, and then you go, oh, okay, this is just like anything else.

The Atari, for example.

The Atari, for example, yeah. I've written a book on the Atari that tries to do exactly this. So computing history has a role to play here. And as a very quick aside on that matter, computer science as a discipline is one of the most ahistorical I know of, just completely uninterested in history, because it's barreling forward, making that last algorithm slightly more efficient, or making it do something slightly different.

Yeah, I mean, your marionette example, I've never heard that example before, but I think it's so spot-on, and it gets at all of these issues we're talking about. Because if you're watching this marionette perform, that's one thing you can see; and then if we try to explain it, oh, if I pull this string, this thing happens, we can have all of these debates about why does that thing happen, and why isn't it this thing, and can't you do it this other way, but that's different from the thing being produced for you to look at. And which of those conversations do we wanna have? Maybe both, but they're two really different conversations. And I think that's part of the struggle: as a computer scientist, I always wanna talk about both. Look at this amazing thing that you can see that it's
doing, and then also, here are all these crazy things that make that work. But they're really two different stories, and I find it's hard to say, here, you pull this string and this happens; people say, but how do you get this big, complex thing at the end? It's just too complicated to tell it all the way through.

Yeah, there's a lot of strings.

Yeah, and there's this sort of unanswerable question about whether it's really a marionette unless you're seeing that complexity at the end, and that's the thing you focus on, which I think is about aesthetics and notions of performance, of when an algorithm or a system becomes a cultural thing. So we just have a couple of minutes left. To sum up, what are a couple of practical things you would suggest if somebody wants to actually understand algorithmic systems better?

Oh gosh, that's so hard. So, coming back to a point you raised before, about algorithms as poetry, or algorithms as beautiful things: I've absolutely had that thought, right? I've looked at algorithms and gone, whoever wrote this had an insight into the problem that I didn't have. You can learn about algorithms without having to learn all of computer science. If someone wanted to do that, someone who says, I don't really know anything about computer science, I just want to start getting in to see what it is, you might start with some basic tutorials on Turing machines. You mentioned Alan Turing at the beginning, and he put forward this fundamental notion of all of computer science: you can have a piece of paper and, basically, a pencil that can write a one or erase a one, and that can represent all computers everywhere. You spend a lot of time as an undergraduate doing that, and it can get very complicated, but it is an accessible concept. And I think if you spend a couple hours playing around with that and seeing how you can do actually sophisticated math
and all kinds of interesting things with this really simple machine, it starts to give you an insight into the process we use to develop these much more sophisticated algorithms. It won't help you figure out all of the strings, and the training you need to manipulate those strings in the right way to get the picture, but it starts to help you see: okay, these algorithms, it's not this mythical thing, it's a bunch of people who were beating on a really hard problem and kind of manipulated it into doing the thing. So I think it's a starting place for learning how the algorithms work. It won't get you into all the complex algorithms, but it gets you into the space of thinking about them in the right way.

Yeah, computing history, I think, is what we're both pointing at. If we're living in this deeply computational age, where computers are inside of and running so much of our lives, maybe we should know where they came from.

Thank you both so much, that was great.

Thank you.
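The paper-and-pencil machine Jen sketches, a head that writes or erases marks on a tape, is something you can play with directly. Here is a minimal Turing machine simulator in Python; the example program (a unary incrementer that appends a 1 to a run of 1s) is just an illustrative toy:

```python
def run_turing_machine(rules, tape, state="start", pos=0, max_steps=1000):
    """Simulate a one-tape Turing machine.

    rules maps (state, symbol) -> (write_symbol, move, next_state),
    where move is -1 (left) or +1 (right); the state "halt" stops the run.
    """
    tape = dict(enumerate(tape))  # sparse tape; unwritten cells read as "0"
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(pos, "0")
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += move
    cells = [tape.get(i, "0") for i in range(min(tape), max(tape) + 1)]
    return "".join(cells)

# A tiny program: scan right past the 1s, write one more 1, then halt.
increment = {
    ("start", "1"): ("1", +1, "start"),  # keep moving right over the 1s
    ("start", "0"): ("1", +1, "halt"),   # first blank: write a 1 and stop
}

print(run_turing_machine(increment, "111"))  # "1111": three ones become four
```

A couple of hours spent adding rules to a machine like this, say for unary addition or copying a string, is exactly the kind of hands-on exposure that makes "the algorithm" feel less like a mythical thing and more like people beating on a hard problem.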