So first off, if you have any trouble seeing my slides or hearing my voice, or if you just like spoilers, you can get a PDF with both my slides and my script at tinyurl.com. I'll leave that URL up for a minute. I also wanted to mention that I'm going to keep my hat and my sunglasses on for this talk, because my eyes are very light-sensitive and I'm going to get a nasty headache if I stand here squinting out at everybody because of these lights. And pain hurts me. Let's see. The last thing I want to do is borrow you for just a moment, if I may. Everybody say "Ruby friends!" Thank you. I'll tweet that later. Also, I work for LivingSocial. We are hiring. In addition to sponsoring both the conference and the opportunity scholarship program, we also brought about three dozen of these little squishy orange stress toys that look like little brains. So if you want one of these, come find me later.

Programming is hard. It's not quantum physics, but neither is it falling off a log. And if I had to pick just one word to explain why programming is hard, that word would be "abstract." As software developers, we deal in abstractions. I asked Google to define "abstract," and here's what it said: existing in thought or as an idea, but not having a physical or concrete existence. Usually I prefer defining things in terms of what they are, but in this case I find the negative definition extremely telling. Abstract things are hard for us to think about precisely because they don't have a physical or concrete existence. I got the idea for this talk when I was listening to the Ruby Rogues podcast episode with Glenn Vanderburg. I edited this down a little bit for length, but basically what he said was: the best programmers I know all have some good techniques for conceptualizing or modeling the programs that they work with. It tends to be a spatial or visual model, but not always.
What's going on, he says, is that our brains are geared towards the physical world, dealing with our senses and integrating sensory input, but the work we do as programmers is all abstract. It makes perfect sense that you would want to find techniques to rope the physical, sensory parts of your brain into this task of dealing with abstractions, but we don't ever teach anybody how to do that, or even that they should do that. When I heard Glenn say this, I got really excited, and I started thinking: yeah, brains are awesome, and we should be teaching people that this is a thing that they can do. And then I thought: wait, no, brains are horrible and they lie to us all the time, and teaching this stuff would be completely irresponsible if we didn't also warn people about cognitive bias. And then I thought about an amazing hack that we simply will not find unless we are actively working to counter our biases. We'll get to that later.

Our brains are extremely well adapted for dealing with the physical world. Our hindbrains, which regulate temperature, respiration, and balance, have been around for about half a billion years or so. But when I write software, I'm leaning really hard on parts of the brain that are relatively new in evolutionary terms, and I'm using some very expensive resources to do it. Over the years, I've built up a small collection of shortcuts that help me engage some of the specialized structures that my brain has evolved over the millennia. First off, there are a lot of visual tools that let us leverage our spatial reasoning skills. I'm just going to list a few examples, because I think most developers are likely to encounter these tools either in school or on the job, and they all have the same basic shape: they're boxes and arrows. There are entity relationship diagrams, which help us understand how our data is modeled. We often draw diagrams to describe data structures like linked lists, binary trees, and so on.
And for state machines of any complexity, diagrams are often the only way to make any sense of them. I could go on, but like I said, most of us are probably used to using these kinds of tools, at least on an occasional basis. Now, diagrams, I think, are great because they let us analyze systems that are much larger than anything we could hold in our heads all at once. They let us associate ideas with things in space, and our brains have a lot of hardware support for keeping track of where stuff is. This lets us free up some room in our working memory, which is relatively small. Also, our brains are really good at pattern recognition, so using diagrams to visualize our designs can give us a chance to spot certain kinds of problems, just by looking at their shapes, before we ever even start typing code into an editor.

Here's another way to use your spatial skills when you're actually working with code. This one's called the squint test. You can use this to get oriented in a new code base or to zero in on high-risk areas of code. It's pretty simple. You open up some code, and you either sort of stand back and squint your eyes at it, or you make the font size smaller than you can comfortably read. The idea is to look past the words and notice things about the shape of the code. Here are a few things you can look for. Is the left margin ragged with lots of nested control structures? Are there any ridiculously long lines? Are there areas where code may be formatted into vertical columns, possibly indicating that you have tables represented in your code somewhere? Is the file mostly regular, but with one unruly area, or is it just a mess from top to bottom? And what does your syntax highlighting tell you? Do certain colors cluster together, or is there a color that spreads itself out on a regular basis, indicating that there's a pattern you're missing?
There's a lot more stuff that you can play with, but those are a few things off the top of my head. I also have a couple of techniques that involve the clever use of language. The first one is very simple, but it does require a prop. You don't need the big one, though; you can use the desktop edition. Here's how it works. You keep a rubber duck on your desk, and when you get stuck, you pick up the rubber duck and you put it on your keyboard so that you can't type. Then what you do is explain your problem out loud to the duck. It sounds silly, but there's a good chance that in the process of putting your problem into words, you'll either realize what's wrong or at least think of something else to try. And this saves you from interrupting a coworker and half-explaining the problem to them and then saying, "oh, hang on," and running back to your keyboard. One of my coworkers, by the way, has a really interesting variation on this technique, which is to start writing an email describing the problem. And I like that variation, because I find that I think differently when I'm writing than I do when I'm speaking.

The other linguistic hack that I have, I got from Sandi Metz. In her book, Practical Object-Oriented Design in Ruby, she describes a technique that she uses to figure out which object a method should belong to. And she says, quote: how can you determine if the Gear class contains behavior that belongs somewhere else? One way is to pretend that it's sentient and to interrogate it. If you rephrase every one of its methods as a question, asking the question ought to make sense. For example, "please, Mr. Gear, what is your ratio?" seems perfectly reasonable. But "please, Mr. Gear, what are your gear inches?" is on shaky ground, and "please, Mr. Gear, what is your tire size?" is just downright ridiculous. This is a great way to evaluate objects in light of the single responsibility principle, and I'll come back to that in a moment.
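Sandi's example looks roughly like this in Ruby. This is my own loose sketch of the bicycle-gearing code from POODR, not her exact listing, with each method annotated with the question you'd ask it:

```ruby
# Loose sketch of the Gear example from POODR; numbers and exact names
# are my assumptions.
class Gear
  attr_reader :chainring, :cog, :rim, :tire

  def initialize(chainring, cog, rim, tire)
    @chainring = chainring
    @cog = cog
    @rim = rim
    @tire = tire
  end

  # "Please, Mr. Gear, what is your ratio?" -- perfectly reasonable:
  # a ratio is intrinsically a gear's business.
  def ratio
    chainring / cog.to_f
  end

  # "Please, Mr. Gear, what are your gear inches?" -- shaky ground:
  # the answer depends on wheel knowledge (rim and tire), not gear knowledge.
  def gear_inches
    ratio * (rim + tire * 2)
  end

  # "Please, Mr. Gear, what is your tire size?" -- downright ridiculous:
  # the attr_reader above can answer it, but it's really a wheel's job,
  # which is the hint that a Wheel object wants to exist.
end
```

The fact that the gear can answer the tire question at all is exactly the smell: interrogating it tells you a Wheel role is hiding in there.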
But first: I described the rubber duck and "please, Mr. Gear" as techniques to engage linguistic reasoning, but that doesn't really do them justice. Both of these tools force us to put our questions into words, but words themselves are tools. We use words to communicate our ideas to other people. So while these techniques do involve the language centers of our brain, I think they go beyond language to tap into our social reasoning. The rubber duck technique works because putting your problem into words forces you to organize your own understanding of the problem in such a way that you can verbally lead somebody else through it, somebody who doesn't have all of your implicit history and context. And by anthropomorphizing an object and talking to it, "please, Mr. Gear" lets us discover whether that object conforms to the single responsibility principle. To me, the key phrase in Sandi's description of this technique is "asking the question ought to make sense." Most of us have an intuitive understanding that it might not be appropriate to ask Alice about something that's Bob's job. Interrogating an object as though it were a person helps us to use that social knowledge. It gives us an opportunity to notice that answering this question isn't really the job of any of our existing objects, which in turn prompts us to create a new role and give that role its own name.

Enough of the really hand-wavy stuff. Metaphors can be a really useful tool in software. The turtle graphics system in Logo is a great metaphor. Most of the rendering systems that I've used are based on a Cartesian coordinate system with x,y pairs, and that's all very formal, but Logo encourages the programmer to imagine themselves as the turtle and then use that understanding of being in a body to figure out what to do next. One of the original creators of Logo called this body-syntonic reasoning, and specifically designed it to help children solve programming problems.
But the turtle metaphor works for everybody, not just kids. Cartesian grids are great for drawing boxes, mostly, but it can take some very careful thinking to figure out how to compute the set of x,y pairs that you need to draw a spiral or a star or a snowflake or even a tree. Choosing a different metaphor can make certain kinds of solutions easy where before they seemed like too much trouble to be worth bothering with.

James Ladd has a couple of interesting blog posts about what he calls east-oriented code. Imagine a compass that's overlaid on top of your screen, between you and your code. In this model, messages that an object sends to itself go south, and any data returned from those calls goes north. Communication between objects is the same thing, but rotated 90 degrees: messages that are sent to other objects go east, and return values flow west. What James Ladd suggests is that, in general, code that sends messages to other objects and lets them figure out how to deal with it, which is to say code where information flows east, is easier to extend and maintain than code that gets data back and decides what to do with it, which is to say code where information flows west. Really this is just the design principle "tell, don't ask," but the metaphor of the compass recasts that principle in a way that lets us use our background spatial awareness to always be mindful of it.

And last up, code smells are an entire category of metaphors that we use to talk about our work. In fact, the name "code smell" is itself a metaphor for anything about our code that seems off or hints at a design problem further down, and I guess that makes it a meta-metaphor. Or maybe not. There's a long list of these things, and a lot of code smells have names that are extremely literal: they're things like duplicated code, long method, and so on. But some of these names are delightfully suggestive: feature envy, refused bequest, primitive obsession.
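The east/west distinction is easier to see in code. Here's a minimal Ruby sketch with a hypothetical User and Subscription; these classes are my illustration, not James Ladd's:

```ruby
# Hypothetical classes to contrast "ask" (westward) with "tell" (eastward).
class Subscription
  attr_reader :renewals

  def initialize(expired)
    @expired = expired
    @renewals = 0
  end

  def expired?
    @expired
  end

  # The subscription itself decides whether renewal applies.
  def renew
    return unless @expired
    @expired = false
    @renewals += 1
  end
end

class User
  attr_reader :subscription

  def initialize(subscription)
    @subscription = subscription
  end

  # East-oriented: we tell the user, the user tells the subscription,
  # and no data flows back west for the caller to make decisions with.
  def renew_subscription
    subscription.renew
  end
end

user = User.new(Subscription.new(true))

# Westward ("ask"): data returns to the caller, which makes the decision.
user.subscription.renew if user.subscription.expired?

# Eastward ("tell"): the message keeps moving away from the caller.
user.renew_subscription
```

In the westward line, the caller pulls `expired?` back and decides what to do; in the eastward line, that decision lives inside the objects the message flows through, so the caller stays ignorant of subscription rules.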
And to me, the names on the right here have a lot in common with "please, Mr. Gear." They're explicitly chosen to hook into something in our social awareness, to give a name to a pattern of dysfunction, and, by naming the problem, to suggest a possible solution. These are some of the shortcuts that I've accumulated over the years, and I hope that this can be the start of a similar collection for some of you.

Now for part two: evolution has designed our brains to lie to us. Brains are expensive. The human brain accounts for just 2% of body mass, but 20% of caloric intake. That's a huge energy requirement that has to be justified. Evolution does one thing and one thing only: it selects for traits that help an organism stay alive long enough to reproduce. Evolution does not care about getting the best solution. It only cares about getting one that's good enough to compete in the current landscape. And evolution will tolerate any hack, as long as it meets that one goal. To illustrate, let's talk about how we see the world around us. The human eye has two different kinds of photoreceptors. There are about 120 million rod cells in each eye. These play little or no role in color perception, and they're mostly used for night and peripheral vision. There are also about 6 or 7 million cone cells in each eye. These require a lot more light, and they're what let us see in color. The vast majority of the cone cells are packed together in a little cluster near the center of the retina. This area is what we use to focus on individual details, and it's smaller than many people think: it's actually only about 15 degrees wide. As a result, our vision is extremely directional. We have a small central area of high detail in color, but outside that, our visual acuity and also our color perception drop off pretty fast. So when we look at a scene like this, our eyes actually see something more like this approximation.
So in order to turn the image on the left into the image on the right, our brains are doing a lot of work that we are mostly unaware of. We compensate for having such highly directional vision by moving our eyes around a lot. Our brains then combine the details from these individual points of interest to construct a persistent mental model of whatever we're looking at. These fast point-to-point movements are called saccades, and they're actually the fastest movements that the human body can make. The shorter saccades that you might make when you're reading typically last for 20 to 40 milliseconds, and the longer ones might take up to 200 milliseconds, or one-fifth of a second. What I find so fascinating about this is that we don't perceive saccades. During a saccade, the eye is still sending data to the brain, but what it's sending is a smeary blur as it moves from point to point, so the brain just edits that part out. This process is called saccadic masking, and you can see this effect for yourself. Next time you're in front of a mirror, lean in close and look back and forth from the image of one eye to the other. You will not see the reflections of your eyes move. As far as we can tell, our gaze just jumps instantaneously from one point to the next, even though, when you think about it, that's not actually possible.

Now, I hope you like conspiracy theories, because I'm about to give you a really good one. When I was preparing this talk, I found this wonderful sentence in the Wikipedia entry on saccades. It says: due to saccadic masking, the eye-brain system not only hides the eye movements from the individual but also hides the evidence that anything has been hidden. Put that in your tinfoil hat and smoke it, right? So our brains lie to us, and then they lie to us about having lied to us, and this happens in your occipital lobe multiple times a second, every waking hour, every day of your life. Basically, if you have your eyes open, this is going on.
Of course, there's a reason for this. Imagine if, every time you shifted your gaze around, you got distracted by all the pretty colors: you would be eaten by lions. Oh, I forgot to animate that, sorry. But in selecting for this design, evolution made a trade-off, and the trade-off is that we are effectively blind every time we move our eyes, sometimes for up to a fifth of a second. And we might still get eaten by lions because of this, but not as often. Okay, so I wanted to talk about this partly because it's just a really fun subject and I like messing with your heads, but also to illustrate how our brains are doing a massive amount of work to process information from our environment and present us with an abstraction. And as programmers, if we know anything about abstractions, it's that they are hard to get right. Which leads me to an interesting question: does it make sense to use any of the techniques that I talked about earlier, to try to corral different parts of our brains into doing our work for us, if we don't know what kind of shortcuts they take?

According to the Oxford English Dictionary, the word "bias" seems to have entered the English language in about the 1520s. It was borrowed as a technical term from the game of lawn bowling, where it referred to a ball that was made in such a way that it would roll in a curved path instead of a straight line. Since then it's picked up a few additional meanings, but they all have that same basic connotation of something that's skewed or off. Cognitive bias is a term for systematic errors in human cognition: patterns of thought that diverge, in measurable and predictable ways, from the answers that pure rationality would give. When you have some free time, I suggest you go have a look at the Wikipedia page called List of Cognitive Biases. There are over 150 of them; this is only through the Gs, which is all that would fit on a slide. And they make fascinating reading.
With every one of them, you learn something great about your brain that you didn't realize. But this list of cognitive biases has a lot in common with the list of code smells that I showed earlier. Most of the names are very literal, but there are a few that stand out, like the curse of knowledge or the Google effect. And I think the parallel goes a lot deeper than that. This list gives names to patterns of dysfunction, and once you have a name for a thing, it's much easier to recognize it and then figure out how to address it. I do want to call your attention to one particular item on this list. It's called the bias blind spot. This is the tendency to see oneself as less biased than other people, or to be able to identify more cognitive biases in others than in oneself. Sound like anybody you know? I bring this up because there's a big part of tech and geek culture that glorifies rationality. We often want to see ourselves as beings of pure logic, and I hate to break it to you, but ain't none of us Mr. Spock. And even Spock himself would bend or break the rules when it suited him and just make up some bullshit excuse later. As humans, we are all biased. It is built into us. Pretending that we aren't biased only allows our biases to run free. I don't have a lot of general advice for how to look for bias, but an obvious and necessary first step, I think, is to ask the question: how is this biased? And that first word is crucial. If you only ask "is this biased?", it's way too easy to let yourself go, "seems fine." There is always a bias, and your job is to figure out what it is. Asking this question is a great way to start; it gets you most of the way, I think. But beyond that, I suggest that you learn about as many specific cognitive biases as you can. That long list I showed earlier.
So that your brain can do what it does, which is to look for patterns and classify things and make associations, so that when you catch yourself saying certain things, you'll go, "oh wait, that indicates this particular bias is in effect." Because if you're not checking your work for bias, if you're not actively countering it, you can look right past a great solution, and you will never know that it was there. I have an example of such a solution that is simple and elegant and just about the last thing I ever would have thought of.

We're going to talk a little bit about Pac-Man. If you've never played it (those of you who should get off my lawn), it's a very simple game where you run around a maze trying to avoid four ghosts. And playing games is fun, but we're programmers and we want to know how things work. So let's talk about programming Pac-Man. For the purposes of this discussion, we're just going to consider three things: the Pac-Man, the ghosts, and the maze. The Pac-Man is controlled by the player, so the code for that is mostly just handling hardware events. It's boring. And the maze is just there so that the player has some chance to avoid the ghosts. Boring. The ghost AI, that's what's going to make or break the game, and that's where we get to have a little fun. To keep things simple, let's start with one ghost. How do we program one ghost to move around and chase the Pac-Man? We could choose a random direction, follow it until we hit a wall, and then choose another random direction. This is very easy to implement, but it's really not much of a challenge for the player. We could compute the distance to the Pac-Man along the x and y axes and pick a direction that makes one of those smaller, but if that's all we do, the ghost is going to get stuck in corners or behind walls. And again, this is going to be too easy for the player. Okay, so instead of minimizing linear distance, we can look at topological distance.
We can compute all possible paths through the maze to the Pac-Man, pick the shortest one, and start down it. Then on the next tick, you've moved and the Pac-Man has moved, so you do it all again. Now, this works fine for one ghost, but if all four of the ghosts use this algorithm, they're going to wind up chasing after the player in a tight little bunch instead of fanning out. Okay, so we have each ghost compute all possible paths to the Pac-Man and reject any path that goes through another ghost. This now means that our ghosts have to keep track of each other as well as the Pac-Man, but it still seems fairly doable, right? Quick show of hands: how many people, when presented with this problem, would approach it more or less the way I just walked through it? I know I certainly would. Thank you for being brave enough to admit that.

So how is this biased? The best way that I have to explain the bias in this solution is to walk you through a very different one. In 2006, I attended OOPSLA as a student volunteer, and I happened to sit in on a presentation by Alexander Repenning of the University of Colorado. In his presentation, Professor Repenning walked through the Pac-Man problem and then presented this idea: you give the Pac-Man a smell, and then you model the diffusion of that smell throughout the environment. Now, in the real world, smells travel through the air, but we don't need to model each individual air molecule. What we can do is divide the environment up into reasonably sized logical chunks and then model the average concentration of scent molecules in each chunk. And remember when I said the maze was boring earlier? That was a bit of a lie. As it turns out, the tiles of the maze already divide up the environment for us. They're not really doing anything else, so we can borrow them as a convenient container for this computation. Here's what we do.
We say that the Pac-Man gives whatever floor tile it's standing on a Pac-Man smell value of, say, a thousand; the number doesn't really matter. That tile then passes a smaller value off to each of its neighbors, and they pass an even smaller value off to each of their neighbors, and so on. Iterate this a few times and you get a diffusion contour that we can visualize as a hill with its peak centered on the Pac-Man. It's a little hard to see here, but the Pac-Man is at the bottom of that big yellow bar. So we've got the Pac-Man, and we've got the floor tiles that are passing the Pac-Man smell around to each other. But to make it a maze, we need some walls. We give the wall tiles a hard-coded Pac-Man smell value of zero, and this chops up the hill a bit. Now all our ghost has to do is climb the hill. We program our ghost to sample each of the floor tiles next to it, pick the one with the biggest number, and go that way. This barely seems worthy of being called an AI. But the really cool part is that when we add more ghosts to the maze, we only have to make one change to get them to cooperate with each other, and interestingly, we don't change their movement behaviors at all. Instead, we have each ghost tell the floor tile that it's standing on that its Pac-Man smell value is zero. This effectively turns the ghosts into moving walls, so that when one ghost cuts off another one, the second ghost will automatically re-route and choose a different path. This lets the ghosts cooperate without even having to be aware of each other. Halfway through the conference session where I saw this, I was like, wait, what? At first I was just really surprised by the simplicity of this approach, but then what really messed with my head was the realization that I never would have thought of this. Now, I hope that looking at the second solution makes it a little bit easier to see the bias in the first one.
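The whole scheme fits in a few dozen lines of Ruby. This is a rough sketch, not Repenning's implementation (he calls the general idea collaborative diffusion); the maze encoding, the decay factor, and the iteration count are my own assumptions:

```ruby
# Sketch of scent diffusion over maze tiles. '#' is wall, '.' is floor.
# SCENT, DECAY, and the iteration count are assumed values.
SCENT = 1000.0
DECAY = 0.5

def neighbors(r, c, grid)
  [[r - 1, c], [r + 1, c], [r, c - 1], [r, c + 1]].select do |rr, cc|
    rr.between?(0, grid.length - 1) && cc.between?(0, grid[0].length - 1)
  end
end

# Returns a grid of smell values: a "hill" peaked on the Pac-Man's tile,
# with walls and ghost-occupied tiles clamped to zero.
def diffuse(grid, pacman, ghosts, iterations = 30)
  smell = Array.new(grid.length) { Array.new(grid[0].length, 0.0) }
  iterations.times do
    nxt = Array.new(grid.length) { Array.new(grid[0].length, 0.0) }
    grid.each_index do |r|
      grid[r].each_index do |c|
        next if grid[r][c] == '#' || ghosts.include?([r, c]) # stays zero
        if [r, c] == pacman
          nxt[r][c] = SCENT # the scent source
        else
          # Each floor tile takes a fraction of its smelliest neighbor.
          nxt[r][c] = DECAY * neighbors(r, c, grid)
                              .map { |rr, cc| smell[rr][cc] }.max
        end
      end
    end
    smell = nxt
  end
  smell
end

# A ghost's entire AI: step onto the adjacent floor tile that smells
# most strongly of Pac-Man.
def ghost_step(ghost, grid, smell)
  neighbors(ghost[0], ghost[1], grid)
    .reject { |r, c| grid[r][c] == '#' }
    .max_by { |r, c| smell[r][c] }
end
```

Notice that the ghosts never reference each other anywhere in this code: zeroing the tile under each ghost is the single change that makes one ghost re-route around another.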
For most of us, our first instinct is to imagine ourselves as the ghost and then figure out what we would do. This is the body-syntonic reasoning that's designed into Logo, and in this case it's a trap, because it leads us to solve the pursuit problem by making the pursuer smarter. Once we've started down that road, it's very unlikely to occur to us to consider a radically different approach, even if, and perhaps especially if, it's a much simpler one. Body syntonicity biases us towards modeling objects in the foreground rather than objects in the background. Now, does this mean you shouldn't use body-syntonic reasoning? Of course not; it's a tool. It's right for some jobs and not right for others.

I want to take a look at one more technique that I listed in part one. What's the bias in "please, Mr. Gear, what is your ratio?" Well, it's androcentric, for one. But more interestingly, this technique is explicitly designed to give you an opportunity to discover new objects in your model. The trap in this technique is that it requires a name, and names have gravity. Because our brains are associative, the new objects that you discover with this technique will very probably acquire names that are related to the names you already have. What I'm trying to get at here is this question: how many steps of asking this question does it take to get from "please, Ms. Pac-Man, what is your current position in the maze?" to "please, Ms. Floor Tile, how much do you smell like Ms. Pac-Man?" For a lot of people, the answer is probably infinity. And my guess is that you don't come up with this technique unless you've already done some work modeling diffusion in some other context; it's a thing that's already part of your repertoire. And that is why I like to work on diverse teams, by the way. The more different backgrounds and perspectives that we as a group have access to, the more chances we have to find a novel application of a seemingly unrelated technique like this one.
It can be exhilarating and very empowering to find these techniques that let us take shortcuts by leveraging specialized structures in our brains, but those structures, as we've seen, themselves take shortcuts, and if you're not careful, they can lead you down a primrose path. Here's that quote from the beginning that got me thinking about all of this in the first place, about how we don't ever teach anybody that they should do this. Ultimately, I think, despite my reservations about bias, we should use techniques like this. I think we should share them, and, to paraphrase Glenn, I think we should teach people that this is a thing you can and should do. And I think we should teach people that looking critically at the answers these techniques give you is also a thing that you can and should do. We might not always be able to come up with a radically different and simpler approach, but the least we can do is give ourselves the opportunity to discover these things, just by asking: how is this biased? I want to thank everybody who helped me with this talk or with the ideas in it. I don't think I have time for questions, so I will talk to you all later. I look forward to hearing your ideas as well. Thank you.