So I'm going to go ahead and get started. My name is Jay Zashin. I'm a developer partner at a company down in Denver called Bit Theory. We're a little development shop doing a lot of Rails work and a lot of mobile work. When I'm not doing Ruby and JavaScript, I spend a lot of time running. So this is me a couple weeks ago at a half marathon in Vail. I'm normally not that blurry in real life, but I have been drinking, so that's certainly possible.

So let's jump into the matter at hand. The thing that got me thinking about this talk, that sort of gave me the idea for it originally, was this quote from Abelson and Sussman. Out of curiosity, how many folks in the audience have read, or even heard of, SICP, the Structure and Interpretation of Computer Programs? Okay, awesome. More people than I would have thought. For those of you who haven't read it, it's a great book, and I think as of a couple years ago it's totally free and available online. It was the courseware for the MIT intro to computer science class for some number of years, I think like 30 or 40. So it's been around forever, but it has some really interesting introductions to programming concepts, sort of from a beginner level.

Anyway, one of the great quotes from that book is that programs must be written for people to read, and only incidentally for machines to execute. There's a little bit of controversy around it, and I'm not gonna go deeply into the philosophical stuff behind it, but suffice it to say that there's more than one path for optimization of a program, right? Runtime optimization is not the only optimization you need to be worrying about. So it's the non-runtime stuff that I'm gonna be talking about today.

More generically, what I'm talking about is metacognition, which is exactly what it sounds like: learning more about how the process of cognition works. And I'm interested in it specifically through the lens of how we can hack cognition. How can we get a better idea of what it's like? How can we control it? How can we get a better grasp of our own cognitive processes?

Starting out with a disclaimer: I'm not a psychologist, although I did study psychology in school, and I'm skipping some level of scientific rigor just because it's a short talk. So if that bothers you, then I'm sorry, but come up and talk to me after and we can talk more about the studies behind it.

So the first thing I wanna talk about is memory, and specifically how memory operates. There are essentially three functions of memory: encoding of sensory data, storage of that data, and retrieval of it at some point in the future. And it turns out, incidentally, that in the retrieval component we're much better at recognition than at arbitrary retrieval. If you're anything like me, think back to middle school and elementary school, where you had some tests that were multiple choice and some that were free response, and you always hated the free response ones because you could never game the system and figure out which answer it probably was if you hadn't studied. Yeah, so it turns out that our brains are wired that way, so it's not your fault.

One specific kind of memory that's especially useful is working memory, also known as short-term memory.
It has a few different components. There are some different theories about how it operates, but all of them sort of involve an executive component, a buffer or two for incoming sense data, and then a visuospatial sketchpad where you can move things around in your mind's eye, that sort of thing.

The more interesting stuff about it: generally speaking, for pretty much everyone, the size of working memory is limited to seven plus or minus two items. It's a very finite size, and it's actually very consistent across people, which isn't something you'd necessarily expect. You can stretch it a little bit with some tricks. One great example is chunking, which is what you do with phone numbers. With a phone number you'd normally be remembering 10 digits, which is outside of this range, if just barely. But the way that you remember a phone number, if I was to try and remember 303-555-1212 or something like that, is that when you're encoding it, you actually remember the area code and the prefix not as three separate numbers each but as a chunk. So instead of remembering 10 items, you're remembering the chunk 303, the chunk 555, and then probably those other four numbers, 1212. So instead of storing 10 items, you're actually storing six items.

Another interesting thing about working memory is that it's very tied to your auditory system. It turns out that the way your brain encodes these memories is basically based on an audio track, so there's a strict limit with regard to the amount of pronunciation involved. For example, going back to that seven plus or minus two items thing: if I gave you seven countries to remember, and they were countries like Chad, Mali, China, France, India, all one- or two-syllable names, you probably wouldn't have too much trouble with it. If I gave you the same number or maybe even fewer countries with much longer names, if I'm telling you to remember Kyrgyzstan and Luxembourg and Liechtenstein and a bunch of countries like that, you're probably not gonna be able to pronounce those in a second and a half, so you're actually gonna have a much harder time remembering them for that reason.

Another couple of interesting things about working memory. Scanning it: if you have something in working memory, say I give you a list of seven countries and then ask you, is this country in that list? It's a serial, exhaustive process. Your brain isn't very efficient at it. You algorithm nerds know exactly what that means. For those of you who aren't so big on algorithms, it means that you're gonna hit every single item in your working memory set even after you've already found the one you're looking for, which is sort of a bummer. It's definitely an inefficient process. In addition, it takes 38 milliseconds to scan each one of those items, just to touch it and figure out if it's there, not even to do anything with it. Which doesn't sound like a lot, but if you consider the timescale that computers operate on versus where our brain operates, that's pretty incredible. Seven items at 38 milliseconds per item is a quarter of a second, more or less. That's a fair chunk of time.

Quickly, I just wanted to touch on long-term memory. Long-term memory is super hierarchical. There are a couple different kinds of it: you have declarative and procedural memories, so what you know versus what you know how to do.
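He's doing that arithmetic out loud; nothing like this appears in the transcript, but as a back-of-the-envelope sketch in Ruby, using the numbers from the talk:

    # Chunking: the same phone number as raw digits vs. chunks.
    digits = %w[3 0 3 5 5 5 1 2 1 2]
    chunks = %w[303 555 1 2 1 2]
    puts digits.size  # => 10 working-memory items
    puts chunks.size  # => 6 items: two chunks plus four loose digits

    # Serial exhaustive scanning: ~38 ms per item, touching every item
    # even after the match is found.
    puts 7 * 38  # => 266 milliseconds, about a quarter of a second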
Within declarative, it's divided further into experiences you've had, which is episodic memory, sort of your mind's VCR, and then semantic memory, which is facts: I know that dogs have fur, I know that two plus two is four, that sort of thing.

Two things that interfere with memory, or two things that can degrade your memory, are decay and interference. Decay is that natural process of stuff falling out. I can't remember what I had for lunch two weeks ago, because that's not really something that matters. It's an adaptation to not fill your brain with stuff you don't care about. Interference is sort of similar, except it goes after stuff that's proximate and related to what you're working on right now. Both of these can be detrimental to your long-term memory storage.

One other quick thing about memory, the last thing before I move on to cognition: there are two interesting things about stories and pictures in particular. You've always heard the saying that a picture is worth a thousand words. It turns out a lot of the reason for that is that both pictures and stories engage our memories on multiple levels. Not only are you storing the thing itself, you're also recording an experience or an emotional piece along with it, in a form that encodes it in your memory in multiple ways. So it turns out those experiences are a nice trick that we can use to pull things back more readily, and I'll talk about that a little bit later.

So I'm gonna jump over to cognition and talk about that for a minute. How many folks in the audience have read Blink? Awesome, awesome. It's a great book for those of you who haven't. It's sort of pop psychology, but he's generally rooted in the right field, so it's really fun. What he talks about a lot is thin slicing, which is a way of processing the ton of information around us. I couldn't resist putting this picture in here because it's just awesome. I started searching for "drinking from a fire hose" and this is the best thing that came up. But that's an accurate representation of what our brains are doing on an everyday basis. Even just sitting here in this room, you have a ton of stimuli bombarding your brain. You have me talking, you have visual stimuli from the projector, you have all the people around you, auditory, touch, taste if you're eating something, all these things bombarding you. And if you're trying to process all of this at a very high level, at a cognitive level, you're never gonna get anything done, right? If you're sitting there thinking through every decision that you're making, every tiny little piece, it's constant decision paralysis.

And if you think about it evolutionarily, it makes sense. If our ancestors were out there hunting wild buffalo and they get charged by one, and they're sitting there thinking through the process, like, should I dodge left or right? By the time they actually make a cognitive decision on that, they're gonna get run over. So there's a strong evolutionary reason for it. And the way that we deal with it is by splitting that process. There are actually two separate processes that we use to decide how to act in any circumstance: attention and automation. Attention is the high-level conscious thought process. When you're giving conscious thought to a decision, it's in the front of your brain.
I think Susan's gonna talk about this a little more in the next talk, so I won't tread on her toes. That's the stuff happening at a very high level that you have control over, whereas automation is a thousand other little processes happening behind the scenes that you really don't have the same level of knowledge of or control over. So that's what I'm gonna talk more about.

Within that you have two different kinds of processing that occur, two different levels: deep and shallow, and they can happen at a couple of different levels. They also enable multitasking. It helps if you think of attention as a divisible resource. You have any number of tasks that you're working on at any given time. Some of them are automatic, some of them are not, and all of them require some level of attention; based on what level they require, they take up a different slice of that attention pie. So you can allocate it to a bunch of different places, but the overall expenditure, the overall size of the pie, is the same regardless.

A great illustration of this is called the cocktail party effect. I'm sure we've all been in this situation, right? You're at a party with a whole bunch of people. There are 50 people there, 20 separate conversations going on. You're in the middle of your own conversation somewhere, and all of a sudden someone says your name from across the room, and out of the din it's clear as a bell. You hear your own name. You hadn't been hearing anything else in that other conversation up until that point, but you hear that. It's called selective attenuation, and it's a great example of exactly how this works: that's one of those background processes taking up a little piece of your attention at any given time.

Another illustration of this is called the Stroop task (a reconstruction of the rows follows below). Take a look at this and think to yourself how many characters are in each row. And I'm sure everyone sees the trick here, right? The last one, the row of threes, actually has four items in it instead of three. It's the sort of thing where it's not really gonna fool anyone, but if you're going on autopilot, you actually have to engage your brain a little differently to figure out what's going on with that fourth one, because your automatic processes fail you. Your assumption that these things are all gonna be the same doesn't hold for that last item, so you have to kick in at a different level and engage more of your attention to actually make that processing work.

So, a couple of other quick things in that same vein, in terms of how we actually accomplish that quick processing. One major one is categorization, or categories and schemata, which are essentially a way for our brain to organize what we know and what we've done into knowledge hierarchies, things we can carry forward and apply to future situations. Sorry, the remote's going crazy. They can be natural or learned: either things that you're explicitly studying and trying to categorize, think flashcards, or things that your brain is naturally putting together, which tend to be much looser categories. It's more of a loose resemblance than a strict system; it's not the kind of strict system you'd expect a programming type system to be, but you still have a lot of those same pieces.
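The rows on the Stroop slide aren't reproduced in the transcript, but from the description they presumably looked something like this, with the character agreeing with the count in every row except the last:

    1 1 1
    2 2 2
    3 3 3 3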
And the difference between categories and schemata: categories are for categorizing items, schemata are for categorizing experiences. A schema is more of a script for how you respond and act in a particular scenario, or to a particular set of criteria.

Another major tool that we use is heuristics, and this is really the key piece. They affect us in both fast and slow processing, and there are a whole bunch of them that you're using constantly. They range from availability and recency, is this something recently in mind, have I been thinking about this lately? Your brain figures that if something is front of mind, or was recently accessed, then it's probably important and probably relevant to the situation. Familiarity is a similar one. Representativeness ties in with categorization a bit: how representative is this item of the category it's in, how likely is it to display other behavior related to that category? Another one is anchoring and adjustment: lots of times, in situations where you're trying to come up with a numerical answer, you'll start out with a guessed anchor and then make adjustments off of it, but you're really reluctant to change the anchor itself. And the last one is framing, which is about the way that the question or the situation is framed. Is a positive or a negative answer expected? That sort of thing.

So, moving on. The last one I wanna talk about here is pattern recognition, and this could be a talk in itself, because the brain is an amazing pattern recognition engine, but it's the last piece of how we're able to make snap judgments. Our brain is able to really easily form templates and break objects down into component parts, recognizing the components and assembling them into a set of things that resembles something we already know. In particular, the junctions between parts are incredibly important; they're what our brains use to figure out what's relevant and what's different about a set of objects.

So, I just threw a ton of Cog Psych 101 at you, so apologies for that, but hopefully it sets the basis. That's all really cool stuff, right? That's fascinating, but what the hell does it have to do with code? Here's the thing, going back to that Abelson and Sussman quote. For your readers, what you really wanna do to make efficient use of their time, to optimize their reading of your code, is max out that automated processing: make it as easy as possible for them to engage their automated processing on your code.

The reason you wanna do that is this graph right here. On the left, you have the bad situation: of that limited pie of cognitive resources I was talking about, 80% is dealing with stupid shit, whereas 20% is dealing with important shit. On the right, you have the opposite. And at the end of the day, what really matters is that you want people spending their cognitive resources not dealing with stupid shit like semantics, like weird little stylistic differences and all that, which I'm sure some of my former coworkers are laughing at, because I started a number of indentation wars, but things like that, right?
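No code appears in the transcript at this point, but the kind of stylistic "stupid shit" he means is easy to sketch. A hypothetical example: both versions below do the same thing, but the first one spends slices of the reader's attention pie on formatting noise before they ever reach the logic.

    # Every stylistic wobble here is one more thing the reader has to
    # consciously process before getting to what the code does.
    def total( prices )
        prices.reduce(0){|sum,p|
      sum+p }
    end

    # Same logic; nothing to notice but the logic itself.
    def total(prices)
      prices.reduce(0) { |sum, price| sum + price }
    end

    puts total([3, 5, 8])  # => 16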
The more of that effort you can save, the more you can actually dedicate those limited cognitive resources to dealing with high-level issues, conceptual stuff, understanding bigger problems with the code, and not all the other little stuff that theoretically you shouldn't have to deal with, because you have the automated resources to handle it at a very low level.

So, a couple of ideas for how to accomplish that. The big one, obviously: minimize surprises. The principle of least surprise is a Ruby-ism that we're all very familiar with, and it's relatively straightforward, a little bit self-explanatory to be honest. But any time you have a surprise, it kicks you out of that automated mode and back into the higher-level, more costly cognitive processing. Once again, that's time the reader may not need to be spending.

Another big one, jumping back to working memory: avoid overflowing your brain stack, your working memory (there's a sketch of this below). It's really easy to have complicated code in a method that easily exceeds that five-to-nine-item list you can actually hold in working memory. Think of things like nested loops, or really long variable names, or tons of superfluous local variables. Each one of those is an item in working memory that you have to keep track of, and any time you overflow that working memory you're gonna run into issues, because when things pop out of memory and you can't remember them, you have to go back up and scan the code again to figure out where they came from, and that slows down the entire process.

Another one: enable recognition over recall. Remember what I said earlier about recognition being a lot better than recall, that whole random-access-doesn't-work-well piece? The best example I could think of for this is method_missing. There's a lot of magic in method_missing, and it enables some really cool things in Ruby. The problem is that it eliminates a lot of that recognition, because you can't find the method. If you're scanning code, you can't see the method that's there; you're forced to recall it, or dig into the code and figure out how it works.

Another one: tell a story. Stories and pictures stick better. If you can figure out how to draw a picture with your code, please let me know, because I couldn't quite figure out how to illustrate that one, aside from ASCII art in your files, which I don't think quite gets you there. But something like this, right? Whoop. You have some complicated thing, a method that accomplishes this complicated thing. Tell a story. Do something that engages the reader and gives them something to follow, something to engage with beyond just the reading level. It sticks in their memory better, and they're able to understand the process slightly better.

One other quick one, on the heuristics front: if and unless. If is not so much the problem, but if you ever use unless with a negation operator right in front of the condition... I can't remember what Jim's quote was yesterday, but God kills a unicorn, or something like that. Yeah, unless-not is in the same boat. The reason is that both if and unless, in a language like Ruby that has both available to you, provide a strong framing heuristic on what you're expecting the outcome to be. If I'm saying if-something with a block, I'm expecting the context, the frame of that block, to be a positive context. With an unless block, I'm expecting a negative context. And if you do the reverse of that, then I have to engage a different part of my brain to understand exactly why you reversed it and why I can't process it automatically.
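Taking those in order: here's a hypothetical before-and-after for the working-memory point (the domain and every name are invented for illustration). The first version makes the reader keep nested loop variables and throwaway locals live in their head; the second keeps the count of live names well under that five-to-nine ceiling.

    require 'date'

    Account = Struct.new(:name, :invoices)
    Invoice = Struct.new(:number, :due_date)

    # Before: nested loops and superfluous locals, each one an item the
    # reader has to hold in working memory while scanning.
    def overdue_report(accounts)
      result = []
      accounts.each do |acct|
        invs = acct.invoices
        invs.each do |inv|
          d = inv.due_date
          result << [acct.name, inv.number] if d < Date.today
        end
      end
      result
    end

    # After: the same report with fewer live names at any given point.
    def overdue_report(accounts)
      accounts.flat_map do |account|
        account.invoices
               .select { |invoice| invoice.due_date < Date.today }
               .map    { |invoice| [account.name, invoice.number] }
      end
    end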
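His method_missing example isn't shown in the transcript, but a minimal sketch of the trade-off might look like this. The magic works, yet a reader scanning the class never sees a method named port to recognize, so they're pushed back into recall.

    class Config
      def initialize(data)
        @data = data
      end

      # config.port works, but nothing named "port" appears anywhere in
      # the source; the reader has to recall how method_missing behaves
      # instead of recognizing a method definition.
      def method_missing(key, *args)
        @data.key?(key) ? @data[key] : super
      end

      def respond_to_missing?(key, include_private = false)
        @data.key?(key) || super
      end
    end

    config = Config.new(port: 3000)
    puts config.port  # => 3000

Defining the entry points up front, with plain defs or define_method, gives the scanning reader something to recognize instead.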
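The "complicated thing" on his slide isn't in the transcript either, but the storytelling move usually cashes out as extracting named steps so the method reads as a narrative. A hypothetical sketch:

    Order = Struct.new(:items, :total)

    # Flat version: correct, but the reader has to simulate every line
    # to figure out what the method is about.
    def process_flat(order)
      raise ArgumentError, "empty order" if order.items.empty?
      order.total = order.items.sum
      "Receipt for #{order.total}"
    end

    # Story version: the same work, told as steps the reader can follow.
    def process(order)
      validate(order)
      compute_total(order)
      send_receipt(order)
    end

    def validate(order)
      raise ArgumentError, "empty order" if order.items.empty?
    end

    def compute_total(order)
      order.total = order.items.sum
    end

    def send_receipt(order)
      "Receipt for #{order.total}"
    end

    puts process(Order.new([3, 5]))  # => Receipt for 8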
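And the unicorn-killer itself, for completeness, as a minimal sketch:

    checked_in = true

    # "unless" frames a negative context, the "!" flips it back, and the
    # reader has to consciously undo both.
    puts "welcome" unless !checked_in

    # Same logic, with the keyword's frame and the predicate agreeing.
    puts "welcome" if checked_in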
And so the last piece of this, the point that all of this ladders up to, is really about knowing your audience. They always say this about speaking, about writing essays, writing blog articles, all that stuff. I think it's a good thing to know for life in general, but it's something we don't really come across much in code, because we don't think about it. The prevailing philosophy is that code is written for the machine, not for a human reader, so it doesn't really matter as long as it's syntactically correct and free of bugs. But realistically, there are multiple audiences for your code. If you're writing code for yourself, or for katas, or for throwaway one-liners, then it may be totally fine to be clever and ignore everything I just said. That may be great. But if you're being paid to write code, you're writing code for someone else, and chances are you're not in that situation. You're writing code for a reader, for an audience. You should really be thinking about that audience when you're in the code, when you're writing it. Think about who's actually going to be processing it. Is the computer more important, or is it the person who's gonna be maintaining and building on your code at the end of the day?

So, one last thing to leave you with: your brain is really good with shortcuts, so let it take them. Don't fight it. Thanks. That's all I got. Thank you.