Hi, everybody. I'm Rich Kuzma, an engineering manager and aspiring rationalist at Constant Contact. Software developers are all about building great products, and as an engineering manager I'm always asking myself: how can I help developers improve so they can build even greater products? This talk today is not about the latest JavaScript framework. It's not about designing infinitely scalable architectures. This talk is about sharpening your sword, so you can get better at those things, and better at, well, everything. Just a quick point of protocol. If you have a distracting thought during the presentation, like, I don't know, "I forgot to swap the laundry over this morning," or, "Hey, Rich, what you just said there is totally wrong and here's why," just jot it down, take a note, clear your mind. I promise the slides will be available afterwards. And finally, if you learn anything from this talk, I'd encourage you to please steal it, copy it, make it your own. That's one of the best ways we learn. So, a quick note on the agenda. I'm going to go over a quick background of the brain and a useful abstraction for reasoning about how we think, including some techniques for hacking our behaviors to better achieve what we want. I'm also going to spend some time talking about loss aversion and several defects, or cognitive biases, that are in our brains. I believe that just being aware of the bugs running in our minds at least gives us a fighting chance at working around them, and I'll give you some techniques for solving some of these problems. I'll close with a discussion of the autonomic nervous system and how being aware of it, and how it impacts your body, can help you fight mission-critical production defects. So without further ado: this is Hacking the Brain. Our brains have about 100 billion neurons, each operating at about 100 instructions per second.
Now compare that with the Apple II from 1977, which was operating at about a thousand instructions per second. Or compare it to the Mac laptop running this presentation, at something like 2.4 billion instructions per second. I'm somewhat amazed: how did we as a species manage to rise to the top of the food chain and build battery-powered pencil sharpeners, or fly to the moon, all on 100 instructions per second? Well, the answer, in part, is massively parallel computation. But another part of it is caching. Now, caching lets us do some pretty cool things. Nice catch, all right. In fact, caching, this sort of cached response, drives a lot of what we do during the day. But as we all know, caching is one of the famously hard problems in computer science, right? What's the saying, cache invalidation and naming things are the two things that are just a pain to figure out? Well, it turns out that in nature, too, caching is particularly hard, because one of the hard things with caching is knowing when to invalidate the cache. That's what's happening during surprise. Right now you're invalidating your cache, you're updating: "Rich, next time I won't be surprised. I know you've got a yo-yo. I'm not going to flinch. I'm not going to try to catch a marker." You're updating it so next time you'll be ready. You've updated this cache. So this is pretty cool. How does that work? Well, I'll turn to a guy named Daniel Kahneman. He's a psychologist and behavioral economist, and he won the Nobel Prize for some of his research; more on that a little later. Kahneman wrote a book called Thinking, Fast and Slow, and in it he describes a model, an abstraction layer, for thinking about how we think. It's not a physical model, just a mental model, and it helps us think of the brain as two systems: System 1 and System 2.
Now, System 1 is that effortless, intuitive, immediate response. This is the part of your brain that answers "what is two plus two?" It's right there on the tip of your tongue, the front of your mind. And it's particularly effortless: a lot of our cached responses you just do. You don't have to think about doing them; they come naturally. In software engineering this is a lot like hitting Shift-Command-F to format your code, or rattling off a for loop. It's not something you have to think about. Your brain just knows how to do it. Contrast that with System 2. This is the logical, rational, reasoning part of your mind, the system that solves problems like "what's 24 times 17?" That's not on the tip of your tongue. Well, maybe it is for some of you; I'm an engineering manager, so it's not on the tip of mine. One of the differences between the two is that System 2 is a lot more effortful, and it operates a lot slower than System 1. Now imagine, for example, I were to throw, I won't throw anything at you again, I promise, imagine I were to throw a hamburger at you, and you were to try to catch that hamburger with your System 2. What would happen? First, you might be thinking, why is this guy throwing a hamburger at me? Is that hamburger hot or cold? Is it okay to catch? Should I duck out of the way, or should I catch it? And then you might decide: okay, I'm going to catch that hamburger. So what do you do? Okay, I've got to extend this arm muscle, and extend this finger digit, and this one, and this one, and compute the trajectory of the hamburger as it's flying past me. And what happens? It's already on the floor. So System 1, obviously, can let us handle some pretty complex tasks.
For example, I'd argue it powers almost everyone's commute every morning. In fact, those of you who drove here today might as well not even have been behind the wheel. Your System 1 took over. It handled all the complicated computations of how to merge, watch out for cars, and so on. But ask yourself: did you go through any traffic lights today? Were they all green? I hope so. Did you go through any tolls? Did you remember to pay them? This is because System 1 takes over, and it's a lot better at handling these kinds of tasks. I also want to point out that when you're using System 1, it's very, very low on willpower. And as we know, willpower is one of those finite resources: you can really drain it. It's good, but you can't use it too much. I'd argue your drive on 128 used far less willpower than just getting up in the morning. Now, System 1 is great, but it doesn't always work. Consider the first snowstorm of the year in New England, when everybody forgets how to drive. Let's walk through what's happening. Say you're driving along, you're paying attention, right? You're looking out the window, you're checking your phone. And then suddenly your car hits a patch of ice. What happens? Your System 1 is screaming: stop, brake, brake, don't proceed! Exactly the wrong thing to do. You don't want to brake on ice. Alternatively, if you could have, say, page-faulted, somehow given your System 2 a chance to interrupt, intervene, and apply a little logic, it might have said: wait a minute, you're on ice, you don't have traction. Consider pumping your brakes. Turn into the slide, get some traction, and then slow down. But most of the time, System 1 gets it right.
Now, System 1 seems like a pretty powerful, effortless way of doing certain actions. Wouldn't it be cool if we could hack our System 1 into doing other kinds of actions, to make other things more effortless? For example, I'm a big cognitive science junkie. I'm all about neuroscience and how we increase cognitive function. And one thing I always hear from neuroscientists and researchers is that the way to improve your brain, to get a baseline jump on everybody else, is simple: eat, sleep, and exercise. Oh, great. Yeah, we should all do that. That's awesome. But as we all know, wanting something is not the same as having a plan. Now, actually stating that you want to do something does produce something like a 20 to 30 percent variance in your behavior. But it's not very reliable. Well, there's a system for tricking your brain into taking actions that help you reach your goals faster. I like to think of the brain as this big, stupid elephant that you can train to do things. One of the tricks I've learned is called a trigger action plan, also known as implementation intentions; a psychologist named Peter Gollwitzer came up with the technique. And it works like this: you figure out some goal, some behavior that you want to instill, maybe it's eat, sleep, or exercise, and you visualize a very specific trigger condition, and then a specific action that you want to take. It's like loading an if-then condition into your System 1. And these things are kind of cool: you can even chain them together, and they have a pretty long time-to-live. Let me give you a couple of examples of how this works. I was talking about eat, sleep, and exercise, right? Well, I want to eat better. And people say you should go on a diet. Yeah, I really should. Yeah, I should.
That has a really low chance of actually happening. Instead, I'm going to come up with a specific trigger condition and a specific action that I'll take. For me, the trigger condition is: when I'm about to open a can of Mountain Dew, I immediately pause, turn, and fill up a cup of water. I'm not committing to drinking that cup of water. And for me this works because, at Constant Contact, the Mountain Dew and the water are co-located, so there's a reasonable chance I can do this every time. I just trust that my System 1 will take it from there. So next time I grab a Mountain Dew and I'm about to open it, I grab the water. Now, I might decide, you know what, I'm really tired, I need some caffeine and sugar, and take the Mountain Dew anyway. That's fine. But I've at least given myself a chance; I've increased the odds that I'll reduce my soda intake. Another example is how important it is to sleep regularly. If you don't sleep enough, you can get anxiety, and you don't think as clearly the next day, and engineers especially are burning the midnight oil trying to get coding assignments done. Sleeping regularly and sleeping well is so important for cognitive function. One of the TAPs, trigger action plans, that I've installed: when I go to bed at night, maybe you're the same, you have your phone near your nightstand, you go to bed, you pull up the phone, look at Hacker News or Reddit, and before you know it an hour and a half has gone by, and you're like, oh man, it's going to be a long day tomorrow, I'm really tired. So I installed a TAP for myself which says: when I touch the phone, just pause and do a five-minute mindful meditation. And that works for me; it's something I know how to do.
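Since this is a room full of engineers, here's how I picture it: a trigger action plan is a little dispatch table you preload so System 1 can do a cheap lookup instead of deliberating. This is just a toy sketch in Python; the trigger and action strings, and the `system_one` function, are my own framing, nothing official from Gollwitzer's research:

```python
# Toy model of trigger action plans (TAPs) as preloaded if-then rules.
# Every trigger/action string here is illustrative, not a real API.

taps = {
    "about to open a Mountain Dew": "pause, turn, fill a cup of water",
    "touch the phone at bedtime": "do a five-minute mindful meditation",
    "step on the elevator platform": "turn and take the stairs",
}

def system_one(trigger: str) -> str:
    """Cached, effortless lookup: no deliberation, just the preloaded action."""
    return taps.get(trigger, "fall back to the old habit")

print(system_one("touch the phone at bedtime"))
# -> do a five-minute mindful meditation
```

The point of the sketch: once the rule is loaded, firing it is a constant-time lookup, not an effortful System 2 computation.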
But hopefully you get the idea: find some trigger in your environment, and a specific action with a high probability of taking effect, that nudges you closer to your goal. Maybe just one more example; hopefully this drives it home. This is something a lot of the engineers on my team are doing now. It's become kind of a joke at stand-up; I'll explain in a minute. It goes like this: instead of taking the elevator at work, I want to take the stairs. It's a really simple, small way to get a little bit of exercise, and even if you're an Ironman, you might actually be out of breath at the top of those stairs. So the trigger I set up for myself was: when I get to the elevator, I turn and take the stairs. And it didn't work. It turned out I needed a much more specific trigger. There's this little platform, a step right before the elevator, and as soon as I step on it, that's my trigger to turn and take the stairs. So I tried it out, let my System 1 do it, and the next day I go to work and it's effortless. I didn't have to think about it. It's like, duh, you just go take the stairs. That's what you do. It worked. And now I don't even notice the elevator's there, and the other folks are doing it too. Everyone comes to stand-up out of breath, like, "trigger action plan, right?" Yeah, you did it. Cool. So, to give you all an opportunity to try it: do you have any habits or goals you want to form? Maybe behaviors you want to change? Think of a trigger. Any volunteers? Everybody's perfect. I see one in the back. Yeah. All right, you want to go to the gym more? Okay. So can you think of a specific trigger, something related to your routine, that might work for the gym? Okay. And so what's the specific action? So you've got it.
So you're like, all right, after the dishes are done, I put the last dish in. Let's get a little more specific: what's the action you'll take? Close the dishwasher? All right. Okay. And what action are you going to take to get in your car to get to the gym? Let's refine it a little more, get something more specific, because "get in the car" involves all this effort, right? Maybe it's grab your gym bag, or open the door, something that gets you a little closer to going to the gym and helps you commit a little more than that. I'm putting you on the spot, Susie. All right, all right. Awesome. I like it. Some other ideas for specific triggers I know people have used in trigger action plans for going to the gym: you can also set up cues, or remove hindrances, along the way. For me, in the morning, I've already got my socks laid out. And for me, the trigger to actually do Tabata sprints and run for four minutes, which I hate, but whatever, it's four minutes, I can do it, is putting my socks on. The action is: as soon as I get up, put my socks on. And then, well, I've already got my socks on, I might as well just go run, right? I don't know, it just works for me. But again, it's that idea: think of a very specific trigger and a specific action you can take to move you incrementally toward your goal. And thank you, Susan, that was a great example. I also want to point out that certain triggers don't work very well, namely time-based triggers. You might say, okay, I'm trying to write blog posts, so at two o'clock every Monday I'm going to set an alarm, sit down, and write awesome blog posts. That has a really low chance of happening, because you might be busy at two o'clock.
And there's also not a very specific action there; it's just "write." Well, I don't know, you might never get started. As an alternative, find something more contextual in your environment. It could be, I don't know, at night you're lowering the blinds and you're about to sit on the bed. Okay, that's my trigger: lowering the blinds. The action: grab the pencil and paper. It's something you do every day, at a time when you're probably not going to be interrupted, so you'll have a much better chance of doing it. This also works for mental triggers, as we'll get to in the next part of the talk. So, okay, beyond habit formation, eat, sleep, exercise, I know, we all want to become better engineers. This isn't a Lifehacker conference; this is an engineers-for-engineers conference, right? Well, I'm going to shift gears for a minute in part two and talk about loss aversion. That guy Daniel Kahneman, the psychologist and behavioral economist: he won the Nobel Prize in economics for the work he did with a guy named Amos Tversky to prove this concept called loss aversion. It's kind of a complicated graph; you don't really have to pay attention to it. Loss aversion basically says, surprise, humans are not rational economic actors. In fact, we tend to feel a loss about twice as strongly as we feel a comparable gain. This has been proven in study after study after study. Psychologically, a loss feels twice as bad as a comparable gain. Now, I don't know if any of you have seen an old movie called Stand and Deliver. Anybody? Maybe a couple of people. Anyway, Edward James Olmos is in this movie, trying to teach inner-city kids calculus. He starts out the class by saying to everybody: you all have an A in my class. It is yours to lose. And everyone's all excited; for many of them,
It's like the first time they've ever had an A. They're thrilled. He probably got between 0.2 and 0.4 standard deviations of variance in his students' behavior; they probably worked that much harder because they had an A, and it was something they could lose. We want to hold on to what we have more than we want to grasp at something in the future. And why does this matter to us? Let's tie it back to software engineering. Well, loss aversion helps explain this other effect called the endowment effect, also known as the IKEA effect, where we tend to place an inordinate amount of value on an object we created ourselves versus a similar object. So the Lexvic coffee table that you built? Wow, that is so much more valuable than the Lexvic that my neighbor built, right? Because you built it, you put your own blood, sweat, and tears into it. This really rings true when I think about code reviews. I've got some code that I'm going to show my peers and get feedback on. I'd argue that you're way more prone, way more open, to giving feedback on or critiquing similar-looking code written by another engineer than your own. You tend to put a lot more value in the thing you've created. You hold on to it more. You don't want to break it, you don't want to dismantle it. It's a work of art, right? We want to hang on to what we have. So be aware of this cognitive bias. One technique you can use for defeating it: maybe you're about to open a pull request, and the trigger could be, stop, pretend I'm, I don't know, the really smart engineer on my team. What would they say about this PR? Now you're looking at it objectively. You're not thinking of it as your own work; you've shifted your mindset to someone else's, and you'll be a lot more open to changing it.
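A quick aside for the quantitatively inclined: that "twice as much" figure falls out of the prospect-theory value function behind the graph I mentioned. Here's a small Python sketch using the parameters Tversky and Kahneman estimated in their 1992 follow-up study; the function shape and numbers are theirs, the code framing is mine:

```python
# Prospect-theory value function with Tversky & Kahneman's 1992 estimates:
# alpha = 0.88 (diminishing sensitivity), lambda = 2.25 (loss aversion).

ALPHA, LAMBDA = 0.88, 2.25

def value(x: float) -> float:
    """Subjective value of a gain (x >= 0) or loss (x < 0)."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** ALPHA)

# Losing $100 hurts about 2.25x as much as winning $100 feels good:
print(round(abs(value(-100)) / value(100), 2))  # -> 2.25
```

For equal stakes, the pain-to-pleasure ratio is exactly lambda, which is where the "loss feels roughly twice as bad as a gain" rule of thumb comes from.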
This also gets back to the talk Kiwi gave this morning about giving feedback. She talked about always asking for feedback on your code reviews. Big tip: do it early. If you wait until you're all done, you're suffering from all these biases, and you'll be far less open to changing it. You don't want to let go of what's already a hundred percent done. So try doing it at 80 percent, or 50 percent. Loss aversion also plays a role in the status quo bias. We all know how hard it is to effect change in organizations. Again, we want to hold on to what we have. This is particularly problematic when you couple it with the negativity bias. Anybody have an engineer on your team who's always shooting down everybody's ideas? Or maybe it's you, I know. It's particularly insidious because when we hear something negative, it has far more influence on us than something positive. Couple that with the status quo bias, and change gets that much harder. So what I'd say is: if you notice it happening on your team, call it out. Or at least call it out to yourself, if you're not comfortable saying "hey, negativity bias" out loud. At least be aware of it, so you can see when it happens and correct, update, your judgment on whether something's a good idea. Be aware of whether you're being unduly influenced by negative feedback alone. The last part of loss aversion I'm going to talk about is the sunk cost fallacy. Again, all that work by Daniel Kahneman helps explain this phenomenon as well. You may have heard it as: we've spent ten billion dollars on the Big Dig, what's one billion more? Now, MBAs and product owners get this drilled into their heads in business school: sunk costs are sunk.
You do not factor them into whether you start, or continue, work on a particular project. And so, okay, I think to myself, yeah, I get it. I don't do that. Well, let's turn it around a bit and think of ourselves as developers. Maybe it's a Friday night, you're on the way home, and you have this stroke of genius: ah, I know how to solve this one problem, or, I know a really awesome thing to build. You go home, get on the computer, and you're hacking away all night, all weekend, putting together this beautiful work of art. You've got unit tests, great code quality, you've written wiki docs, all this stuff. This is going to be awesome. My team is going to love me. This is going to change the world, right? And then Monday, after a good night's sleep, you're driving in, and there's this little nagging thought at the back of your mind that says: maybe this doesn't actually solve the original problem. But whatever, look at the code, it's awesome, and I spent all weekend on it. I can't just throw it away. It doesn't solve the problem, but so what? I'm sure we can find a way. You're rationalizing now, right? This is not a good thing to do. "I'm sure we can find a way to ship this code." You're falling victim to the sunk cost fallacy. It doesn't matter how much time you spent on the project: if it's not a good, valuable feature, don't put it in the codebase. Just don't ship it. Try to apply more discipline, and be aware of this one particularly when you're working on your own projects. So, we've talked a lot about loss aversion. Now I'm going to talk a little more about some other cognitive biases. These are like bugs in our brains.
One thing to keep in mind as we go through these cognitive biases: you may want to think of trigger action plans you could set up, conditions like, if I notice I'm falling victim to this bias, I'll be aware and take some action to overcome it. My hope is also that some of these biases become part of the vernacular on your teams, so you'll be able to call out the things you see. I'm going to start with an exercise. Now, if you've done this before, it's the Wason rule test, maybe just refrain from participating; I don't want to ruin it for everybody else. It goes like this: I'm thinking of a rule that these three numbers satisfy, and I want you to guess the rule. Now, how do you guess the rule? It could be so many rules. Well, you can give me any three-number sequences you want, until you're confident you know what the rule is. So I'm going to ask for a volunteer to come up. Anybody? Try this out. I promise it'll be fun. It'll be cool. It's all in the name of science. Beware the bystander effect. Oh, come on up. Thanks. Hey, John. All right, John, so I'll ask you first off: do you know what the rule is? Can you guess? No, that's not the rule. Maybe you want to give a couple of guesses at what you think it might be? Instead of just guessing the rule right away, let's come up with some sequences you think might fit the rule, and I'll tell you whether they do, so we have a higher chance of getting it right. Okay. Yes, that fits the rule. Oh, thanks, Eric. Yep, that fits the rule. Yes, that fits the rule. Yes, that also fits the rule. You might know what it is. So, John, thank you very much. By the way, I just want to say this particular problem is really hard; only about 20 percent of the population gets it right. The answer, and it's not really that relevant, was just any increasing sequence of integers. That was it.
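For the curious, the whole exercise fits in a few lines of Python. This is my own sketch of it, with the rule as revealed and some illustrative guesses:

```python
# Wason's 2-4-6 task. The hidden rule is broader than most hypotheses:
# it's simply "any strictly increasing sequence of numbers".

def hidden_rule(seq) -> bool:
    return all(a < b for a, b in zip(seq, seq[1:]))

# Confirming guesses -- these fit "even numbers, counting up by two"
# AND the real rule, so they teach you nothing new:
print(hidden_rule([2, 4, 6]))     # -> True
print(hidden_rule([10, 12, 14]))  # -> True

# The informative move is trying to FALSIFY your hypothesis:
print(hidden_rule([1, 2, 3]))     # -> True  (so it isn't "even" or "add two")
print(hidden_rule([6, 4, 2]))     # -> False (decreasing finally breaks it)
```

Notice that only the falsifying guesses narrow down the rule; the confirming ones leave every competing hypothesis alive.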
It could have been one, two, three, four, five. But thanks, John. And why does this happen? Well, sorry to put you on the spot; again, 80 percent of you would also have failed at this, I promise. We're falling victim to confirmation bias: we tend to give more weight to evidence that supports our hypothesis than to contrary evidence. So you get a guess, you have a hypothesis, you think, oh, I bet it's incrementing even numbers, and you'll guess lots of even numbers, but you won't think to guess odd numbers, or decreasing numbers, or negatives. Again, it's a bug in our brains, and most of the population has it. I'd say it's really relevant to software engineering when you're writing your own code and trying to test it yourself. You have your own hypotheses about where it might break, and you'll test those really well, but you won't think of the other ways to break it. Again, you'll look for confirming evidence instead of contrary evidence. Factor in loss aversion, you don't want to lose what you've already written, and it becomes really hard to test your own code. So it's really valuable to pair with someone else, or to try to dissociate: think of it as somebody else's code. How might I test it or break it? You'll have a better chance of finding your bugs. Another bias that comes up a lot in life is the overconfidence bias, a phenomenon also known as the illusion of validity. Now, does anybody here think they are an above-average driver? Think about it, this is Massachusetts, right? Anybody an above-average driver? Okay, well, it turns out something like 93 percent of the population believes they are above-average drivers. Only 50 percent can be, anyway.
Another example: mutual fund investors. Something like 74 percent of mutual fund investors, and these are smart, mathematically, statistically inclined people who probably know all about these heuristics, believe they're going to beat the market average. Even though statistically almost nobody beats the market, right? A coin toss is just as good at determining whether a given mutual fund will exceed the S&P or whatever index fund you pick. And yet, even though we know we're fallible, we continue to act on these wrong concepts. I also just want to point out, I remember growing up, my dad would never ask for directions when driving. You could be in a completely new area, you've never been there before; why would you think you know which way to go? We have this thought: it's roads, and there are people, so I must know where to go. It turns out, by the way, that men tend to fall victim to this a lot more than women, statistically. I don't know why, but we tend to act on our useless ideas a lot more than women do. Just something to be aware of as you're coding. I also want to point out expert judgments; be aware of them when you're going to seek an opinion from somebody else. Statistical measurements almost always beat expert intuitive judgments; there are very few examples where that's not the case. The problems where being an expert helps are the ones with repeated occurrences. Like chicken sexing. Anybody know what chicken sexing is? No? Anyway, it's this weird task where you try to figure out whether a chick is going to be a boy or a girl, and it's kind of sad.
This matters to chicken farmers because they only want the girl chickens, not the boy chickens, and it turns out it's a really hard skill to teach. The way you learn it is you pair up with another chicken-sexing expert who tells you whether you're right or wrong, and you just do it over and over and over again, and eventually something happens in your System 1. It figures out, I don't know how, whether a chick is a boy or a girl, but it figures it out. In this case an expert judgment really matters, because it's a repeated, constant task with a really tight feedback loop where you can prime your brain and develop expert judgment. That is not like any software I've ever written. Every problem we have is different from the one before it. We never get the luxury of building the same exact system again. So be aware. Certainly take into account architects or other engineers who've done similar projects before; factor in the objective details about why an idea might be good or bad. Just be aware of this particular problem. All of these issues, coupled with the halo effect, create a recipe for disaster in a number of engineering situations. The halo effect goes like this: we give undue merit to beautiful, high-powered, or influential people, way more than we rationally should. Think for a minute about software evaluations, where some big-name company brings in their high-falutin sales guy. He's got the suit on, he's got all these really fancy PowerPoints, he's got the look going, right? Halo effect. He's triggering it: I'm going to look awesome even if I don't have a clue what this software does, and you're going to like me.
Now couple that with the bandwagon effect. You get a bunch of people in a room, maybe there's a CTO or some high-ranking engineering officer who starts hearing what the sales guy says and nods a little, and you look around the room and everybody else is nodding a little too. No, no, this is bad. You need to go into these situations with an objective list of criteria that you agree on beforehand, so you're not swayed by some fancy new feature you don't really value. But man, he's got an awesome look, and this guy sounds awesome, so the product therefore must be awesome? Keep an eye out for that. Also note that this plays a really important role in interviewing. I'm sure many of you developers are involved in evaluating candidates. Beware the expert judgment: unless you're interviewing and evaluating lots and lots of candidates consistently, with a tight feedback loop, for the same position, the same job, expert judgments don't count for much. You're much better off statistically if you go in with a set of five to seven criteria you all agree are valuable, and everybody goes in with a plan to evaluate candidates on those criteria. And if somebody comes in and they're an awesome Prolog programmer, but that's not on your criteria, don't factor it in. No points for the Prolog, right? If you like Prolog you're probably an awesome, creative, curious engineer, and that's cool, and that should factor in; but the extra language skill that sounds really cool shouldn't play a role in your decision-making. Now, those of you who read Hacker News know Y Combinator. Its founder, Paul Graham, once said of one of his co-founders, Robert Morris: this guy is my hero, because he's never wrong. And I'm thinking to myself, what do you mean, never wrong? How can you always be right?
Neil deGrasse Tyson doesn't know, right? My point here is these three words, or I guess four with the contraction: I don't know. There's extreme value in saying these words, particularly in engineering circles. I'm sure you've all been in a meeting where, I don't know, a bunch of managers are there, it's some high-importance issue, and they turn to you, the developer, and they're like, Rich, what is the replication factor on our production Cassandra cluster? And you're like, three? Three? Yeah, three. It's three. Oh yeah, it's three. No. Just say I don't know. And have you ever noticed that really smart people are really good at saying that more often? Be accurate. Be like a scientist, right? Nothing is ever 100% certain. Pretend you're a scientist, say I don't know a lot, and explore your curiosity. Now, earlier I gave you guys a poll, and I asked, generally, what's the percentage chance that an engineer will change their mind once they voice their opinion? This gets back to the illusion of validity and overconfidence bias. We change our minds far less often than we think, and once you voice your opinion, you're anchoring yourself to it. You've now invested capital in it, social capital. Go back to the ancestral environment, right? You voice some idea, and if you backtrack on it, not only have you got loss aversion going, your ego is now in there, and the tribe might kick you out, right? So once you voice a solution or an opinion, it's really hard to backtrack on it. So here's a really cool trick for this. If you're ever in a brainstorming scenario, trying to figure out how to solve a problem, and it's great if teams agree on this beforehand: spend five minutes by the clock. Not "about five minutes," literally time it on a clock. Spend five minutes talking about the problem without raising a single solution. Trust me, it works.
What happens is, if you voice your solutions early, you're going to be stuck on them and you're going to start rationalizing and debating. Rationalizing, by the way, is like calling lying truth-telling. It has nothing to do with being rational, right? When you're debating with people, you're not trying to discover truth. Surprise: we didn't evolve to find universal truth. No, we evolved to debate, to triumph and win, and to prevent everybody else from stepping in with their ideas. So when you hold off on proposing solutions, you won't be as invested in them, and it won't be as hard to take them back and think objectively about other solutions that might come up down the road. Five minute timer. Really big. Now I'm switching gears to the results of that poll. Refresh. Something went wrong. Oh, bummer. Well, that would have been really cool. Okay, so anyway, I'll just explain the trick, and this won't be nearly as exciting, but you'll get the idea. I asked half of you: is there more or less than a 3% chance that you'll change your mind after voicing your opinion? And the other half of you I asked: is there more or less than a 65% chance that you'll change your mind? And then I asked everybody to fill in their actual guess. I didn't care about the response to the first question. I only cared about the number you put in. This is called the anchoring effect, and it's demonstrated and repeatable enough that I was confident trying it out on you. Just asking that question and putting the number 65% there means it's in your system one, it's cached, it's available, and you're not going to be able to get it out of your mind. So when you try to come up with a guess, you're going to be weighted far more towards 65%. Likewise, the 3% folks, you're going to give a number much closer to that 3%.
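As a back-of-the-envelope illustration, you can model that pull toward the anchor in a few lines of code. The 30% weight and the distribution of "true" beliefs here are my own assumptions, roughly in line with the 20 to 30 percent bias figure from the studies, not measured data:

```python
import random

def anchored_guess(true_belief, anchor, weight=0.3):
    # Toy model: the reported guess drifts `weight` of the way
    # from the person's true belief toward the anchor.
    return true_belief + weight * (anchor - true_belief)

random.seed(0)
# Hypothetical "true" beliefs about the chance of changing one's mind.
beliefs = [random.uniform(0.05, 0.40) for _ in range(1000)]

low_group  = [anchored_guess(b, 0.03) for b in beliefs]   # anchored at 3%
high_group = [anchored_guess(b, 0.65) for b in beliefs]   # anchored at 65%

print(f"3% anchor group mean:  {sum(low_group) / len(low_group):.2f}")
print(f"65% anchor group mean: {sum(high_group) / len(high_group):.2f}")
```

Same underlying beliefs in both groups, but the reported averages come out noticeably apart, which is the whole trick of the split poll.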
So if you guys do planning poker and anybody blurts out a story point estimate, you've got to call foul, you've got to throw the estimate out, because now they've anchored everybody. And as much as you try, studies show that even among rational communities who know this is happening to them, the bias is still like 20 to 30 percent. It's ridiculous how much you're swayed by this anchoring effect. So just keep that in mind. How are we doing on time? We've got five minutes. All right. For the last part of this talk, I'm going to go over a little bit about the autonomic nervous system. This is that fight-or-flight response. It's responsible for most of the involuntary responses that happen in our bodies. Picture for a minute that you're not a software engineer, but you are Homo sapiens in the ancestral environment, and there's a tiger over there that's going to come get you, right? And what happens? Maybe you've got a club or something in your hand, you see this tiger, and you're like, oh man, oh man, what's happening? Well, it's your autonomic nervous system. Your sympathetic nervous system, that branch of it, is kicking into high gear. And what's going to happen? Your body is going to start to tense up, your shoulders are going to hunch forward, you'll start to protect your abdomen. Your heart rate is going to increase, you're going to get dry mouth, your pupils will start to dilate, you'll get sweat on your arms. Your blood vessels are going to constrict and pull blood away from everything that's not absolutely essential to survival, right? Including that system two part of your brain that thinks rationally. So you're all system one. You're like, man, do I club this thing or do I run? And what happens? You're like, I'm running. So you run, run, run, run, run, you're going, you're going, you're going.
And then, I don't know, 20 minutes later, you look behind you. Is he gone? Is he gone? Right? All right. Your parasympathetic nervous system is now kicking in. You're shifting the spectrum over to the other side. You're a lot calmer, your heart rate's going lower, and you're engaging your system two. The rational part of your brain is in high gear now, because it's thinking: what did I do to get myself near a tiger, and how do I make sure I don't do that again? Right? This is a great mindset, by the way, to be in for solving problems. That initial mindset, the system one, the-tiger-is-right-there mindset? Not good. You don't have system two. So to relate this to software engineering: your boss comes in and he's like, oh my God, there's a production outage, we've got to get all hands on deck, the money is just flowing out the window, everybody here is staying all night, you're going to fix this thing, we've got to get it done, and we're going to get status updates every two minutes. And your heart rate is climbing, and you're like, oh man, okay, okay, what's happening? Blood is moving away from your brain, the one thing you need to fix that problem. It's not engaged at all. This is bad on every level, right? That's my point here. Be aware of these changes in your nervous system. Be aware of things like your heart rate, maybe tense breathing. If you're stressed, if you're overtired, when you're trying to come up with rational solutions to problems, you may need to step away. Ask somebody else to give it a shot. By being aware of this, and by teaching people about this effect, hopefully we can create a calmer environment for solving mission-critical defects, so we can solve them faster and more correctly in the future. So, just a recap.
We went over the brain, talked about system one and system two, a new way of thinking about how you think, and about when those two systems are engaged. We went over loss aversion, several bugs in our brains, and some techniques for avoiding them, as well as some hacks like trigger action plans. Maybe next time you catch yourself saying "I should do this" or "I should do that," set up a trigger action plan and come up with some incremental step towards that goal. Also remember the value of saying I don't know, and spend five minutes thinking before you anchor yourself with a solution. And by the way, if you're interested in this topic, there's a lot of stuff online. Check out rationality.org and lesswrong.com. Also, if you're interested in giving to charities, I suggest checking out givewell.org. They take a rational approach to evaluating charities, to minimize the amount of human suffering that's happening in the world. So take a look at that. Any questions? Come up to me anytime, ask me anything. You can follow me on the Twitter, and that's it. Thank you guys. [Audience question about gambling] And gambling when you have less than a 50% chance of winning. Why do you do that? I don't know. I'm trying to think, why did we evolve to do this kind of behavior? It could be, right, that everybody thinks they're above average. You think that you're beating the system. You have some fallible reason why you're doing it, and you're able to justify or rationalize to yourself that you're going to win the next time. It's just pure junk. And even when you know that's happening, you tell gamblers, hey, this isn't statistically right, mathematically this isn't going to work, and they do it anyway. So it's really a big problem, and like I said, you can't even convince Wall Street that they're doing the wrong thing. Just invest your resources somewhere else. Thank you everybody. Hope you enjoyed it.