I was sitting on a train, and at one stop a bunch of people got on, and as they were all inching towards their seats, I glanced up, did a double take, and exclaimed, "You're V.S. Ramachandran!" And the guy was like, yes, yes I am. You seem a little bit surprised to be recognized. This was probably before he had spoken at TED and published best sellers and been featured in documentaries. I had stumbled across his work through a BBC radio lecture series a few years prior. Dr. Ramachandran is a neuroscientist, and I think he asked me if I was a student at the university where he does research, and I'm like, no, I'm a programmer. And then I blurted out, I live with a synesthete. And then the encounter was over. Now, this is somewhat less of a non sequitur than you might believe. Ramachandran did groundbreaking work on synesthesia, which is a little bit of neural cross-wiring. For someone with synesthesia, a sensory receptor will be activated, the signal will travel to their brain where it triggers a sensory perception, and then it will bleed over into a nearby part of the brain where it triggers a secondary perception. So they might activate a pain receptor and feel pain, and that might also cause them to hear a certain tone. Or they'll hear a sound, and it might have a particular texture or some particular taste. One of my classmates at university organized everything on her desk in pairs, because two is such a cheerful, cuddly number. Three, on the other hand, was positively dreary; she didn't like three at all. The secondary perception is incredibly stable over time. Some people with synesthesia can use this extra layer of perception to help orient themselves. One woman I read about knows that it's time to see a dentist when her toothache turns orange. And there's an Australian opera singer whose synesthesia gives her an extraordinarily detailed memory for sounds. Her name is Priscilla Dunstan.
A few years ago, Dunstan had a baby. Those first few weeks and months can be incredibly tough. You're hopelessly sleep deprived, the baby cries, you have no idea how to fix it. You feel helpless and isolated. People promise parents that they will start recognizing the different cries. But Dunstan, despite her extraordinary memory for sounds, wasn't hearing it. Finally, on a desperate morning, she got out a pad of paper and started keeping a log of every single one of her baby's cries. Slowly, she came to the realization that there was no distinction: a cry is just a cry. However, as she listened and experimented, she noticed that the fussiness at the beginning of a cry, the pre-cry fussiness, was actually quite distinct. "I'm hungry" was really different from "I have gastrointestinal discomfort." And so over time, she reliably assigned meaning to five different pre-cry sounds that her son made. Eventually she started getting out of the house more, to places where other babies were hungry and tired and needed to be burped. And to her astonishment, they seemed to be expressing themselves in ways that were remarkably similar. She was hearing it, and others were not. When we talk about expertise and mastery, we tend to talk about knowing facts, being able to explicitly verbalize concepts, and executing specific sequences of steps. One of the characters in the Kingkiller Chronicle observes that playing music is a lot like telling a joke. Anyone can remember the words, anyone can repeat it, but making someone laugh requires more than that. And telling a joke faster doesn't make it funnier. An expert isn't just a faster novice. Experts seem to have this magical thing, and we call it insight or judgment, intuition, brilliance, and they got that way because of some unarticulated range of experiences: practice, perhaps, or seasoning, the enigmatic passage of time. In an experiment, researchers gave terrain analysts two minutes to look at an aerial photograph.
Two minutes isn't a lot. It typically takes hours to do an analysis. But after two minutes, one of the analysts started his debrief with an offhand comment that anyone going into the area would need to be prepared for certain types of bacterial infections. And a researcher was like, what? You can see bacteria in a picture taken from 40,000 feet? He was like, well, the photo showed a tropical climate. The vegetation was mature and uniform, so the contour of the top of the tree canopy could be taken as a reflection of the underlying soil. And since the soil layer would be relatively thin, the tree canopy also reflected the underlying bedrock, which appeared to be tilted, bedded limestone. The bedrock determined the pattern of the springs and the ponds, and there appeared to be a pond that didn't have a stream running away from it. So given the climate, the vegetation, and the stagnant water, the presence of bacteria was a sure bet. These experts have a hard time explaining what they do or how they know. They just know. And their skill is consistent and reliable and kind of mysterious. So if you can't explain it, and you don't even necessarily know what it is, how do you teach it? How do you teach seasoning and perspicacity and judgment? How do you teach intuition? In Japan at the turn of the previous century, there were people who could look at a day-old chick and determine whether it was male or female. Now, this might seem neither particularly important nor particularly impressive, but it led to massive price reductions worldwide, because unless you're an expert chick sexer, you will be feeding chicks for six weeks before you can reliably tell whether it's a girl chicken or a boy chicken. And boy chickens are apparently bad news for the poultry industry. Not only do cockerels not lay eggs, they are also typically smaller, their meat is stringier, and I'm told that they are troublemakers. All of that gets expensive at an industrial scale.
So the Japanese founded a school in the 1920s to train new chick sexers. The thing is, most of the time chick sexers cannot tell you how they know. Most of the time they're incapable of pointing to any particular diagnostic structure that they rely on to make this determination. So the training consisted of pairing up a novice with an expert and having the novice guess: male or female. And the expert would tell them yes or no. For two years. After that point, the newly graduated chick sexers were sexing up to 1,200 chicks per hour with an accuracy rate of 97%. All without being able to describe how they knew. Traditionally, we're really good at training the deliberate part of the brain. We write tutorials and coursework. We devise drills and practice problems. We assign homework. We're not so good at training the automatic part of the brain. We're exposed to all of these chaotic signals our whole lives, and the brain just kind of picks through them and figures stuff out. And it turns out there is an entire field of psychology dedicated to understanding the conditions under which the brain figures this stuff out. It all started in the 1960s with a researcher named Eleanor Gibson. She designed a delightful experiment that illustrated unambiguously the fundamental building blocks of our ability to develop accurate snap judgments. She'd start by showing a research subject a completely meaningless squiggle. Then she'd explain that she would show them a series of squiggles, and would they please identify all of the squiggles that matched this reference squiggle precisely. She'd flip through them one by one, and the research subject would make their guesses. And she didn't give them any feedback at all. After they got through the entire deck, she would start over: this is the reference squiggle, please identify all of the squiggles that match. Still no feedback.
By the third time through, the research subject would have correctly identified every single target squiggle. The brain was discovering the meaningful squiggle dimensions. As they were shown the squiggles, they would suddenly notice that some squiggles coiled one way and others coiled the other way. A squiggle might have three or four or five coils, and the overall shape of the squiggle varied: sometimes they were round and sometimes they were kind of squished. Our brains are constantly going through this process of differentiation, figuring out which characteristics matter and which don't. And as the brain discovers the dimensions that are important, it starts paying closer attention to those dimensions. And as we pay closer attention, our brains begin to make finer discriminations. Our perceptual resolution increases. Photographers gain a richer experience of light. Musicians gain a richer experience of sound. Industrial tasters gain the ability to evaluate a product along 14 different dimensions of flavor. The field of study is called perceptual learning. It explores the ways in which the perception of experts differs from that of novices. There is a fundamental difference between how the brains of novices and experts extract information. For example, novice drummers sight-read rhythms note by note. They think about rhythms in terms of how long each note lasts. Experienced drummers don't read note for note; they read beat for beat. Each beat has a distinct rhythmic figure, and they think about that figure as one coherent idea. And there aren't that many different rhythmic configurations that typically occur on a single beat, so over time, drummers begin to recognize these figures at a glance. Novices see lots of low-level pieces of data, whereas experts see chunks and higher-order relations. When navigating an unfamiliar public transit system, we look at every surface, every sign, every glowing light, every arrow.
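To make the chunking idea concrete, here is a toy sketch (the beat notation and the figure vocabulary are my own illustration, not from any drumming method): a novice decodes each sixteenth-note slot one at a time, while an expert matches the whole beat against a small vocabulary of familiar figures.

```python
# Each one-beat pattern is four sixteenth-note slots: 'x' = hit, '.' = rest.
# An expert's "vocabulary" of rhythmic figures, recognized as single chunks.
FIGURES = {
    "x...": "quarter note",
    "x.x.": "two eighth notes",
    "xxxx": "four sixteenth notes",
    "x.xx": "eighth plus two sixteenths",
}

def read_as_expert(measure):
    """Recognize each beat as one chunk; unknown patterns force slow reading."""
    return [FIGURES.get(beat, "unfamiliar figure") for beat in measure]

print(read_as_expert(["x.x.", "xxxx", "x...", "x.xx"]))
```

The point of the sketch is the dictionary lookup: the expert processes one chunk per beat instead of four slots per beat.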
We carefully parse and analyze, and for good measure, we cross our fingers, and we still end up taking the right train in the wrong direction, or getting off at the wrong stop, or exiting the station at the northeast side rather than the southeast side. Over time, as we get used to traveling in the city, our brains start attenuating a lot of the irrelevant information, the signs and the signals that have nothing to do with actually getting to where we're going, and start automatically focusing in on the few cues that happen to be relevant. Novices pay attention to both relevant and irrelevant data. They can't help themselves. Experts often don't even notice the irrelevant data. As we gain expertise, our brains amplify the relevant characteristics and start attenuating, or even filtering out entirely, the irrelevant ones. And a lot of this happens before the signals even reach the part of the brain where you're aware of perceiving things. So both chunking and selectivity are about what information we extract. Another stark difference between novices and experts is how efficiently they extract information. Experienced pilots can determine aircraft attitude and situation with a brief glance at their instrument panels. Inexperienced pilots, on the other hand, read and cross-check their instruments very carefully, one by one. Novices process things serially, whereas experts have a much greater tendency to process things in parallel. Novices process slowly, whereas experts extract information quickly. And finally, whether you're sight-reading drum notation, navigating a public transit system, or cross-checking aircraft instrument panels, novices are going to be drained after doing it, while for the experts, the effort will hardly register at all. So that's the basic science. Discovery effects are about what information we extract: finding patterns and filtering incoming signals. Fluency effects are about how efficiently we extract that information.
The reasoning of novices is slow, serial, capacity-limited, and effortful, whereas the perceptual processes of experts are fast and parallel and don't drain cognitive resources. Where this gets interesting is the point where you take these basic ideas and figure out how to use them to explicitly train intuition. There's a cognitive scientist named Philip Kellman who has spent the past 30 years exploring this question. One of the things that makes Kellman's work so fascinating is that he doesn't bring you into a research lab and train your intuitive sense of the angles of lines on a computer screen. He picks problems that are actual real-world issues. Some of his early work in the 1990s was inspired by the fact that every year there were pilots who would land at the wrong airports or get lost on cross-country flights. There's a skill that pilots have, and the better they are at it, the less likely they are to get lost flying cross-country, and that skill is visual navigation. You look out the cockpit window, you eyeball the terrain, you look at a map, and you figure out if you're in the right place. Now, pilots aren't explicitly trained in this skill; they develop it over time through experience. To test the visual navigation skills of experienced pilots, Kellman showed them 20 seconds of video taken from an airplane cockpit and then showed them a map with three locations marked on it, one of which matched the video that they had just been shown. The pilots spent an average of 30 seconds making their choice, and they chose the right location about 50% of the time. In other words, they may learn this through experience, but they don't necessarily learn it particularly well.
Kellman's team then put each pilot through three hours' worth of perceptual training, repeating these short interactive trials: 20 seconds of video, three locations on a map. By the end of the three hours, the pilots were getting the right answer about 80% of the time, and they were spending less than 15 seconds making their choice. In an interesting twist, Kellman's team also ran the experiment with non-pilots, and after three hours of perceptual training, they got to 60% accuracy with a reaction time below 20 seconds. In other words, non-pilots with three hours of explicit perceptual training were outperforming pilots with 2,500 hours of flight experience who did not yet have the perceptual training. Another real-world example that Kellman and his team tackled was teaching fractions to middle schoolers. We don't really give students a good mental model for understanding how fractions work or why they work. A lot of students just accept that there are rules; the rules seem arbitrary, so when presented with a problem, they pick one at random and hope that it applies. Kellman's work wasn't directly aimed at teaching students how to solve fraction problems. The goal was to get them to recognize the shape of two specific types of fraction problems: is it giving them the whole and asking them to find the part, or is it giving them the part and asking them to find the whole? He designed interactive trials using problem formulations in multiple representations, and the students' task was to determine which of three problems in one representation matched the original problem statement in some other representation. It was wildly effective. Even though the training didn't address solving actual fraction problems, the students' problem-solving scores improved from 40% on the pre-trial tests to 70% after the training. And when they were tested again several months later, the scores held. This learning was permanent.
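The two problem shapes that the fraction training targeted can be sketched as two tiny functions (a hedged illustration of the distinction; the function names and the numbers are mine, not Kellman's materials):

```python
from fractions import Fraction

def find_part(whole, fraction):
    """Shape 1: 'What is 3/4 of 20?' -- the whole is given, find the part."""
    return whole * fraction

def find_whole(part, fraction):
    """Shape 2: '15 is 3/4 of what?' -- the part is given, find the whole."""
    return part / fraction

print(find_part(20, Fraction(3, 4)))   # 15
print(find_whole(15, Fraction(3, 4)))  # 20
```

The training asked students only to recognize which shape a problem had, not to compute these answers; the computation is shown here just to make the two shapes unambiguous.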
Over the course of months and years, our brains sort through the noise, identifying the signals, and gradually, our instincts grow. Dr. Kellman's work shows that not only is it possible to take this haphazard process and make it deliberate, but that in doing so we can compress the learning that happens naturally into a much shorter amount of time. And the question that keeps me up at night is how all of this applies to programming. What is it that programmers with some particular expertise perceive that those without it don't? A couple of years ago, I wrote a blog post inspired by a tweet that a friend of mine wrote, and before posting it, I showed him a code example, and he was like, nice, I like it. You have a race condition on line 26. Some people notice problems at a glance. But it goes deeper than that. When reviewing code, the types of problems that people notice seem to be correlated with their degree of expertise in various areas. People with less expertise tend to point out low-level, nitpicky, standalone problems, whereas people with more expertise tend to focus on problems that aren't necessarily in the code itself, but in the system as a whole. There's an example from recently where someone was adding a caching library to a code base, and in the pull request review, one reviewer pointed out that a variable name was kind of confusing, while another reviewer was like, hmm, do we have any metrics that show that adding caching here will even fix our problem? At work we have an enormous REST API with extraordinarily complex authorization logic. And crucially, we do not want to leak private data. So we have about a thousand integration tests that specifically test the authorization logic in the API layer. At one point I noticed that these tests were making 2,500 database calls each.
Our continuous integration setup has a time budget, and these tests were regularly hitting that budget, which caused CI to time out, which meant rerunning the entire test suite. That was bad enough during development, but during deploys, it's really, really painful. So I submitted a patch that reduced the database calls in this area of the tests by about 40%, which when you think about it is still pretty bad. But at least the tests were mostly not timing out CI anymore during deploys. My manager said that I had earned my whole year's salary in that one pull request. But the thing is, I got lucky. I'm not particularly good at troubleshooting performance problems. I discovered the cause of this one while chasing down a completely unrelated problem. There are people who are really good at profiling. They seem to know instinctively what tools to reach for and where to apply them. They'll eyeball a morass of data, thousands of lines of a profiler report, and point to one line and say, that number seems suspiciously high. How do they know? It's impossible to say; they just do. One thing that I am pretty good at is refactoring. I think every technical talk I've done for the past seven years has been about refactoring in some way or another. There's clearly a lot about refactoring that I can explain. But people invariably ask me, how do you know where to begin? I don't know the answer to that. I don't feel like I know where to begin. I just look around and pick a place. About a year ago, PV and I were wrestling with this bug. It was the most maddening thing. It was consistently reproducible in production. It was not at all reproducible locally. And we narrowed it down: a particular type of record kept getting saved to the database with the wrong value in a foreign key. We were a little bit suspicious of all of the layers of metaprogramming and indirection between us and the database.
Because we could confirm that we were setting the right value on our object right before it got saved. But then a moment later, it would come out of the database with a different value in the foreign key. It pointed to something else. A completely unrelated thing. Always the same unrelated thing. We were completely stumped. How was this unrelated thing getting involved? It made no sense whatsoever. In desperation, we started walking other colleagues through our problem. And one of our colleagues was like, wait, what's the ID of that unrelated thing? And we showed him. There was a moment of silence. And he said, I recognize that number. Of course he did. That number is 2^31 - 1. MAX_INT. We had mistakenly defined our foreign key column as an int when it should have been a bigint. Within less than three seconds, our problem went from being incomprehensible to being forehead-slappingly obvious. In production, the table that the foreign key points to has billions of records. Of course the value would get truncated. And of course we'd never trigger this in development. The turning point in almost any debugging story is not that someone sees what the problem is. It's that they see a thing which reminds them of something that they happen to know, which helps them pose the right hypothesis, which leads them to try the right experiment, from which they discover the root cause. Something triggers their spidey sense. A few years ago, I had a conversation with Sandi Metz. I had been obsessing over an exercise that asks people to generate the 99 Bottles of Beer song. The song is deceptively simple, but it has some algorithmic complexity that can get you into worlds of trouble. And I complained to Sandi that every single solution I had seen was terrible. I mean, they were creative, they were fun, they worked, they solved the problem, but everyone seemed to be shuffling the complexity around, trying to hide it in some way or another.
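The mechanics of that bug can be sketched in a few lines (the ID values are illustrative, my own choices): a 32-bit signed integer column tops out at 2^31 - 1, so IDs from a table with billions of rows can't be stored faithfully, while the small IDs in a development database fit just fine.

```python
# The number the colleague recognized: the largest 32-bit signed integer.
INT32_MAX = 2**31 - 1  # 2147483647

def fits_in_int32(value):
    """Would this ID survive being stored in an int (not bigint) column?"""
    return -2**31 <= value <= INT32_MAX

dev_id = 48_213           # small development database: fits
prod_id = 3_000_000_017   # billions of rows in production: does not fit

print(fits_in_int32(dev_id))   # True
print(fits_in_int32(prod_id))  # False
```

Which is exactly why the bug was invisible locally and consistent in production.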
And I told her I despaired of ever seeing a truly good solution; I wasn't even sure there was one, frankly. She came back a few days later with not one but four good solutions. One of them, in particular, had an abstraction that didn't hide the complexity, but seemed to make it go away. And I asked her, how did you know? And she was like, it was obvious. It then took her three years to reverse engineer that instinct into a book that breaks it all down into explainable concepts. A lot of programming does have to do with explainable concepts, with knowing facts, being able to explicitly verbalize ideas, and executing specific sequences of steps. But a huge amount of programming expertise seems to be rooted in perception. It relies on this gut sense, this ability to make snap judgments, to just know. So, in a fit of optimism, I've put together an amateur's guide to designing perceptual learning training materials based on Kellman's work. The basic component is the brief classification episode. And by brief, I don't actually mean instantaneous. If you can puzzle something through logically and deliberately, then go right ahead; give your brain the time to do so. But if not, that's fine. Just guess; your brain will figure it out eventually. It's crucial that the learner make active judgments. Showing someone something and then telling them the answer is not going to be much use. Once they've made their determination, you give them explicit feedback. And that's basically it. But for this to actually work, you need a really good data set. It needs to have a huge number of examples with absolutely no duplicates. The brain needs an enormous amount of complex variation. This variation should include not just the relevant features; you want to vary all of the irrelevant characteristics too. The brain will detect the underlying invariants, the things that don't change. That's what you want your brain to start detecting; that's the whole point of the exercise. You need all that noise.
You need those distractors. Otherwise the brain has no way of figuring out which distinctions are meaningful and which are not, and it will start assigning meaning to certain irrelevant characteristics that happen not to vary enough in that specific data set. This is how you teach your brain to be biased. So that's the idea: short interactive trials targeting a specific perceptual skill, using a huge, messy, complex data set with no duplicates. If we want to apply this to the practice of programming, we need to identify the target skills to train. There seem to be two approaches to identifying those skills. One is to take a skill that consists of a well-defined activity, such as the visual navigation that pilots do, where they eyeball the terrain and identify a location on a map. The other is to identify a small, unambiguous taxonomy. Is the day-old chick male or female? Is the fraction problem asking you to find the whole or find the part? Is the baby fussing because she's hungry or because she's tired? Is the data structure valid syntax or invalid syntax? And once you understand the fundamental distinction that you're targeting, the problem is reduced to generating a data set. There's some napkin math floating around suggesting that for the past 40 years, the number of programmers in the world has doubled every five years. Another way of expressing this is that at any one time, 50% of all programmers have less than five years of experience. Over the course of 10 or 15 years, many of them will likely develop perceptual expertise in some areas. But you can imagine that a huge proportion of these developers will rarely be exposed to the good patterns, the useful distinctions. They'll be stuck with all of the internet as their mentor. The signal might forever be drowned by the noise. Their brains won't stand a chance. We can do better.
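The trial structure described above can be sketched as a tiny loop: one brief classification episode at a time, where the learner is shown an example, forced to make a judgment, and then given explicit feedback. The target distinction here is one the talk mentions, valid versus invalid syntax; the example data and the always-guess-valid "learner" are my own illustrations.

```python
import json

# A small (in real training: huge, messy, duplicate-free) data set.
EXAMPLES = [
    '{"a": 1, "b": [2, 3]}',
    '{"a": 1,}',                  # trailing comma: invalid
    '[1, 2, 3]',
    "{'a': 1}",                   # single quotes: invalid
    '{"nested": {"ok": true}}',
    '{"unclosed": [1, 2',         # unbalanced brackets: invalid
]

def is_valid_json(text):
    """Oracle used to generate the feedback for each trial."""
    try:
        json.loads(text)
        return True
    except ValueError:
        return False

def run_trials(examples, learner):
    """Each episode: the learner classifies, then hears right or wrong."""
    correct = 0
    for text in examples:
        guess = learner(text)                  # active judgment
        correct += (guess == is_valid_json(text))  # explicit feedback
    return correct

# A learner who always guesses "valid" gets only the valid half right.
print(run_trials(EXAMPLES, lambda text: True))
```

The interesting part isn't the loop; it's the data set. In a real drill the examples would vary wildly in every irrelevant dimension (length, nesting, key names) so the only invariant left for the brain to latch onto is the one being trained.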
We can deliberately compress these lessons into digestible formats, allowing these new developers to waste less of their time wrestling with the mechanics of programming and spend more of their time solving meaningful problems. Thank you.