on the podium, so if I keep looking to the right, that's awesome. So usually when I give a talk, I've been one of the people that has spoken in the afternoon. I've never had an audience that hasn't been fully warmed up before. So I was like, OK, well, I can't just jump right into the talk. I have to warm people up. So I'll start with an introduction. My name is Jess Rudder. I work at GitHub. I'm Jess Rudder pretty much everywhere, including Twitter, if you want to tweet at me. I love the Twitter, so, you know. Now, I wanted to ease into the talk, and I was like, all right, well, GitHub had some big news with the Microsoft thing in the last couple weeks, so maybe there's some material there I could mine. And one of the advantages of working at GitHub that they don't make clear during the hiring process is that you get to be on work Slack with tenderlove, with Aaron. And during the course of this being announced, he was just a goldmine of puns. And I was like, all right, maybe I could take some of his rejected puns and work them into a talk. But then I remembered my very first RailsConf was the one where DHH wasn't able to make it because of a scheduling conflict with a car race he was doing. And Jeremy Daer did the opening keynote, and he got up and made a joke about DHH not being able to be there due to a race. Oh, dear Lord, I forgot, a race condition. There we go. That's the whole point of the joke, a race condition. And everyone laughed. They're like, oh, great pun, Jeremy. And then a day or so later, Aaron got up to give his talk. And the first words I ever heard the man speak were, I'm very mad at Jeremy. And he then proceeded to bring the receipts and prove that Jeremy had stolen his joke. And I was like, oh man, I don't want Aaron to be really mad at me. So I better not steal any of his puns. And I hope this story has given you Windows into our souls. 
I'm very relieved that you guys laughed, because puns are not something I usually do. And I was like, well, it's going to suck if no one laughs. But then I thought my talk's on failure, so I can just pretend that that was all part of the plan and segue there. But since you did laugh, I'll have to segue with a story instead. And this story starts out, as a lot of stories about failure do, with trying to learn something new. So when I was 26, I decided that I wanted to learn how to fly a plane. And when I let my family know about this, they all said I was crazy. Apparently, the fact that I am scared of heights, that I get motion sickness, and that I didn't know how to drive a car were all proof that I should not be starting with learning to fly a plane. But no one was going to clip my wings. I was going to do this. So we're now six months in, and I'm on final approach for runway 21 in Santa Monica. And I pull the throttle back gently, and I lift up the nose of the plane, and I glide more or less gracefully down onto the runway. And all of a sudden, the plane jerks to the right. And my instructor leaps forward, and he's like, I've got the plane. And I'm like, you've got the plane. My hands are off the controls. And he brings us to a stop, and my heart starts settling down. And we get out, and we look at what happened. And it was a flat tire. This is not a big deal. It happens occasionally. There wasn't a crash. So I was like, OK, awesome. Everything's fine. But I was a bit surprised when my instructor said, OK, well, I'm going to drop you back off at the classroom, and then I'll come back and I'll do the paperwork. And all of a sudden, I was stressed out again, because in addition to not liking paperwork, I was like, oh no, did I just get my instructor in a lot of trouble? But it turns out that the FAA likes to really collect a lot of data on events, whether those events are big or small. 
And because they know that people don't like paperwork and they don't like getting in trouble, they encourage filling out this paperwork on events by saying that as long as you didn't do anything illegal and no one got hurt and you filed the paperwork in a timely fashion, there's not going to be any consequence. Now think about how different that is with automobile accidents. So there's another story. When I was 12 years old, we were driving home from the Saturn dealership, and this was the first brand new car my parents had ever purchased, a shiny, brand new 1993 Saturn. And we're sitting at a stoplight, and all of a sudden, we lurch forward. We'd been rear-ended. So my dad checks that we're all OK, and he gets out to check on the other driver, an extremely nervous 16-year-old boy that's looking at that shiny new car thinking, I'm dead. And my dad looks, and he's fine, and both cars are fine, except for a tiny puncture wound on our bumper from the other car's license plate. And my dad kind of shrugged and said, well, you know, that's what bumpers are for. He encouraged the kid to pay a little bit closer attention, and we all went on our way. No authorities were called, no paperwork was filed. Other than our memories, there's zero proof that that accident ever happened. Now, these approaches have actually led to very different outcomes. I looked up the most recent stats that were available, and these were for 2015. For every 1 billion miles traveled by car, 3.1 people in the United States die. And for every 1 billion miles traveled by plane, 0.05 people die. And if you're like me, decimals can kind of be hard to hold in your head, especially when you're talking about 0.05 of a person. You're like, what does that even mean? So if you hold the miles traveled steady, you get 64 vehicle deaths for every one airplane death. And that kind of a discrepancy hides something incredibly interesting that's going on. 
Because what we have are two very different approaches that lead to two very different outcomes. And the key difference, I think, is how each one approaches failure. You see, it actually turns out that failure is an incredibly important part of learning. Now, I think before we get too much deeper into this talk, it's a good idea if we're all on the same page. Because I may be saying failure and thinking one thing, and you may be thinking something else entirely. So what is failure? I think for some of us, it's probably that feeling you get when there's some splotchy red-faced person in your face yelling at you about something, and you feel like nothing's going right, and you shouldn't have even bothered getting out of bed in the morning. That feeling of, I failed. And I can absolutely relate to that. But it's kind of hard to measure that. When I was preparing for the talk, I talked to a lot of people, and I'm like, hey, what do you think failure is? And they're like, oh, that one's easy. Failure, you see, is the absence of success. I was like, sweet. I like the sound of that. So what's success? They're like, oh, that's easy. Success is the absence of failure. And I was like, oh, well, this isn't really going to work for my purposes. But the researchers, people who actually do studies on this, have a very specific definition of failure. For them, failure is a deviation from expected and desired results. And that's something that you can actually quantify and measure. So although I think there's truth in every one of these definitions, the one that we're going to go with is deviation from expected and desired results. Now, I couldn't find definitive data on this, but having been a programmer for some years now, I think if there's one field where you constantly get a deviation from the results that you expected, it's programming. So I actually thought there would be a wealth of examples and evidence in programming of people learning from failure. 
But what I found is that we don't really have a lot of that except in video game development. So one of my favorite examples of this is with the game Space Invaders. Are most of you familiar with the game? It's that old arcade game where you're controlling a small cannon that's firing up at a descending row of aliens. And as you defeat more aliens, they speed up, making it harder to shoot them, right? Yeah, no, that's not actually the game. What it was supposed to be was a game where the aliens moved at a consistent pace for the entire level. And they would only speed up on the next level. So there wasn't going to be any kind of difficulty curve. But the problem was that the developer, Tomohiro Nishikado, hadn't designed for the real world. You see, it was 1978, and processors weren't that great. And he'd put too many alien sprites on the screen. And it slowed everything down. And so it wasn't until you had gotten rid of some of those sprites that the processor was able to catch up and move at the speed the level was intended to go at. Now, there were a couple of things he could have done. He could have just said, well, you know what? I can't make my game with the technology that exists today. So I'm going to put it on a shelf and wait a few years. And that seems strange, but maybe he was an artist with a vision, and the world was just going to have to catch up. Or he had the option of changing the game. He could have taken some sprites off the screen, planned for it to move slower, made adjustments there. But instead of doing any of those things, he decided to have people test the game out and see what they thought. And the thing was, they absolutely loved it. As people were playing the game, they started creating stories about the aliens. They were like, oh, man, they're getting so scared because I'm shooting them. And that blue guy, he knows he's next. And that's why he's running across the screen. 
It was so much fun that it actually ended up being left in the game. And this particular failure created an entirely new game mechanic, the difficulty curve. You see, before this, games were always consistent for the entire screen. And they would only get harder on the next screen. And this created the concept of, all bets are off. I can make the game more difficult at any point I choose as a developer. Now, I don't know if the developer had read any of the research on failure, but he was capitalizing on a lesson that I see again and again and again in failure research. Failure is not something to run away from. It's something that you learn from. In fact, it turns out that failure actually presents a greater learning opportunity than success, because there's more information encoded in failure than there is in success. Think about it. What does success look like? You get a check mark. You get a thumbs up, good job. Your manager smiles at you, and you don't go home crying that day. I mean, that's great, but what have you actually learned? The message that you're getting is, just keep doing exactly what you're doing right now. Don't change anything. And that's fine, but that doesn't really tell you what is working. It just tells you, don't change anything. Failure, on the other hand, gives you all sorts of information. I mean, just look at how much information is shoved into this failure. If you read it closely, you know exactly what went wrong. And you know where it went wrong. And if you have some experience with this kind of failure, you're probably going to know how to fix it. But even if you don't and you've never seen it before, you've got enough information to search for the problem and figure out how to fix it. And now that you've got experience with an approach that didn't work, with a little bit of effort you're going to be able to keep refining your approach until it works. 
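To go back to Space Invaders for a second, that accidental difficulty curve is easy to model. Here's a minimal Ruby sketch of the effect; the frame-time floor, the per-sprite drawing cost, and the step size are numbers I've made up for illustration, not Nishikado's actual timings:

```ruby
# Each frame the processor redraws every remaining alien, so frames
# take longer when more aliens are alive (assumed costs, for illustration).
FRAME_FLOOR_MS = 16.0       # best-case frame time once the CPU catches up
COST_PER_SPRITE_MS = 1.0    # assumed per-sprite drawing cost

def frame_time_ms(aliens_left)
  [aliens_left * COST_PER_SPRITE_MS, FRAME_FLOOR_MS].max
end

# The aliens advance a fixed step each frame, so their on-screen speed
# depends entirely on how fast frames can be produced.
def speed_px_per_sec(aliens_left)
  step_px = 2.0
  step_px * (1000.0 / frame_time_ms(aliens_left))
end

puts speed_px_per_sec(55)  # full wave: frames are slow, aliens crawl
puts speed_px_per_sec(5)   # nearly cleared: frames hit the floor, aliens rush
```

Because the step per frame is fixed, anything that makes frames finish faster, like having fewer sprites left to draw, makes the wave visibly faster. That's the entire bug, and the entire mechanic.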
Now, video game development actually has a very long and honored tradition of grabbing hold of mistakes and wrestling them into successes. And it's such an important concept for them that they have a term for it. They call it the good-bad bug. And having space to learn from failure was really important to a group of developers that were working on a street racing game in the 90s. So the concept for the game had players racing around city streets on a marked course, trying to get to the finish line. But because you were street racers, you were being chased by cops the whole time. And the cops would try to catch up with you and pull you over. And if you got pulled over, you couldn't finish the race, and you lost. But there was a problem. You see, they got the algorithm for the cops wrong. And they ended up with incredibly aggressive cop cars that would try to slam into the racers rather than just pulling them over. And the beta testers actually ended up having a lot more fun evading the police, ignoring the marked course altogether. And it was so much fun that they ended up redoing the entire game around that concept. And that became Grand Theft Auto. So imagine that. The core concept of the best-selling video game series of all time was a programming error. And if the developers had realized, oh no, we got the algorithm wrong, and had panicked and shut the game down and fixed everything, that would have been lost. Billions of dollars would have been lost. Now, for other games, there's hundreds, if not thousands, of hours of work that go into a game before the programmers ever even get their hands on it. You've got product leads and designers and business people all putting in input, and all of that gets codified in a document called the game design document. 
Now, this is considered a living document, but it's actually a big deal to make changes late in the programming cycle, because it could cost millions of dollars to make those changes: requirements have to change, art needs to be redone, release dates might have to be shifted. So no one wants to do that. They wanna make sure that once this document is set, you make as few changes as possible. It's a big deal. But the Silent Hill developers, late in development, were facing the possibility of having to make major changes to their game. They'd started out building the game to the game specs, and everything was going fine, except they discovered one problem, and that problem was pop-in. You see, the PlayStation's graphics card wasn't powerful enough to render all of the buildings and the textures in the scene that had been designed. So as you walked forward, buildings would suddenly pop in and out of existence. There would suddenly be patterns on walls that hadn't been there before, and it was really distracting, because it took you out of the immersion in the world. And especially for a survival horror game like Silent Hill, it's really important that you stay immersed. So they were like, oh man, we're gonna have to change things. And it would have been really easy for people to start pointing fingers at each other, because no one person was responsible. You had designers that put one or two more buildings in the background to make it interesting, tech teams that had said let's design this for the PlayStation instead of the more powerful Jaguar, the business team that determined the release date, which meant it would be on the PlayStation instead of a future system. All of those things were the right decision for that team, but put together, they snowballed into a bigger problem. The whole system, in a way, had failed. But instead of running from that failure, the Konami team got creative, and they sidestepped it. 
They added fog. They filled the world with this very dense, eerie fog. And as it turned out, fog is really lightweight for a graphics card to render, because there's less light to worry about. And as an added bonus, fog is really creepy, which is great for a survival horror game. Oh, and it made it so you couldn't see stuff in the distance, so there was no pop-in anymore. When things suddenly appeared, you're like, oh, it's coming out of the fog. It wasn't just popping into existence. And this fog became such a staple that even when they were on much more powerful systems later, they still put fog everywhere, because people were like, if there's no fog, it's not Silent Hill. And so that's another example of programmers ripping success from the jaws of failure just by embracing it. Now, these examples help to illustrate what's happening in the higher-stakes situations that we talked about earlier with airplane and automobile accidents. The aviation system is able to save so many lives because accidents are treated like lessons that can be learned from. Data is gathered and aggregated, and they identify patterns. If an accident is caused by a plane or by a pilot being tired, they never stop there in the investigation. They're like, okay, the pilot was tired and made a mistake. Why was the pilot tired? Did he have enough time to sleep? Are we putting enough time between flights? They're always looking for the systemic cause. And in contrast, who do we usually blame for an automobile accident? We just blame the driver. We're like, oh, they must have been distracted. They must not be a good driver. And as a result, when people are in automobile accidents, they often try to hide what happened. They're like, well, I didn't see the stop sign, and even if it was there, it was covered up, but it's definitely not my fault. Now, very few of us, I'm assuming, I could be wrong. 
Maybe you're all pilots. I'm guessing that we're not a room full of pilots, but I think that there's a lot we can learn from how they handle failure. And if we're willing to create a system to track and learn from our failures as we write code, I think that we're all gonna be much better developers as a result. But that kind of begs the question, what would that system look like? In broad strokes, I think it would involve three important steps: avoid placing blame, document everything, and then abstract patterns from that data. Now, this first one, it's easy to say avoid placing blame. In reality, you could give entire talks about how to calm the inner demon in your head that's like, you suck and it's all your fault. But one thing that I like to remember is that with aviation failures, like I mentioned, they never stop at the top level of blame. There was this one case where a pilot had made a critical error by dialing the wrong code into the flight computer and had ended up crashing the plane into a mountain. And on the cockpit recording, they could hear him yawning and talking about how he was really excited that he would finally get a good night's rest. And they could easily have stopped there and said, well, the pilot was tired, he made a stupid mistake, and that's what caused the crash. But they wanted to know why he was tired. So they checked that he had had a hotel available to him during his layover. But that wasn't enough. So they made sure that he had actually checked into that hotel. And even that wasn't enough. They looked at the data from his hotel key card to find out when the doors had opened and closed, to determine that the maximum amount of time he could possibly have had to sleep was about four hours total. And even then, they still looked at the flight computer and were like, oh, if you're at all tired or distracted, this tiny three-letter readout is really easy to get confused by. 
Now, they're, of course, willing to point out where individuals have made mistakes, but they want to focus on what went wrong in the whole system. They don't like blaming individuals. They like finding something systemic that they can fix. So the point here is that if your takeaway from a failure is, I'm stupid, coding just isn't for me, I don't know how to do this, you're learning all the wrong lessons. And I understand if you're not at the point where you can quiet that inner voice in your head, but I think if you can just ignore it for a while and work the rest of the system, you might get to a point where you're like, oh, failure's fantastic. Or at least, failure's bearable, I don't know. I'm not at the fantastic part yet myself. I like winning, whatever, but I'm working on it. So step two, as I mentioned, was documenting everything. Even the things that seem small. In a way, I think the small things are the most important things. If you think about that flat tire on the runway in Santa Monica, that's not a big deal at all. But the FAA knows that little things can snowball into a bigger problem, just like we saw with the Silent Hill example. So what they wanna do is catch the small errors and fix those. So they take the information about that flat tire, look at how recently the plane had been inspected, see if there's a pattern, see if they should adjust the rules. And that's what you wanna do with your programming. You wanna actually start tracking all those problems and catching them early on. So how do we document things? I'm a big fan of paper documentation. I like writing things down. But I don't think there's something magical about that. So if you like documenting in, like, Notepad or something else, do people use Notepad anymore? I don't know. We'll pretend that's still a thing. The form of documentation's really up to you. But what you should include in there is what you were trying to do, what went right, what went wrong. 
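As a sketch of what one of these entries might capture, here's a tiny Ruby version. The FailureLog name and its fields are my invention, not a prescribed format; the scenario is the form example that comes up later in this talk:

```ruby
# Hypothetical structure for one failure-log entry. The fields mirror
# the talk's advice: what you tried, what went right, what went wrong,
# plus any surrounding context.
FailureLog = Struct.new(:goal, :went_right, :went_wrong, :context)

entry = FailureLog.new(
  "Wire up a form for the Rate This Raccoon app",
  "The data got to where I was sending it",
  "It all ended up in the URL instead of the request body",
  "Late night, tired, no puppy cuddles beforehand"
)

puts entry.went_wrong  # specific enough to search on later
```

The medium doesn't matter, paper works just as well; what matters is that each entry is specific enough that a pattern can fall out of it later.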
You should also note anything that was surrounding it. The more specific you can get when you're recording the outcome, the better. So if you were trying to get data from a Rails backend out of an Alt store into your React component and it keeps telling you that you can't dispatch in the middle of a dispatch, it's really not helpful to write: React is stupid, I could do all of this with jQuery, why is my manager torturing me? Trust me, I've tried it. That's not how you learn. Now, the final step is that you wanna take all this data that you've been gathering and actually start finding patterns in it. Just imagine how powerful that data can be as you go through it and start looking for patterns of when you do your best work and when you do your worst work. So instead of kind of vaguely remembering that you struggled the last few times you tried to learn how to manipulate hashes in Ruby, you could actually see that you were really only frustrated in two out of those three times. And when you dig into that information, you can see that the difference between the one where you felt good and the other two is that you were rested for one and tired for the others. Or maybe you notice that you learn more when you pair, or when you have music playing, or when you've just eaten a delicious pineapple from Kona, Hawaii, which I highly recommend. On the flip side, you might find that you do your worst work after 8 p.m., or when you're tired, or when you haven't snuggled a puppy for at least 20 minutes before you start coding. Those are all really good things to know, and good excuses for getting an office dog. The thing is that it's gonna be a lot easier to identify the parts of the system that do and don't work for you when you actually have a paper trail. And you're also gonna have a really nice log of all the concepts that you've struggled with. So let's say you read your documentation from your last epic coding session, and you see: I was trying to wire up a form for my Rate This Raccoon app. 
And it worked, sort of. The data got to where I was sending it, but it put it all in the URL. And that's kind of weird, because I don't usually see that. And that's awesome. You have a problem, but it's a very well-defined problem. And with a little bit of research, you can find out that you were using a GET request, and that that puts the data in the URL, and that what you really want is a POST request, because that puts it in the body. And now you just need 20 minutes of puppy cuddle time and you're ready to go fix that Raccoon app. Now, so far through all of this, I've been focusing on individuals. But teams can also learn from failure, and it's actually really important for teams. There was a famous study that looked at hospitals. They looked at patient outcomes at demographically similar hospitals. And strangely, contrary to what they initially thought, they found that hospitals that had nurse managers who focused on creating a learning-oriented culture instead of a blame-oriented culture actually had higher error rates. So when people were focused on learning instead of punishing people for their mistakes, they made more mistakes. But they actually had better patient outcomes. You have hospitals where more mistakes are made, but the patients are doing better. And they were like, this is so weird, we would not expect this. And they dug in, and what they found is that at the hospitals that had the blame-oriented culture, people were so terrified of losing their jobs that they would cover up mistakes. And when they covered up mistakes, it had a kind of double impact: A, it made things more dangerous for patients, because instead of fixing the mistake, they tried to cover it up, and then people weren't aware of what had happened. But B, things never got fixed. There were systemic problems that led to those mistakes, but because they were covered up, people kept making the same mistakes over and over and over again. 
And it's the exact same with our teams. If you show me a team that punishes people for making mistakes, I'll show you a team that makes a whole lot of mistakes where people just hide it and, like, overwrite their git history. If you focus instead on things like blameless postmortems and rewarding people for experimenting, even if those experiments don't work out, you're gonna have a much stronger team overall. Now, like anything else you try, the process I'm proposing might not work for you the first time around. And at the risk of being a little too meta, that's also something that you can learn from. You can learn from how you learn from failures while you're learning about failures, or something like that. The most important part is just to get more comfortable failing, and being honest about it, and gleaning info from it. And eventually you're gonna find that every bug is actually a feature, as long as you can learn from it. Even if you end up having to rip out every single line of code and start over. Now, traditionally that's where I would end this talk, but I found that people would come up to me afterwards, and it was almost always the same kinds of people, often women of color. And they were like, hey, I loved your talk. I really, I was gonna say I'm inspired by it. That sounds a little egotistical. No one ever said they were inspired by my talk. They did say they loved my talk. But the thing is, they'd say, I've had to work really, really hard to be respected in my field and be seen as a competent engineer. Do you think it's okay for me to fail openly? And I wish I could tell them absolutely. But the truth is that the research shows that the further you are from some idealized version of an engineer, the more dangerous it is to be seen failing. And that really sucks, because that means there's a whole group of people out there that don't get to benefit from failure in the same way that the rest of us do. 
So I learned to read really early on, and in the US that's enough to be labeled gifted. And so I was never scared of asking questions. When I went through my boot camp, I was always in the front row, hand in the air. And then when I started being an engineer, I would always ask questions. And if I thought other people had questions they were embarrassed to ask, I'd ask on their behalf. And so I learned a whole lot really, really quickly. But it also had this side effect: one day my manager was talking to me and kind of casually said, oh, you're like me, I think you struggled to learn how to code. And that was strange, because I was like, oh, coding was one of the first things that had ever just made sense to me. But things were good, and I was getting all the assignments I wanted. And so I didn't feel the need to correct him. And then the company struggled with money, and the team had to be downsized. And I was one of the people that got cut. Now, we had a fantastic team. I don't know if I had been seen as more competent whether I still would have been kept. But I do know that when you are shrinking your team down to a skeleton crew, you don't keep the person that you think struggles to learn how to code. Now, on the flip side of this, five years earlier I had applied at GitHub as a Supportocat, one of the help support people. I went through five rounds of interviewing, and I thought, I've got this job. And then I got an email that was like, we think you're fantastic, but we've decided to go with someone who has more experience. And that was kind of a bummer. But about a month later I saw this thing about learn to code in 12 weeks, and I was like, that's not a thing. And it's not, but you can start learning to code in 12 weeks. 
And I went down that road, and now, fast forward five years, I'm one year into being an engineer on the GitHub team. And I don't think that path from not being experienced enough to be a Supportocat to being one of the engineers happens without me being willing to fail, and fail a lot, and ask questions. And that works for me, because I have no student debt and I'm not supporting a family. I have another income in my household. So when I lost my job, it wasn't, I mean, it was very ego-bruising. I spent some time curled up on the kitchen floor crying that no one would ever hire me again. But it wasn't dangerous for me. But there are people that it actually is dangerous for, and you can get an idea of who those people are by looking around at conferences and seeing who's not here. I've spoken at conferences in Canada, Australia, and the United States, and have never once seen any significant presence of native peoples at any of those conferences. There are very few African Americans and Hispanics. These are the people for whom we make it dangerous to be seen failing. So for those people, I would say: present competence and confidence externally until you find that it's safe at your company. For the rest of us, I would say: make it safer for people to fail. If you're a manager, make sure your team knows they can fail. Make sure your teammates know that you're not gonna blame them for something and then talk about how they don't know what they're doing. Do that by being kinder to yourself when you fail as well. And then hopefully, through doing this, we can find a way to make the industry as a whole safer for people to fail, because that honestly is how we're going to learn and become much, much better at our jobs. So I will get off my soapbox now. As I mentioned, I'm Jess Rudder. 
I love talking code, so I have a YouTube channel, youtube.com/compchomp, where I tell code-related stories. So it's not tutorials, it's code-related stories. And I'm on Twitter, and obviously I love to talk, so if you have any questions, hit me up later. I have tons of GitHub stickers too. Okay.