 So I was actually having a conversation with someone right before this, and how many people have been to a lot of Ruby conferences, like let's say more than five. Okay, cool. Out of curiosity, has anyone ever thrown up on one of the live streams? I am asking for a friend. Just kidding. So if anyone in here is cold, it's because you're all sitting so far apart and you just need to like make a friend. I'm kidding for those of you watching from home. This room is packed, standing room only. Anyhow, thank you so much for coming to my talk, The Good Bad Bug, Learning from Failure. I'm Jess Rudder, that's my Twitter handle. I'm also compchomp on YouTube if you're into YouTube. This is probably the easiest way to find me, and I love tweeting and being tweeted at, so feel free to do that. All right. When I was 26, I decided that I wanted to learn how to pilot a plane. Now my friends and family were pretty skeptical. You are afraid of heights. You get motion sick. You don't even know how to drive a car. And every single one of these things is actually very, very true. But I didn't see how it was relevant. No one was going to clip my wings. So now we're six months in and I'm coming in to final approach to runway 21 in Santa Monica for just a routine training flight. I pull back on the throttle. I pull the yoke up so that I'm just gliding in and I have a beautiful, more or less, landing. And suddenly there was a sharp pull on the yoke and the plane jerks to the side. And my instructor Jeff was like, I've got the plane and my hands were up in the air and I was like, you have the plane because I didn't want to be responsible for it at this point. And he safely brought the plane to a stop. And he notified the tower to let them know what had happened. And we got out to look at the damage. A flat tire. My heart rate had finally started to return to normal. The plane was safely stopped and a flat tire wasn't a big deal. 
We were just going to have to go back to maintenance, change out the tire. The runway was going to be blocked for less than five minutes and everything was fine. So I was actually pretty surprised when Jeff said, hey, I'm going to drop you back off at the classroom and then I'll come back and fill out the paperwork. My heart rate jumped straight back up. Whoa, whoa, whoa, whoa, no one got hurt. Why do we have to fill out any paperwork here? I don't understand. I mean, I was never the biggest fan of paperwork, and I was really not a big fan of being in trouble, either. So I was really hoping that I hadn't gotten him in trouble. And Jeff was like, hey, it's no big deal. You see, it turns out that the FAA collects data on pretty much every event, big or small. Even just a tiny tire blowout on a runway in a four-seat airplane. They want to get as much data as possible so that they can start working out patterns that can help them implement safer systems. They know that having more data means that they're going to be able to draw better conclusions. But they also know that people don't like paperwork or getting in trouble. So as long as no laws were broken and nobody got hurt and you filed a report in a timely fashion, you're not going to be in trouble. Now think about the wildly different approach that we have for road accidents. When I was 12 years old, I was riding in the back of my parents' brand new shiny Saturn. We were literally coming home from the dealership in the first new car that they had ever bought. And we're sitting at a stoplight and suddenly we lurch forward. We had been rear-ended. We were all okay, so my dad got out to check on the driver, who was an extremely nervous 16-year-old boy looking at this brand new car that he had just hit. And my dad looks it over, sees there's just a tiny hole in the bumper from the guy's license plate, and goes, well, that's what bumpers are for.
He reminded the kid to try to drive a little bit safer, then we all piled back into our not-quite-as-shiny car and we drove home. Not a single bit of paperwork was filed. No data was gathered. There actually isn't a single group in the United States that's responsible for collecting data about road accidents. It's usually handled by local agencies. And I can tell you, if you call the police when you have a flat tire, they're not actually really happy to hear from you. They're probably going to hang up. There's just no one that wants to do it unless someone was injured or you're filing an accident claim, sorry, an insurance claim. So these two different approaches have actually led to very different outcomes. I looked up the most recent stats that were available, which were for 2015. For every 1 billion miles that people in the U.S. travel by car, 3.1 people die. And for every 1 billion miles people in the U.S. travel by plane, there are only 0.05 deaths. Now, if you're like me, decimals, especially when you're talking about tiny fractions of a person, can be difficult to comprehend. So let's make that a little bit easier. If you hold the miles traveled steady, 64 people die traveling in cars for every one person that dies traveling in a plane. There's something really super interesting hiding in that data. We have two different approaches that lead to two very different outcomes. And when I started to dig in, the key difference really was how each one approached dealing with failure. You see, it actually turns out that failure is an incredibly important part of learning. Now, before we go much further, it's probably a good time to make sure that we're all on the same page when we talk about failure. So what is failure? I think for some of us, it's probably that sinking feeling you have in the pit of your stomach when everything's going wrong and there's a person with an angry bright red face screaming at you and you're like, why did I even bother getting out of bed today?
And I can definitely, definitely relate to that. When I was doing prep work for this talk, I started looking online, and a lot of laypeople were like, this one's easy, failure is the absence of success. I was like, that sounds really right. But what's success? Oh, no worries, they said. It's the absence of failure. Okay, and that's the absence of success. And now we're in an unending loop, and we know from programming that that doesn't work. So I went to the experts, and the experts have a very specific definition of failure. Failure to them is deviation from an expected and desired result. Now that actually, that's pretty good. I mean, honestly, there's some truth in every single one of these definitions. But this last one, this one is actually measurable and testable. So we're going to stick with this one for the rest of the talk. Now, I couldn't find any definitive data, but I think that programmers get more results that deviate from our expectations than just about anyone else. So I was thinking that programming would actually be an area that would have a lot of people that were learning from failure. But one of the few places in the programming industry where I could find people capitalizing on failure was actually video game development. So one of my favorite examples of this is the game Space Invaders. You guys know the game, right? It's that old arcade game where you control a small cannon and you're firing at a descending row of aliens. And as you defeat aliens, they speed up, making it harder and harder to shoot them, right? Nope, that is not what the game was supposed to be. Space Invaders was developed by a guy named Tomohiro Nishikado. This was back when a game could be developed by one person in six weeks instead of 6,000 people in 27 years. So his plan was actually to have the aliens remain at a constant steady pace, no matter how many you killed, until the end of the level.
And it was only when you switched to a new level that the aliens would speed up. There was just one problem. He had designed his game for an ideal world. Now, I don't know how good you are at modern history, but 1978 was far from ideal. He'd actually placed more characters on the screen than the processor could handle. And as a result, the aliens actually chugged along at a slower pace and only reached the intended speed once enough of the sprites had been killed off that the processor could catch up. Now, he had a few ways that he could have dealt with this. He could have decided to shelve the project and wait until the hardware caught up. And that might seem silly, but he's an artist, and maybe he has a vision and he won't change a thing. Or he could have decided, well, the processor can't handle that many aliens, so I'll change the spec. Instead of having 60 on-screen at once, I'll have 20 on-screen at once. And I mean, that's reasonable as well. But instead of being rigid and just making a decision right away, he decided to test it out. So he gave it to game testers, and they loved it. They loved it with the broken algorithm. They got so excited as things sped up. They actually started making up stories in their heads when they would describe the game: the aliens are starting to get nervous because I'm killing so many of them, and that's why they're starting to move faster. But they can't get away from me, because I'm even faster than they are. And it ended up staying in the game, because that was people's favorite thing. Beyond that, it became an entirely new game mechanic called the difficulty curve. Prior to this, games would remain at sort of a constant difficulty until the end of the level, and then the next level it would get harder. And with this, people realized all bets are off. You can make a game as difficult as you want and switch it up anywhere in the middle of the game. So I mean, that's pretty amazing.
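If it helps to picture the mechanic, here is a tiny Ruby sketch of that accidental difficulty curve. The class, the numbers, and the timing model are mine for illustration, not from the original game; the point is just that fewer sprites to draw means less work per frame, so the survivors move faster.

```ruby
# Sketch of the accidental Space Invaders "difficulty curve": the delay
# between movement frames is proportional to how many aliens are still
# on screen, so the wave speeds up as you destroy them.
class AlienWave
  def initialize(count)
    @remaining = count
  end

  def destroy_one
    @remaining -= 1 if @remaining > 0
  end

  # Model the overloaded 1978 processor: each surviving sprite adds
  # rendering work, which stretches the time between moves.
  def frame_delay_ms
    10 + @remaining * 5
  end
end

wave = AlienWave.new(60)
full = wave.frame_delay_ms     # 310 ms between moves with all 60 alive
55.times { wave.destroy_one }
near_end = wave.frame_delay_ms # 35 ms with only 5 left — much faster
```

Nishikado never wrote a speed-up rule at all; the hardware wrote it for him, which is exactly why the testers' "the aliens are getting nervous" story fit so well.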
He failed, but by testing it out and seeing how it worked, he actually succeeded more than he could ever have imagined on his own. And I don't know if he did this because he had read the studies on failure and was capitalizing on them. But the thing is, he was doing exactly what the research says you should do. You shouldn't hide from your failures. You should learn from them. It turns out that failure presents a great learning opportunity, and the reason is that there is more information encoded in failure than there is in success. Let's think about it. What does success look like? A check mark? A thumbs up, perhaps? Maybe a smile from your manager and a job well done? And when you get these, what have you actually learned? Well, they've done research on this. And research shows that people and organizations that don't experience failure become rigid. The reason is that every bit of feedback that they're getting tells them, don't change a thing. Just keep doing exactly what you're doing and everything will be okay. Failure, on the other hand, looks a lot like this. I mean, just look at how much information is available in this failure. Right away, we know exactly what went wrong. We know which line in the code has an issue. And if you have experience with this particular issue, you're probably going to know exactly what you need to do to fix it. And even if you've never seen this before, you're just a quick Google search, or Bing search, or any other kind of search you prefer, away from pages' worth of information about this particular failure. Now that you've got experience with an approach that didn't work and you've done some research about what else you could do, it's probably going to be pretty simple and straightforward to write something that does work. Video game development actually has a long and honored history of grabbing hold of mistakes and wrestling them into successes.
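To make that information-density point concrete, here is a small, made-up Ruby failure (the `average` method is my own example, not from the talk). Notice how much the exception hands you compared to a silent passing run: the error class, a message, and the exact file and line.

```ruby
# A deliberately broken method: calling it with an empty array divides
# zero by zero, and Ruby's integer division raises ZeroDivisionError.
def average(numbers)
  numbers.sum / numbers.length
end

begin
  average([])
rescue ZeroDivisionError => e
  puts e.class           # ZeroDivisionError
  puts e.message         # divided by 0
  puts e.backtrace.first # the file name and line number of the failure
end
```

A green check mark would have told you nothing; this failure tells you which method, which line, and which assumption (non-empty input) to go fix.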
In fact, this concept of exploiting failures to make your programs better is so important that it actually has a name: the good bad bug. And having that space to learn from failure came in really handy for a game developer group in the 90s that was working on a street racing game. The concept for the game was that players would race through city streets while being chased by cops. And if the cops caught up to the drag racers and pulled them over, you lost. You couldn't complete the race. But there was just one problem. They kind of got the algorithm wrong, and the cops were way more aggressive than they had intended. Instead of pulling the drag racers over, the cops would slam into them. Now, the beta testers actually had way more fun trying to get away from the police than they had just racing through the city trying not to get pulled over. And as a result, the entire direction of the game was switched up, and the Grand Theft Auto series was born. Now, I want you to think about that for a minute. The core concept of the best-selling video game franchise of all time would have been lost if the developers had panicked when they realized that they got their algorithm wrong and had tried to cover it up and just pretend it never happened. But instead, they actually let people test it out. And they realized, hey, we're on to something here. And that something was billions of dollars. Now, any game, well, any program really, that gets large enough might have a whole lot of work that goes into it before you ever get to write a single line of code. There may be hundreds if not thousands of hours put in by product leads and designers and business folks before a developer ever gets to it. In game development, this is encapsulated in a document called the game design document, or the GDD. And it's supposed to be a living document. You are allowed to make changes to this document.
But when you're really late in the game, literally and metaphorically, making any kind of change is a big deal. It means that you're going to have to change tech requirement pages, art pages are going to need to be redone, release dates might have to be pushed back, budgets might be off. I mean, you guys get the picture. You can change it, but it's a big deal. Now that's the unhappy reality that the Silent Hill development team was facing. They'd started building out the game to the GDD specs. But they had one problem: pop-in. You see, the PlayStation's graphics card could not render all of the buildings and textures in the scene. And as a result, when you stepped forward, buildings would suddenly pop into existence, and blank walls would magically have textures that weren't there before. It really distracted people from the game. And that's not good in any game, but if you're building a survival horror game, that's really bad, because the atmosphere needs to draw you in. You can't have things popping in and out; it just doesn't work. Now it would have been easy for everyone to start pointing fingers at everyone else, because they'd all kind of played a part in this failure. Designers put one or two more buildings in just to make it look even better. The tech team decided to make it for the PlayStation instead of the more powerful 3DO or Jaguar that were available at the time. The business team had determined the release date. There wasn't one single individual that had obviously made a bad call. There were just a bunch of tiny, tiny issues that started to snowball until an entire system had failed. But instead of running from the failure, the team at Konami decided to sidestep it. They found a way to work around their failure.
They filled the world with this really dense, eerie fog, because it turns out that fog is actually a very lightweight thing for a graphics card to render. The heaviest cost on a graphics card is usually light, and fog is all about, hey, guess what, there's no light. So now it obscures the objects in the distance, which means that as you were walking back and forth, it didn't matter that a building or a texture in the background popped in, because you couldn't see it anyway. And as an amazing added bonus, this fog was super creepy. So creepy, in fact, that once the technology had caught up and they didn't need the fog to hide things anymore, they kept the fog in there, because people were like, you can't have a creepy Silent Hill game with sunny skies where you can see 20 miles into the distance. It became part of the game. That was another success ripped from the jaws of failure. Now, I've given you three examples from the programming world, and what I wanted to do was illustrate what's happening in our higher-stakes examples of aviation and automobile accidents. The aviation system saves so many lives because accidents are treated like lessons that we can learn from. They gather data and aggregate it and find patterns. If an accident was caused by a pilot being tired, they never just stop there. They look at pilot schedules and staff levels and flight readiness checklists to determine what contributed to the pilot being tired. In contrast, who do we blame for road accidents? Yeah, the driver, right? She doesn't know what she's doing. He is an idiot. Why does he have a license? It's always the driver's fault. What I'm trying to illustrate here is that airplane accidents are treated as failures of entire systems, where road accidents are treated like failures of an individual.
And with all the judgment that comes from being found to be a bad individual, it's no wonder that people try to cover up those failures instead of acknowledging them and learning from them. I mean, how many times have we been like, I definitely stopped at that stop sign. And even if I didn't stop at it, it's because it was hidden behind a tree or a shadow or something. Now, we're not all pilots here, but I think that we all have a lot to learn from how they handle failure. If you're willing to use a system to track and learn from those failures as you write code, you are going to have so much better results. But what should that system look like? In broad strokes, I think that there are three important parts. First, and this one is really important, you have to avoid placing blame. Second, you have to collect data. And third, you have to abstract patterns. So, all right, step one. You need to make sure that you understand that you are not the problem. And I'm sure this is easier said than done. You could probably have an entire talk about learning not to beat yourself up. With aviation failures, they never stop at that top level of blame. There was actually this case where a pilot made a critical error by dialing the wrong three-letter city code into his flight computer. And on the cockpit recording, the investigators could clearly hear the pilot yawn and talk about being excited to finally get a good night's sleep. They could easily have stopped there and been like, well, we figured it out. The pilot was tired and he made a dumb mistake. But it was not enough for them to know that he was tired. They wanted to know why. So they verified that he had had a hotel available to him during his layover. And that wasn't enough. They made sure that he had actually checked into that hotel. And that wasn't enough.
They looked at every time his card had been used to open the door so that they could get an accurate picture of how many hours of sleep he actually could have gotten. And even then, they didn't just say, oh, we've shown that there was no way he could have gotten more than four total hours of sleep, that was the problem. They looked at the three-letter readout on the flight computer and thought, man, if you're at all tired or distracted, that's going to be really confusing. And when they had that full picture, that's when they said, now we know what caused this accident. Now, as you can see from that example, they're not avoiding placing blame on the pilot. They did acknowledge that he was tired. But they always looked at it as a systemic failure. They want to know what happened in the entire system. So if the only thing you take away from a failure, in code or anywhere else, is, I'm dumb, I just don't know what I'm doing, this probably just isn't for me, then you're actually missing out on the best parts of failure, the parts that reveal to you a better way of doing things. And I know that right now it might be a little bit hard for some of us to quiet those inner critics. And that's OK. It's not something that you have to fix overnight. But if you can at least do a good job of ignoring them for a while and working the rest of the system, eventually you're going to find that they start contributing helpful insights instead of just telling you how bad you are at coding. Now, step two is documenting everything. Even the things that might seem small. Heck, especially the things that might seem small. I mean, remember the story about my flat tire? That flat tire on the runway in Santa Monica was not a big deal. But the FAA wanted to know about it, because figuring out how many times that happens could start to reveal bigger patterns in places where things snowball into major problems.
And catching problems early on and course correcting is going to help you avoid those major meltdowns. But how should we document things? I'm actually a big fan of paper documentation. But as long as you have some sort of record, any kind of documentation that you can go back to, whether it's notes on your phone or live tweeting events so that you can see them later or anything else, you're going to be all right. Now, the things that you should include are details about what you were trying to do, what resources you were using, whether or not you were working with other people, how tired or hungry you were, and what the outcome was. You should really be specific, especially when you're recording the outcome. If you're trying to get data from your Rails backend out of your Alt store into your React components and it keeps telling you that you can't dispatch in the middle of a dispatch, don't just write down, React is so dumb and I could do all of this with a couple lines of jQuery and I don't know why my boss is torturing me. Because that's actually not going to help. I know, because I tried. Now, the final step once you have all of this data is to start abstracting patterns from it. Imagine how powerful that data is as you go through and you start looking for those patterns. When do you do your best work? When do you do your worst work? Instead of vaguely remembering that you struggled the last few times you tried to learn how to manipulate hashes in Ruby, you'll see that you really only had issues two of the last three times. And the difference between the one where you felt good and the other two was that you were well rested for that one. Maybe you notice that you learn more when you pair, or when you have music playing, or when you've just eaten some amazing pineapple straight from Kona, Hawaii. On the flip side, you might discover that you don't learn well past 9 p.m.
Or that you're more likely to be frustrated with something new if you haven't snuggled with a puppy for at least 20 minutes prior to opening your computer. That's a really good thing to know. It's a lot easier to identify the parts of the system that do and don't work for you when you have a paper trail. And you're also gonna have a really nice log of the concepts that you're struggling with. Let's say you read your documentation from your last epic coding session and you see: I was trying to wire up the form for my Rate This Raccoon app and it worked, sort of. It was weird, because the data got where I was sending it, but the form data ended up in the URL, and that was strange. Cool, you now have a problem to research. And it's not gonna be too long at all, once you dig into some form documentation, before you realize that you were using the GET action on that form, and GET requests put data in the URL. POST requests are the ones that keep it hidden in the request body. Now you just need 20 minutes of puppy cuddle time and you are ready to go fix that form. Now, throughout this talk I've mostly been focusing on how individuals can learn from failure, but it's also incredibly important for teams. There's a famous study that looked at patient outcomes at demographically similar hospitals. And the researchers were kind of confused, because they found that hospitals with nurse managers that focused on creating a learning culture instead of a blame culture actually had higher error rates. But what made it even stranger was that patient outcomes at those hospitals with higher error rates were actually better. So they started digging deeper, and what they found is that the nurses in the blame-oriented hospitals were terrified of getting punished and would try to cover up their mistakes.
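Back to that raccoon form for a second. The GET-versus-POST difference described there can be sketched with Ruby's standard library; the URL and field names below are invented for illustration, not from the talk.

```ruby
require "net/http"
require "uri"

uri = URI("https://example.com/raccoons")

# A GET form encodes its data into the URL itself, which is why the
# raccoon ratings showed up in the address bar:
get_uri = uri.dup
get_uri.query = URI.encode_www_form(rating: 5, name: "Bandit")
puts get_uri   # https://example.com/raccoons?rating=5&name=Bandit

# A POST request sends the same data in the request body instead,
# keeping it out of the URL:
post = Net::HTTP::Post.new(uri)
post.set_form_data(rating: 5, name: "Bandit")
puts post.body # rating=5&name=Bandit
```

Same data either way; the only difference the failure log surfaced was where the browser put it.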
Now not only did that make it more likely that a patient would be harmed by their mistake, but it also meant that all the underlying issues that contributed to that mistake were never dealt with, and the same mistakes would happen over and over and over again. And it is the same story for our engineering teams. You show me a dev team that has a zero tolerance policy for mistakes, and I will show you a dev team where the engineers spend a good portion of their time every day covering up mistakes. If you focus on blameless postmortems and reward experimentation, you are going to have a very different outcome for your team. Now, like anything else that you try, the process that I'm proposing may not work perfectly for you the first time around. And at the risk of going a bit too meta, you just need to figure out what isn't working and see how you can adjust it. That's right, you can learn from the failures you learn from while learning from your failures. Just keep learning from failures. Just fail. Now, as you get more comfortable gleaning information from these failures, you're going to find that every bug is really a feature, as long as you're able to learn from it, even if you end up deleting every single line of code. Now, traditionally, this was actually the end of my talk, and that's why there's this big thank-you screen with my contact info. But every time I've given this talk, I've had people, usually minority women, ask a variation of the same question. They say, I have worked for years to be respected in my field. Do you think it's safe for me to fail publicly? And I think if I'm honest, the answer for most of the people who ask is going to be no. So I wanna share a story with you. I learned to read when I was three. Hold your applause, please. It's not that big of a deal. It does not impress people that you can read once you're 36; it's only impressive when you're young.
But in the 80s, this was enough to get me labeled as gifted, and I got put in gifted tracks in school, and my entire childhood, people told me how smart and brilliant I was. Now, that meant that I never had to be worried about not knowing something. I mean, I couldn't be dumb, because every adult in my life told me how smart I was. So it didn't bother me not to know something. I would just ask a question. And that stuck with me all the way up through my coding boot camp. Front row, hand constantly in the air. And I took that with me to my first coding job. If I didn't understand something, I felt comfortable asking for clarification. If I thought someone else in the room didn't understand something, but I got the sense that they were embarrassed to ask, it didn't bother me to be the one that asked. So I'd ask their questions too. And that worked fine. There were some weird things, like one time in a one-on-one with my manager a couple years in, he said, you're like me, you really have to struggle to learn how to code. And I thought that was strange, because learning to code had never been a struggle for me, with the exception of a very sticky Backbone Marionette implementation, but I think we've all been there. It had just come naturally to me. It was one of the topics that I felt was just so easy. And so it was strange, but I was like, okay, well, I don't need to correct his opinion. It's fine if he thinks I struggled to learn how to code. The thing is, that was great when things were good. I got plenty of work. There were plenty of interesting problems to solve. So then one day, the management team decided that they wanted to cut the engineering team down to about a third of its size. And I did not make the cut. Now, my team was full of really, really amazing people. And there's a chance that I wouldn't have made the cut anyway.
But I do know that when you have to go down to a skeleton crew, the people that you're more likely to cut are the people that you think struggle, that are gonna be more of a drain on everyone's resources, that fail more often. So it's definitely something to be concerned about. It's also something that can be pretty devastating when it happens to you. But I wanna share another story. Four years ago, I applied for a technical support job at GitHub. I went through about five rounds of interviews, and I'd gone through so many rounds of interviews that I was positive that I had the job. I was so positive that when I read the email that said, we think you're fantastic, I was like, yeah, you do. But we've decided to hire someone that has more technical experience. I was like, is that me? Are they hiring me? It took me a couple of readings to catch up and be like, oh, no, no, they think I'm great, but they're not hiring me. And that left me a little bummed, because I loved this company and I wanted to work for them. And I was stuck trying to figure out what my next move was. And I saw a tweet that said, learn to code in 12 weeks. And I was like, that's BS. But I researched it, and it seemed like a legit program, and I ended up going to boot camp and becoming a developer. Fast forward, and six weeks ago, I joined the team at GitHub as an engineer. Now, it sucked to be downsized from my previous engineering job, but the path from not technical enough to get the technical support position to being hired by that same company to be a member of their engineering team would have been much longer for me if I hadn't been free to make those mistakes and learn from them. And you're gonna need to figure out for you what an acceptable level of risk is and adjust accordingly. I had no debt, I had no dependents, I had another income in my household, so the only thing that suffered was my ego.
And it suffered. There was some fetal-position crying on the kitchen floor. It was tough, but my life was never in danger. I have friends, on the other hand, that are the first people in their family not to have a service job. They don't have extra income as a programmer; they have income that pays their parents' bills, that keeps their grandmother from being evicted from an apartment she's had for 50 years that is now 10 times more expensive than it was five years ago. Their risk is higher. It sucks, because being able to experiment and fail and learn from that helps you learn faster, and that's what's gonna skyrocket your career. But not if you can't get hired because people are like, oh, you're a failure. So for those people, I would say project confidence and competence. Write the blog post as if you're the smartest friggin' person in the world, because you flippin' are. But make sure you find a place where you can fail safely, because hiding your mistakes, and trying never to make a mistake, and blaming yourself for making mistakes is going to keep you from learning. So find your people. If you don't have any, CodeNewbie is a great group. I have zero financial interest, but they are fantastic people. You can fail with them. If you can't fail with anyone else, hit me up and be like, I failed so hard, and I'll be like, oh my gosh, I too have failed so hard, and we will just, like, hug it out. So just don't let other people's biases keep you from learning as much as you possibly can. I am going to step off my soapbox and end it there. I wanna thank you folks so much for coming. If you haven't had enough of me talking code yet, you can check out my YouTube channel, youtube.com/compchomp. I'm also on Twitter at JessRudder. I think you're supposed to say, don't at me, but I love being atted, so at me. Cool, does anyone have any questions? Yes, in the blue shirt.
Sure, so the question is, for people managers, are there suggestions for creating a culture that embraces failure? And yeah, don't punish people for mistakes. I mean, that sounds trite, but that really is the number one thing. Amazon Web Services went down sometime in the past year, and it turned out one dev had made a mistake, like a typo, and it took down production and then took four hours to bring it back up. And I remember so many tweets being like, oh man, I would hate to be that dev, he better lose his job, he just cost them millions if not billions of dollars. And I thought, I really hope none of the people tweeting that are managers of developers, because saying those sorts of things, even in jest, is going to make the people that you are in charge of, whose careers you play a huge role in, just terrified of making mistakes. So the more that you can model that it's okay to fail, whether it's showing an example of, look, here someone made a mistake, do we have places in our systems where we would make a similar mistake? How would we fix that? Always putting the focus back on the system that led to the error, instead of saying this is a bad developer, that's definitely gonna help. Now, there's always business people involved at most companies, so it could be that you're just, as a manager, going to have to do the best you can within the system that someone else is putting on top of you. So yeah, just kind of using your manager skills as a shield for the people underneath you is one of the best things you can do. Any other questions? I thought I saw another hand, but I could have just been imagining it. Yes, you sir. Yeah, so the question is, suggestions for ways, on a team, of highlighting failures in a positive way, because they're hard to bring up. So I think it was GitLab that had a major outage and then wrote up this beautiful postmortem blog post detailing everything that went wrong.
So I think the first step is finding things like that and sharing them and highlighting, like, isn't this amazing? Look at the anatomy of this problem. Look how they solved it. Look how no one got blamed, but instead they looked for a solution. So first, finding places where other people have modeled that. Then finding people on the team that feel comfortable modeling that as well. So probably your junior female developer who is four days into her career is not gonna be the person that you should burden with, hey, why don't you write a blog post about how you brought down production, but it was really okay. She might be comfortable with it, but she might go home and curl up in the fetal position in the kitchen and cry, just maybe. So definitely find the people who can do it. I think Anjuan did a great talk earlier about lending privilege. So if you're a senior developer and you're well established in your career and you're well known at the company and you have a lot of social capital, be the one to be like, oh man, did I eff up yesterday, let me walk you guys through how that happened. Find the people that can model that. And especially if you're the manager, I think you can model that, but certainly don't put pressure on any one individual developer, because they may have their own reasons for not wanting to be the person that's like, yeah, I make mistakes, even though every developer on every team could have those stories. Give them a big kiss. No, sorry, definitely don't do that, definitely not without consent. So the question was, as a people manager, what's one of the most comforting things that you can do when someone makes a mistake? So I once made a mistake that cost my company $30,000. I was in search engine marketing. I had an account that was owned by the company whose account we were taking over, and the client.
And I was supposed to run theirs while I was setting up ours, and then when ours went live, I was supposed to turn theirs off. And I forgot to turn theirs off, and it ran for a month, spending their full budget in our account and their full budget in their account. And when I discovered this, I freaked out, because that was about what I was making in a year at the time, and now I'd cost the company my entire salary, and so I was like, I've definitely lost my job, and I'm also probably going to have this huge bill I'm never gonna be able to pay back. And so I was in tears, and I went to probably one of the best humans on earth, my CTO at the time, Dora Patel. And I was like, Dora, this is what I've done, and I'm so sorry. And he said to me, it's okay. And that was, first of all, the most comforting thing. He didn't freak out. He said, it's okay. You know what? I have made mistakes that have cost this company way more than that. This is fixable. And so just having that kind of acknowledgement, first of all, that it was okay, was huge for me, because I'm just like, okay, I'm gonna have to get 10 jobs to pay this back, and I don't have that kind of money squirreled away. And just, it's okay, it was huge. Because when you make a dumbass mistake like that, you feel like you're the only one that's ever done it, and just, yeah, having an acknowledgement that, hey, life is not over, helps a lot, I think. Anyone else? All right, so I have GitHub stickers for anyone interested. I also have CompChomp stickers, if you're a fan of a YouTube channel you've never heard of. And yeah, I'm available for questions or anything else afterwards, on Twitter or in person. Peace.