Designing mechanical linkages can be a really difficult and finicky process, but I have to say, it does have its moments. In my day job, I'm a mechanical engineer, which means I spend a lot of my time trying to figure out how to build mechanisms and structures that are safe, effective, and, importantly, as cheap and easy to manufacture as I can make them. As an old adage puts it, an engineer can build for a nickel what any fool can build for a buck.

That last bit might sound a little weird, because it's not part of the popular conception of what engineering is. Whenever people talk about engineers in media, they tend to focus on things like precision and exacting quality. They're often portrayed as people who'd use calipers and a straightedge to ensure that their PB&J sandwiches are cut precisely in half. If you hear the word "engineering" used in commercials, it's almost certainly not being used to mean "we made this for as little money as we possibly could." Precision is definitely part of what engineering is, but it's probably the inverse of how you usually think about it. There's one concept that I think describes the whole philosophy of the discipline admirably: tolerance.

Let's say that I have two metal plates that I want to attach to each other using a bolt and a nut, and I want these edges to line up perfectly. Obviously I'm going to need to drill a couple of holes in the plates so I can put the bolt through them, but how big should those holes be? If we were trying to be sticklers about precision, like the popular conception of engineers might have us believe, you might think the answer would be "as small as possible": almost exactly the same size as the bolt. No joke, that's the answer I gave on the first day of class. The thing is, if the holes are both exactly the same size as the bolt, everything else has to be absolutely perfect or the parts won't go together the way we want them to.
If the holes are in slightly different places, if they're at a tiny angle, if they're out of round or a bit smaller than they should be, if it's a little too cold or too hot, the whole thing won't work. That's a nightmare for the people in charge of making these parts. They'd have to carefully measure and control every single step of the process, and it's going to take forever and be super expensive.

As weird as it sounds, to properly specify these parts, it's an engineer's job to figure out, precisely, to the tiniest fraction of an inch, exactly how screwed up those holes can be and still have everything work the way it's supposed to. That means finding a tolerance: a lower and upper bound on each hole's location and size that will always allow the bolt to pass through and fasten the plates together. If you're thinking about it that way, you really want the bolt holes to be as big as possible, to allow the maximum amount of slop in drilling them out. Precision still figures into that process. It takes a lot of careful planning to find the limits of just how badly a part can be made while still working as intended, but it definitely feels backwards from the way it's usually portrayed.

Ever since I learned about the engineering philosophy of tolerance, it's informed a lot of how I think about the world, and it's flipped a lot of my initial perceptions on their head. For example, take the Roman aqueducts. These massive stone structures have stood for thousands of years against the elements, long after the empire they were built to supply fractured and ultimately lost use for them. They are beautiful, inspiring works of ancient engineering, and they're horrifically overbuilt. The Romans could have built these structures much less robustly, to last, say, 200 years, and instead used those resources to, I don't know, make more roads, build better walls, keep the military and the citizens happy.
Instead, they built these things to last millennia longer than they would end up being used. From the standpoint of tolerance, they spent too much, built too big, and paid for it.

Another example: this is a video from a wing flex test of the Boeing 777 in the mid-90s. The wing was designed to withstand 150% of the expected maximum load, one and a half times the rated weight capacity of the aircraft. You can see it bend in a way that you never, ever want to see outside of an engineering test, but it makes it to 150%, no problem. At 154%, it fails, and you can see the engineers celebrating that failure, and rightfully so. Without any modern simulation software, they were able to calculate exactly how thin to build every spar, how few rivets were needed, every aspect of the wing necessary to just barely brush past catastrophic failure. If the wing had made it to 160% or 200%, we might think of that as a good thing; after all, it's exceeding expectations. But an engineer would see it as a failure to predict exactly how much strength was necessary: an improperly toleranced design.

Of course, there are factors that limit where that sort of thinking is useful. We can only design down to the wire if we know exactly where that wire is. Systems that are subject to large unknowns are, in all likelihood, going to be over-engineered, just because we don't know how close they'll get to the worst-case scenario and we don't want them to fail if that's where they end up. The more uncertainty there is about the materials or the conditions they'll be used in, the harder it will be to find appropriate corners to cut.

But this kind of thinking is also surprisingly helpful in numerous areas outside engineering. Programmers, doctors, business owners, race car drivers: everyone who deals with the balance between investment of resources and satisfaction of certain criteria can benefit from some careful analysis of exactly how screwed up things can get before they'll stop working.
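To make that kind of worst-case analysis concrete, here's a toy version of the check an engineer might run on the bolt-and-plates example from earlier. Every number here is made up for illustration, and the model is deliberately simplified (a rigid bolt, two holes, no angle or out-of-round error); real tolerance analysis follows standards like ASME Y14.5.

```python
def worst_case_fit(bolt_dia, hole_dia, hole_dia_tol, position_tol):
    """Return True if the bolt still fits even when both holes are drilled
    as small and as far off-center as their tolerances allow."""
    smallest_hole = hole_dia - hole_dia_tol
    # Each hole's center can wander up to position_tol from nominal, so the
    # two holes can end up offset from each other by up to 2 * position_tol.
    worst_offset = 2 * position_tol
    # Diametral clearance in the smallest allowed hole; the bolt passes
    # through both plates as long as this covers the worst-case offset.
    clearance = smallest_hole - bolt_dia
    return clearance >= worst_offset

# A 1/4" bolt through 0.281" holes, with +/-0.005" on diameter
# and +/-0.010" on position: enough slop to always assemble.
print(worst_case_fit(0.250, 0.281, 0.005, 0.010))  # True

# The same bolt through near-bolt-sized 0.257" holes: no longer guaranteed.
print(worst_case_fit(0.250, 0.257, 0.003, 0.010))  # False
```

Notice what the check rewards: bigger nominal holes buy slop everywhere else, which is exactly the "as big as possible" answer from before.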
But what about when things get even more screwed up than that? Engineers are aware that sometimes life happens. Even the most well-thought-out and robust designs, with plenty of extra room built in for the most extreme situations, will unavoidably be placed into scenarios that nobody could have anticipated. Elephants sit on lawn chairs, rockets explode, and an ever-increasing number of items get placed into a 100-ton hydraulic press.

Really good designers plan for every eventuality, even the ones they haven't planned for, by building things to fail gracefully: to stop working in the most user-friendly fashion possible. This also might seem counterintuitive at first. If you're expecting something to get broken, just design it so it doesn't break in the first place, right? But engineering for graceful failure acknowledges that not everything can or should be built like a tank, and even if it is, there's always a bigger tank ready to flatten it. Better for something to break in a reliable, safe, or even repairable fashion than to simply explode in a shower of well-toleranced parts.

A great example of this is what's allowing me to talk to you right now. TCP, the Transmission Control Protocol that lets computers form networks like the internet, is built to gracefully accommodate failures in the transmission of information. If the connection between computers is noisy or lossy or congested and a packet gets dropped, nothing melts down. The user doesn't even see any errors. All that happens is the protocol informs the transmitting computer that its last packet didn't make it through, and it'll have to try again. On the hardware side, commercial door locks fail gracefully and deliberately when they're involved in fires.
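That retransmit-until-acknowledged idea can be sketched in a few lines. This is a toy stop-and-wait scheme of my own, not real TCP (which adds sequence numbers, sliding windows, checksums, and adaptive timeouts), and the drop rate and function names are invented for illustration.

```python
import random

def lossy_send(packet, drop_rate, rng):
    """Simulate a channel that silently drops some packets."""
    return None if rng.random() < drop_rate else packet

def transmit(messages, drop_rate=0.3, seed=1):
    """Deliver every message despite drops by resending until acknowledged."""
    rng = random.Random(seed)  # fixed seed so the simulation is repeatable
    delivered, attempts = [], 0
    for msg in messages:
        while True:
            attempts += 1
            if lossy_send(msg, drop_rate, rng) is not None:
                delivered.append(msg)  # receiver acknowledges; move on
                break
            # No ack came back: nothing melts down, the sender just retries.
    return delivered, attempts

out, tries = transmit(["pkt0", "pkt1", "pkt2", "pkt3"])
print(out, tries)  # every packet arrives, some only after multiple attempts
```

The graceful part is that drops are an expected, handled case rather than an error the user ever sees; the only cost is a few extra attempts.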
You can't really expect a lock to operate normally when a building is burning hot enough to melt steel, but lock manufacturers will often include components inside the lock that are made of low-temperature plastic, which will quickly melt away and leave the lock open, making it easy for anyone trapped in the building to escape without issue. Sure, we could engineer every lock out of some ridiculous super-high-temp material that would keep working in the middle of a volcano, but it makes a lot more sense to simply design it to fail gracefully.

This is another one of those engineering principles that has profound implications for numerous other disciplines, or even life in general. We all take risks, and sometimes things simply break apart in unfixable ways. Having a plan for how to make abject failure bearable can be a great way to ensure that even when everything goes wrong, the result won't be too catastrophic. Just ask Xanatos from Gargoyles. That dude knows how to fail gracefully. His plan Bs are better than any plans I've ever had.

Do you see any value in these principles of engineering for your own pursuits? Please leave a comment below and let me know what you think. Thank you very much for watching. Don't forget to blah, blah, subscribe, blah, share, and don't stop thunking.