Are people really going to get upset if I make an apocalypse joke here? I mean, come on, it's not the end of the world.

Enrico Fermi was a brilliant physicist who's sometimes called the father of the atomic bomb. He's kind of like Descartes in that he has a whole Wikipedia page dedicated to stuff that's named after him. One of those things is Fermi estimation, a technique for which Fermi was famous: estimating an answer to a problem with many unknowns. I'll sketch what that kind of back-of-the-envelope math looks like in a minute. In one famous example, Fermi was having lunch with some colleagues at Los Alamos when the topic of extraterrestrial intelligence came up. You know how nerds are, it's always Star Trek this and the theoretical basis for nuclear power that. The conversation spiraled off into other topics until Fermi interrupted everybody by asking, out of left field, "Where are they?" He had crunched some approximate numbers in his head and decided that the universe should be teeming with alien civilizations, civilizations we should have detected by now. And yet, silence. This famous speculation has also been named after Fermi. The Fermi Paradox is also known as Fermi's Question, or The Great Silence, which is now the name of my John Cage cover band.

Quick note before we delve into some of the ramifications of Fermi's Paradox: I'm just going to put a big ol' asterisk right there. We don't have any independently evolved intelligent lifeforms around to compare human beings to, so any projections about alien intelligence are necessarily extrapolating from a single data point, with many controversial assumptions. However, astrophysicists predict there's a good chance there are over 60 billion habitable planets in the Milky Way galaxy alone, which is only one of roughly 100 billion galaxies that we can see. That's a crazy number of chances for intelligent life, and a huge deficit of activity to make up for. It's like we're standing in the middle of Times Square and there's nobody else there. Spooky.

There are many theories that have been developed to explain Fermi's Paradox, and they basically fall into three main categories: we're special, we're stupid, or we're screwed. First, it might be that Earth is just ridiculously well-suited for intelligent life in a way that we haven't really discovered yet. Like, one-in-a-septillion-planets-in-the-universe special. Unlikely, but possible. Second, we might just be being stupid. Alien civilizations might exist, maybe even close by, but maybe we're trying to communicate with them in the wrong way, or maybe they just don't have any interest in talking to us. But perhaps the most disturbing answers to Fermi's Paradox suggest that alien civilizations have existed right next door; just, none have survived long enough to make contact with us. It might be the case that alien life everywhere in the universe follows similar paths of development: they reach a certain point, and then they stop transmitting. Maybe intelligent life has a tendency to destroy itself.

Now, that's a pretty big leap from "it's too quiet," maybe. But human beings have no shortage of theorized apocalypse scenarios. Wikipedia even has a helpful catalog of them on a page entitled "Global Catastrophic Risks." Man, Wikipedia, I love you, but you really creep me out sometimes. Many of the entries on that page are pretty familiar tropes from science fiction, like malevolent artificial intelligence or runaway nanobots, which reference some budding technology whose potential we haven't fully unlocked yet.
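To make "crunched some approximate numbers in his head" a little more concrete, here's a minimal sketch of that kind of back-of-the-envelope estimate, in the spirit of the Drake equation, which formalized Fermi's lunchtime math about a decade later. Every parameter value below is a made-up, illustrative guess, not a measurement; plug in your own and argue in the comments.

```python
# A Fermi-style estimate of how many broadcasting civilizations the
# Milky Way might host, in the spirit of the Drake equation. Every
# value is a hypothetical guess for illustration, not a measurement.

R_star = 7.0     # new stars formed in the Milky Way per year (guess)
f_p    = 0.5     # fraction of stars that have planets (guess)
n_e    = 2.0     # habitable planets per planet-bearing star (guess)
f_l    = 0.3     # fraction of habitable planets where life appears (guess)
f_i    = 0.01    # fraction of life-bearing planets that evolve intelligence (guess)
f_c    = 0.1     # fraction of intelligent species that broadcast signals (guess)
L      = 10_000  # years a civilization keeps broadcasting (guess)

# The whole trick of Fermi estimation: multiply a chain of rough
# guesses, and the individual errors tend to partially cancel out.
N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(f"Broadcasting civilizations in the Milky Way right now: ~{N:.0f}")

# And the raw number of chances, using the figures from earlier:
# ~60 billion habitable planets per galaxy, ~100 billion galaxies.
print(f"Habitable planets we can see: ~{60e9 * 100e9:.0e}")
```

Even with guesses that modest, you get a couple dozen chatty civilizations in our galaxy alone, which is exactly why the silence bothered Fermi.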
Now, I'm as much a futurist and a technological optimist as anybody on the internet, and I'm a huge scientific discovery fanboy, but even I recognize that we don't always use the power of technology wisely, or even sanely. You can debate the assumptions of Fermi's Paradox all day, but a powerful technology that ends all human endeavor? That's not just sci-fi; that almost happened. I mean, atomic energy, the very first potentially world-destroying power that we discovered, that Fermi helped to discover, actually came within a hair's breadth of ending all human life. If it weren't for Stanislav Petrov, a Soviet duty officer who decided not to believe what turned out to be a false missile alert from an early-warning system, we wouldn't be on YouTube right now; we'd be in Fallout. Not playing Fallout. In Fallout.

And as the pace of discovery and development increases exponentially, it's becoming harder every day to figure out just how we're supposed to keep Petrov's choice from becoming a more and more frequent thing. We're already innovating faster than legislation can keep up with. I mean, we've had smartphones for the better part of a decade, and the Supreme Court has only just decided that police officers need a warrant to search them. To put that into some perspective, smartphones had already played an instrumental role in the Arab Spring three years before the American legal system figured that out. So depending on legislatures all over the world to keep pace with stuff like self-assembling nanomachines, and to pass smart laws fast enough to keep them from being a problem? That might be asking a lot.

So what can we do? What resources are actually available to keep us from adding to the Great Silence? Well, we might partially depend on the people smart enough to develop these technologies to know what to do with them, or more importantly, what not to do with them. In the wake of the bombings of Hiroshima and Nagasaki, many of the scientists and researchers who were part of the Manhattan Project campaigned against the use and proliferation of nuclear weapons. Unfortunately, not very successfully. Here's a quick piece of advice: when a group of individuals noted for their brilliance publicly announces that something they've spent four years of their lives working on is a threat to world safety and should never, ever be used again, maybe listen to them. On a happier note, we weren't ready for all of the ethical concerns surrounding human cloning, so geneticists around the world sort of agreed to hold off on that can of worms until we had a plan, and that actually worked.

Another option might be to create review boards of experts in certain technologies to anticipate potential existential threats and to make sure that we steer clear of them. For example, Google recently created an ethics board for artificial intelligence to oversee its development of this new technology, because it might be really, really dangerous. That's just Google, though. How about an ethics board for every branch of science and technology? A group of unbiased, qualified experts who can put a hold on any corporation or lab that might pose a threat to humanity.

These solutions certainly aren't bulletproof. That list on Wikipedia is long, and it's going to keep getting longer. But hopefully, we can use some of our drive for innovation to figure something out, so that centuries down the road we'll finally get a transmission from an alien civilization that's been listening to our broadcasts for a long, long time.
I imagine that they'll say something along the lines of, "Could you produce Sherlock episodes any faster? Please."

Do you have any ideas for how to prevent humanity from potentially destroying itself? Please leave a comment and let me know what you think. Thank you very much for watching. Don't forget to blah, blah, subscribe, blah, share, and I'll see you next week.