If you have seen enough TED talks to start thinking we're approaching a limit on altruism, that we're running out of efficient ways of being good to others, well, you ain't seen nothing yet. Let me give you some food for thought.

A friend of mine always wanted to make the world a better place. In her childhood, she planted trees. Growing up, she studied ecology and would never use plastic bags. Eventually, she had to decide on a profession. She chose to be a doctor and studied hard for three years in a row trying to get into med school. Unfortunately, she didn't make it. She went on to become a successful engineer and heard of a great idea, which I'll tell you about in a minute.

Another friend actually managed to get into med school and now spends most of her time as an oncologist, helping people fight cancer. In terms of absolute benefit to the world, it would seem that my medical friend had the greater impact.

As I mentioned, my engineer friend had heard a great idea: she could donate part of her salary to fund a medical school in such a way that it could take on an extra student of medicine. Now let's do the math. Say each doctor saves about 100 lives during a lifetime of work. That means my doctor friend saved 100 lives, while my engineer friend, by creating a new doctor, saved 100 too.

Now let's pause here for a second. There's something counterintuitive at work here. You begin to see it when you consider that had my doctor friend not managed to get into med school, someone else almost as good would have. This kind of reasoning is called counterfactual reasoning. Intuitively, we tend to evaluate the consequences of our choices: doctor, 100 lives saved; engineer, 100 lives saved. I urge you to ask instead: what would have happened if I had chosen otherwise? If someone else got my place as a doctor, say 99 lives saved. If someone else got my place as an engineer, no donation to med school, and thus virtually no lives saved.
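The arithmetic above can be sketched in a few lines. This is a minimal illustration using the talk's own numbers (100 lives per doctor, a runner-up who would save 99); the function name and structure are my own, not anything from the talk:

```python
# Counterfactual impact: compare the world where you act
# against the world where you had chosen otherwise.
# All numbers are the talk's illustrative assumptions.

LIVES_PER_DOCTOR = 100    # lives a doctor saves over a career
RUNNER_UP_LIVES = 99      # the replacement doctor is almost as good
FUNDED_DOCTOR_LIVES = 100 # the extra doctor created by the donation

def counterfactual_impact(with_me: int, without_me: int) -> int:
    """Impact is the difference between the two worlds,
    not the raw consequence of the choice itself."""
    return with_me - without_me

# Doctor: someone else would have taken the med-school place.
doctor_impact = counterfactual_impact(LIVES_PER_DOCTOR, RUNNER_UP_LIVES)

# Engineer: without the donation, no extra doctor exists at all.
engineer_impact = counterfactual_impact(FUNDED_DOCTOR_LIVES, 0)

print(doctor_impact)    # 1
print(engineer_impact)  # 100
```

The point of the subtraction is exactly the shift the talk asks for: intuition scores each career by its raw consequences (100 vs. 100), while the counterfactual difference (1 vs. 100) reveals where the choice actually changes the world.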
So in this particular example, being an engineer and donating has a much greater impact than being a doctor, despite our intuitions to the contrary. You may now think I'm against doctors. Quite the opposite: you could become a doctor and donate twice what the engineer does, saving 200 lives. This is not an ordinary dispute between career choices; it is simply a way of reasoning you should apply to become more effective at something. In this case, altruism. The great thing about counterfactual reasoning is that it's not a huge project that costs millions to implement. It is just an idea which, as soon as you've heard about it, will stick with you through life and make you a better person, even if you don't care.

Now, let us look at a different example. A lot of very smart folks these days are talking about the technological singularity. It marks the moment in time at which artificial intelligence gets really good at, of all things, making really good artificial intelligence. Say we have an AI, smarter than humans, that makes itself ever smarter, not at our engineering speed, but at the almost instantaneous pace of silicon chips, rewriting its own source code, its own mind. How fast could its intelligence grow? Probably too fast, and it's likely we couldn't stop it from implementing its values, be they bad, good, or irrelevant to our collective concerns.

Now, here's the counterfactual issue. When you first think about this, intuition will give you a straight answer: if we don't build AI, we're safe. This is too risky a toy to play with. Intuition thinks causally: I don't do X; therefore, X doesn't get done. But if the people who are concerned about safety back off from AI, then the people who are not as acutely conscious of this problem will remain. Here's what happens: the groups that actually end up creating this brilliantly self-engineering AI, the singularity, will be the ones that don't care about safety.
We'll have a negative singularity, a powerful AI with undesirable values, and the most likely prospect is human extinction. The good choice, counterfactually, is instead to encourage the people who are working on AI research, and tell them to be extremely cautious and quick, before someone else unaware of the huge impact AI will have causes unprecedented chaos. To reason counterfactually, ask: what would be different if I did X, compared with what would happen if I did not do X? This will put you ahead, and sometimes being ahead can really make a difference. Thank you.