You know, I've always been obsessed with big questions, and the bigger the questions, the more obsessed I've been with them: what's going on with the ultimate nature of reality, and so on. The two biggest mysteries in science, the way I saw it when I lay in my hammock as a teenager between two apple trees, were the mystery of our universe out there, which I've spent the first 25 years of my career on, and then our universe in here: intelligence, consciousness, mind, which I've been working on in recent years at MIT. In the very big picture, what's happened is that our universe has woken up, become aware of itself, and some of these self-aware parts of our universe, namely us, have managed to figure out so much now about how it works through science that we're beginning to get better and better at building technology to actually act back on our universe, right? Earth looks very different now, at least the Boston area does, from a million years ago because of what we can do with our technology. We talked earlier about how we've even discovered that we actually have the potential to do great things that help life spread through our universe, help our universe wake up much, much more. We also, of course, have the potential to do the opposite: we're on the cusp of technology that could just eliminate life completely and make our universe permanently go back to sleep. So in some sense I feel that after 13.8 billion years, our universe has reached this interesting fork in the road.
We can take the life route or the death route, and I think there's no more exciting question to work on than trying to make sure we make the right choice at this fork. The way I think about it is that I'm quite optimistic we can create a really inspiring future with technology, but that's going to require winning what I call the wisdom race: the race between the exponentially growing power of the technology and the wisdom with which we manage this tech. Some people these days, I find, treat technology as their new religion. They basically worship it and say technology is morally good: the more technology we have, the better, automatically. But it's important to remember that technology is not actually morally good, nor is it morally evil. Technology is just a tool, an amplifier of your power to do good or evil. And that means that the more powerful the tech becomes, the more important it is to think about how you steer it, what you do with it. Sometimes people ask me about powerful tech like AI, "Are you for it or against it?" and I always ask them, "How about fire? Are you for fire or against fire?" And that shows how ridiculous the question is. Fire isn't evil or good, but you know, I'm all for using it to keep this house warm in the winter, and I'm all against using it to burn down our neighbor's house. So how can we actually win this wisdom race, then, and make sure that we develop our wisdom for technology management fast enough that we do good things with our tech? The big thing we learn by looking at history is that in the past we've always used the strategy of learning from mistakes. First we invented fire, screwed up a bunch of times, then we invented the fire extinguisher. First we invented the car, screwed up a lot of times, a lot of people died.
Then we invented the seat belt, the airbag, the traffic light, laws against driving too fast, laws against driving when you're drunk as a skunk, and stuff like this. So the wisdom was always reactive. Is there still room for reactive wisdom? Well, there was with fire and so on: yes, there were a lot of tragedies, but I think we've more or less ended up in a situation today where fire's impact is much more positive than negative, except for climate change. But there's a catch here, because technology is getting exponentially more and more powerful, right? At some point it crosses a threshold of power where learning from mistakes goes from being a good strategy to being a really, really bad strategy. And I feel we've already crossed that threshold with nuclear weapons; we're on the verge of crossing it with synthetic biology, which can be used fantastically to improve our health or to create engineered pandemics; and we're totally going to cross it with artificial intelligence if we succeed in building artificial general intelligence that can outsmart us in all ways. Which means that now is the time to shift mindsets in this wisdom race: away from just learning from mistakes, toward instead being proactive, anticipating what can go wrong, to make sure we get it right. This might sound kind of obvious to you, since I see you are nodding, but it's funny because people sometimes tell me, "Shh, don't talk like that. That's luddite scaremongering." It's not scaremongering; it's foresight. It's having foresight and figuring out how to work together across the countries of the world, the scientists of the world, the ethicists of the world. This is extremely important. And nerdy MIT people like me, we call it safety engineering.