We have quite a few people saying that artificial intelligence is a potentially life-threatening, existential risk for society, for us as humans. And I would tend to agree, except on the time frame, you know, right now what we're seeing is that most intelligent machines are really just fancy software. We're still not at the point where we can say, okay, this is definitely going to redo the fabric of humanity. The next level is artificial superintelligence, ASI, the result of what's called the intelligence explosion, which is computers making their own decisions, cutting us out, sidelining us, or eventually just treating us like pets. The much more burning question today is not whether machines will take over, but whether we become too much like the machine, you know, by essentially letting the machine call the shots while we just kind of say, okay, you know better, right? That is a much bigger danger. In my book, Technology vs. Humanity, my final conclusion is that we should embrace technology but not become it. And this is a very important message, because there's no way we can go back on technology and put it back in the can; that option really isn't available to us. So the conclusion is that we need to use technology for our own purposes, so we can stay on top of it, stay in the loop, and let it benefit us. But we should not let technology take over the core of what we are, our thinking, our relationships, our engagement, our emotions, and we should not reduce our humanity just because that would make us a better fit for machines. In my view, the future is not about us becoming more like a machine, exponentially powerful, so to speak, but about becoming exponentially human. We need to decide who we want to be: do we want to be technology, or do we want to remain human?