What are the risks of artificial intelligence, and what should we be doing about them? The idea of AI has been around for decades, but recent advances have completely transformed what we can achieve with the technology, and as a result, many experts believe we're teetering on the edge of an AI revolution. This is fantastic news if you're struggling with really knotty problems like curing incurable diseases, or solving world hunger, or designing safer and faster ways of getting from A to B, or even transforming how people learn. Here, smart machines are already paving the way to doing things which would have been impossible without them. But there are downsides to creating machines that can think and act way faster than humans, and often in ways that we don't understand. Because of this, we need to be thinking now about what might possibly go wrong so we avoid potentially disastrous mistakes in the future, and this is more important than ever with the rise of transformative AI technologies like GPT-4 and other large language models. As we grapple with the coming AI revolution, here are 10 things we should probably be thinking about now.

1. Machines that make it harder to think for ourselves. Of course, apps like ChatGPT, Bing, and even Google are already doing that to some extent. But the more we rely on smart machines to do stuff for us, the more we risk losing skills that we may one day regret.

2. Machines that take away jobs. This is a huge issue as smart machines begin to outperform smart people. The big question is whether we can keep on inventing jobs that humans do better, or if we need to consider a future without paid jobs.

3. Machines that learn from our worst habits. As we train smart machines, will they end up as opinionated, biased, and antisocial as some of their human trainers? The evidence so far is that unless we work hard on avoiding this, sadly they will.

4. Machines that make decisions we don't understand.
Imagine failing a job interview or flunking a class or being fined, all because a machine made a decision that no one could explain, but you had to live with it anyway. Scarily, we're already on the slippery slope of giving smart machines the authority to make such decisions without being accountable for them.

5. Machines that don't understand what's important to people. Imagine an AI that has the power to change your life, but it has its own ideas of what's important and what's acceptable. Great if you both agree, but a nightmare if you don't.

6. Machines that kill people. Of course, machines kill people all the time, but usually there's a person making the final decision. What happens when we replace that person with another machine?

7. Machines that alter their own instructions. Interestingly, we want machines that can think for themselves when solving complex problems, but what if they get too smart?

8. Machines that make smart, dumb decisions. We know that smart people can make decisions that look really stupid with hindsight, especially when they can't appreciate the full consequences of their actions. Sadly, from what we know of complex systems, smart machines are likely to have exactly the same problem.

9. Machines that decide we are not needed. What if AIs get so smart that they realize that people are a waste of space? The good news is that we're probably a long way off from super-intelligent machines that are disdainful of mere human beings, but it's worth thinking about just in case.

And 10. Machines that use our human weaknesses to control us. What are the chances that we create machines that are smart enough to understand our human foibles and use them against us? If we're not careful, we won't fear our AI overlords; we will worship them.

Of course, despite these and other risks, AI could make our lives amazingly better if it's developed responsibly. But this won't happen if we don't think about the possible downsides up front.
The good news is that AI developers are already beginning to do this, but we're going to need a whole lot of input from experts in many other areas if the benefits of AI are to far outweigh the risks.