Hi, I'm Charlotte, and I'm here to talk about artificial intelligence. When you hear "AI," do you think killer robot? That's an archetype. It's like if I ask you to think of a word that starts with C, you think of cat. That first instinct often isn't true, and it often hides things we've grown up with. Maybe you're not a racist, but you've grown up in a system that has racism, so you might exhibit some behaviors you have to overcome. You have to dispel the myths, the archetypes in your brain, and AI is that way. The killer robots are not what you think. AI is not weaponized, and the vast majority of AI is just algorithms running on data and computers.

But we've grown up with stories, depending on what generation you're from: Robbie, Lost in Space, HAL 9000, the Hound from Fahrenheit 451, The Terminator, The Matrix, Detroit: Become Human. We think those are AI, but our overlords are not coming. There's no evidence that we have artificial general intelligence. These computer systems are not making plans on their own. They don't have autonomy. They don't have emotion. They're not alive. They are only as good, or as bad, as the tasks their human creators assign them.

But people are building autonomous lethal robotics. These are computer vision and navigation systems that can move independently and select targets, and they have the capacity to kill through on-board weapons. Take a robot like Boston Dynamics' Spot, or the parkour robot you see here: it moves in some ways better or faster than humans can. It has limitations, its battery among them, and some of its navigation is remote controlled, but it can navigate difficult terrain. It can jump. It can go into dangerous areas. Robots like these are being designed to do good work for people: search and rescue, clearing land mines, fighting fires, working on dangerous construction sites, cleaning up nuclear waste.
But we also have soldiers in Arizona flying drones over the Middle East. They're eliminating potential targets remotely, and then they're going home to dinner. Some people say that autonomous lethal robotics will save soldiers' lives. And we know that military technology finds its way into our local police forces. So we need to ensure that autonomous lethal robotics are regulated through ethical, political, and legislative processes. The most important thing is that you're informed that AI can be used this way, by people making decisions.

In 1995, blinding laser weapons were banned from battlefields across the world. In 1997, the Chemical Weapons Convention banned chemical weapons. And even before that, we had the Geneva Protocol, which took effect in 1928. So political action and treaties can work. Once the cat is out of the bag, as with nuclear weapons, it's really hard to stuff it back in.

So what we need to do is share awareness of autonomous lethal robotics. Write to your representatives and make some noise. We can create commonsense legislative protections so that people aren't allowed to build and deploy systems that decide life and death on our behalf. And we can use AI for positive change.