Hi, I'm Charlotte, and I'm here to talk about artificial intelligence. Did you know that one of the primary goals of AI systems is to interact naturally with people? These are AI systems that learn human patterns of speech, our gestures, and our affect, and are then programmed to respond appropriately. Natural interaction can be used to create interfaces that feel normal to us, so that we feel comfortable using the technology, or to process human movement and speech patterns in ways that keep us safe.

There are three common areas of natural interaction. The first is speech: anything to do with talking. The second is gestures: hand and body movements. And the third is affect: the tiny movements and inflections that cause us to feel something, like how we believe an actor on the movie screen.

The first area, speech, could be text-to-speech or speech-to-text. It could be Google Duplex making a phone call on your behalf, where the person on the other end doesn't even know that a computer has called them. It could be autocorrect when you're sending a text, or Gmail filling in your email with the phrases you've used in the past. And it could be a screen reader that reads a document someone has sent you out loud. Speech also includes translation, but don't use it for your homework, because the culture and the nuance are lost. It's okay in a pinch, though, if you're trying to talk with someone in another language.

The second area of natural interaction is gesture recognition. This is any way that you move your hands or your body to communicate. The most popular, least expensive technology you may have encountered is the Xbox Kinect, where you're dancing or playing a sport and your movement is recognized on the screen. You may also have seen movies where someone resizes a window just by gesturing with their body. That's not quite here yet, but it's coming soon.
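To make the autocorrect idea concrete, here is a minimal sketch of how a spell-corrector might pick a suggestion. It is a toy, not how any real phone keyboard works: it assumes a tiny hand-picked dictionary and simply suggests the known word with the smallest edit distance to what you typed.

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance: the minimum number of single-character
    insertions, deletions, or substitutions turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def autocorrect(word: str, dictionary: list[str]) -> str:
    """Return the dictionary word closest to the typed word."""
    return min(dictionary, key=lambda w: edit_distance(word, w))

words = ["speech", "gesture", "affect", "natural"]
print(autocorrect("spech", words))    # → speech
print(autocorrect("jesture", words))  # → gesture
```

Real systems also weigh how common each word is and which keys sit next to each other, but the core idea (find the nearest known word) is the same.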
Gesture recognition systems can also recognize patterns in how people move their bodies, to identify a shoplifter or to protect a crowd from someone who may become violent.

The third area of natural interaction is affective computing. That's within AI, and it's the study of how computers can process human emotions: our tone of voice, the tiny movements we make to convey how we're feeling, and the facial features that show we're happy, sad, angry, or excited. We often think of this in our mouths, but you can see it in the eyes too. Look. Right? And we can combine gesture recognition with affective computing to tell if a driver is getting sleepy, texting, or has been drinking, and that'll keep people safe on the roads.

Natural interaction is an area of computer science research that makes AI systems accessible to everyone. It processes the things we do automatically, like tone of voice and the positioning of our bodies, into a format that computers can understand. Then programmers can write systems that interact with us in a way that makes us comfortable and safe.
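One way the sleepy-driver idea can be sketched is with the eye aspect ratio (EAR), which compares eye height to eye width from a handful of eye landmark points: it drops toward zero as the eye closes. The six points and the 0.2 threshold below are illustrative assumptions; a real system would get landmarks from a face-tracking model and tune the threshold for its camera.

```python
import math

def eye_aspect_ratio(eye: list[tuple[float, float]]) -> float:
    """EAR from six (x, y) eye landmarks p1..p6, where p1 and p4 are
    the horizontal eye corners and p2, p3 / p6, p5 sit on the upper
    and lower lids."""
    p1, p2, p3, p4, p5, p6 = eye
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def looks_drowsy(eye: list[tuple[float, float]],
                 threshold: float = 0.2) -> bool:
    """Flag a mostly-closed eye (illustrative threshold)."""
    return eye_aspect_ratio(eye) < threshold

# Hypothetical landmark coordinates for an open and a nearly closed eye.
open_eye   = [(0, 0), (1, 1), (2, 1), (3, 0), (2, -1), (1, -1)]
closed_eye = [(0, 0), (1, 0.1), (2, 0.1), (3, 0), (2, -0.1), (1, -0.1)]
print(looks_drowsy(open_eye))    # → False
print(looks_drowsy(closed_eye))  # → True
```

A deployed system wouldn't act on a single frame; it would watch the EAR over time and alert only when the eyes stay closed for longer than a normal blink.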