Happy Friday, everyone! This is my first time presenting at a conference, so I'm a little nervous, but let's get started. My name is Kairan Quazi. It is such an honor to be with you this week. I still can't believe that I was invited to speak at this conference. I've met so many inspiring people this week, and it showed me how much I still have to work toward.

Okay, a little bit about me and why I'm here. Earlier this year, the Linux Foundation contacted me after the San Francisco Chronicle featured me in an article. Today, I'm 10 years old. I am a sophomore in college, and I started college when I was 9. I'm a triple major in math, computer science, and possibly physics, if I can squeeze in all the credits. I'm also proficient in twelve programming languages, and I've earned three certifications in deep learning. And, possibly the coolest thing ever, I spent this summer as a research mentee with the Anticipatory Computing Lab at Intel Labs, where the team is developing several game-changing AI technologies. My team is currently working on cutting-edge AI capabilities to support people with disabilities. My mentors give me so much hope for careers in technology that also have a big mission. I have hope that in my generation's lifetime, we will witness the infinite possibilities of AI to revolutionize our socioeconomic institutions and deliver equity to marginalized communities.

At Intel Labs, my lead mentor is Lama Nachman. She's an Intel Fellow and Director of the Anticipatory Computing Lab. In 2012, Lama led a team of researchers to develop a new software platform and sensing systems to improve the way Dr. Stephen Hawking communicated. Wow, imagine working with Stephen Hawking and giving the world a chance to hear his voice and access his thoughts. It's like a dream. However, the team also realized that there was an unaddressed need in this space. They saw a need for an open and configurable platform for assistive communication.
A new type of system that researchers and developers can build upon to bring new innovation and help more people gain access. See, the issue today with assistive communication platforms is that the economies of scale don't work. Relatively speaking, the number of people with disabilities is not that large. And more importantly, solutions need to be tailored to each specific case; you can't just use the same solution for everyone. This is why these assistive communication systems end up being very expensive, think thousands of dollars per unit, which makes them inaccessible for many people. So the Intel team decided to build a configurable platform and release it to open source. This way, people can easily build on top of it and customize the system to the specific needs of each person. An open source platform makes it much easier to build tailored systems for people with different needs.

ACAT, the Assistive Context Aware Toolkit, is a software platform that enables people to access their Windows applications with a simple trigger that can be mapped to any muscle movement. While graphical user interfaces are easy to understand and very intuitive, they become a nightmare if you can't move a mouse. Essentially, you're scanning a two-dimensional space searching for a point, when what you really want to do is choose one option out of many. If the computer can track your gaze, this becomes easier, but for many people that's not an option. Certainly for Stephen Hawking, it wasn't an option at all. So in ACAT, many functions are automated and served as options to select from menus, whether it's opening a file, surfing the web, giving a lecture, and so on; you get my point. Depending on the context of the application, the appropriate contextual menus surface to reduce reliance on the mouse. Developers can easily build new menus for new applications. ACAT also supports many languages.
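If you're curious, here's roughly what that single-trigger selection looks like. This is a little sketch of my own, not the toolkit's actual code: the menu options cycle one at a time, and `trigger_pressed` stands in for whatever sensor event the user's muscle movement is mapped to.

```python
def scan_and_select(options, trigger_pressed, polls_per_option=3, rounds=10):
    """Highlight each menu option in turn; a single trigger event
    (any mapped muscle movement) selects the currently highlighted one."""
    for _ in range(rounds):                    # keep cycling until a selection
        for option in options:
            print(f"> {option}")               # "highlight" the current option
            for _ in range(polls_per_option):  # dwell: poll the sensor a few times
                if trigger_pressed():
                    return option
    return None                                # user never triggered
```

The key point is that the user only ever needs one binary signal, so the same loop works with a cheek sensor, a push button, or anything else that can be read as pressed or not pressed.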
To enable a new language, people can translate the menus and train a language model on an appropriate text corpus. After working closely with Dr. Stephen Hawking for seven years to discover appropriate solutions for people with disabilities, the team built many different functions to support them. And since different people might be able to access different muscles, ACAT supports many different sensors: a cheek sensor, a ring sensor, and the assistive switch, which is just a push button. Developers can add a new sensor in just a day to tailor the system further. And one of the cool things that I worked with was the brain-computer interface. An amazing thing about BCI is that if a user can't move any muscles to trigger a sensor, it can still read their brain waves and use those as a trigger into ACAT.

In the current version, AI is used in two ways: the word prediction system, and the perception system if you're using a camera or BCI as a sensor. However, while this allows people to communicate and speeds up typing dramatically, because they only have to type the first few letters and then rely on word prediction, it's still not fast enough. If you spoke to Stephen Hawking, or to any other person using assistive computing, you would notice a long pause after you say something before they finally respond. This is because forming these sentences, whether you're using gaze or anything else, is still very tedious.

So the project that I worked on used AI in a different way. What if we listened to what the other person was saying, converted it to text using ASR, and then tried to predict what the user might say, serving the predictions as options they can choose or modify? This would truly start to feel like a real conversation. And the beauty of this is that as the user continues to use the system, it learns from previous examples and continues to improve over time.
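To make that listen-predict-choose loop concrete, here's a toy sketch of my own. It is not the actual project code: I use a simple string-similarity lookup over past exchanges where the real system would use a learned model, and `heard` stands for the text coming out of ASR.

```python
from difflib import SequenceMatcher

class ResponseSuggester:
    """Toy response predictor: suggests likely replies based on past
    conversations, and learns from whichever reply the user picks."""

    def __init__(self):
        self.history = []  # list of (heard_utterance, chosen_reply) pairs

    def suggest(self, heard, k=3):
        """Rank past replies by how similar their prompt was to `heard`."""
        scored = sorted(
            self.history,
            key=lambda pair: SequenceMatcher(None, pair[0], heard).ratio(),
            reverse=True,
        )
        return [reply for _, reply in scored[:k]]

    def learn(self, heard, chosen_reply):
        """Remember the reply the user actually chose (or edited),
        so future suggestions keep improving."""
        self.history.append((heard, chosen_reply))
```

The interesting part is the feedback loop: every reply the user selects or modifies goes back into `learn`, so the options get more personal the longer the system is used.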
This is the promise of true human-AI collaboration, where it can bring out the best of both sides and help people with disabilities thrive, not just cope. ACAT has the potential to change the lives of people with ALS and other motor neuron disabilities. As we add new features, we are making ACAT more dynamic, more interactive, and easier to use. This in turn makes it more accessible. It is my fervent belief that in the near future, technology like ACAT will be sophisticated enough to help people with physical challenges and disabilities lead more independent lives.

On a final note, I would like to share something personal. I'm standing here today only because people in influential positions took a chance on me. They changed my life. As a result, I have had the chance to appear on numerous television shows and in publications and share my story with people around the world. The administrators at my college could have looked at me and said no. Lama Nachman and Intel Labs could have looked at me and said no. Without visionary leaders who are willing to use their influence to create opportunities for equity, what happens to kids like me? We're forced to fend for ourselves in a world that sees no more than our age. I implore you today: please take a risk on people who don't fit in any identity box. Please use your influence to say yes to someone.

Please look out for my feature in the fall issue of the MIT Technology Review. Also, please follow my YouTube channel, where I discuss issues in science and current events. I need 1,000 subscribers, so please help me get there. You can also follow my journey via Twitter and Instagram. Thank you so much!