Aipoly enables your phone to see much like a human does. It's used by people who are blind or visually impaired to interact with the objects around them every day, whether to explore and understand their surroundings or simply to identify whatever object is in front of them at any moment.

I got the idea when one of the first papers on neural image captioning came out. It could describe images with a full sentence; it was from one of the Google computer vision teams. I immediately thought of a childhood friend who became blind halfway through his life because of a hunting accident. I would walk with him and describe our surroundings, because without contextual knowledge, without a narrator, a blind person has no real memories of the places they go, the things around them, or the experiences they have. With an AI that can describe the world around them using computer vision, they can form better memories, a better visual understanding, and a much richer experience. At this conference, without visual cues, you would not have many memories of the setting, the colors, and everything going on around you at any given time.

A lot of research in AI is collaborative. Without the many researchers, at this conference and beyond, who have contributed to this field, none of this could have happened. None of the blind people who use AI could have experienced it, and we, as a small startup, could never have gotten to this point without that collaborative effort. One of the really remarkable things about AI is the collaboration among the researchers behind it and how openly everything is published.

We would like to be able to identify almost any object in the world and understand its local context. As the ImageNet results have shown, it has become very easy to classify an item, to say that there is a dog in the frame. But it is actually very hard to understand its context: whether the dog is sleeping, where it is sleeping, or whether there is a lamppost coming toward you as you walk. For a blind person, these are non-trivial problems; it's like putting self-driving-car software on a person. We would like to solve that, to enable anyone to navigate more safely and to help people in general be smarter about the things around them.

Our eyes are excellent perceptual tools, because for millions of years we had to climb trees and grab things or we would die. But what our eyes see is not always well processed by our brains. We are ignorant of much of what goes on around us: people's expressions, what they're wearing, what the objects, plants, and animals around us are. These are mysteries we unravel over time, through education, encyclopedias, and very expensive universities. AI can deliver this information right away: as soon as you see something, it can tell you a little more about it. Bit by bit, we hope that will make people smarter.

There is a risk that robotics, VR, and other technologies that replace some social aspects of life will bar us from participating in society. We should address that by making these technologies more open to the world. Take Pokemon Go, or rather Pokemon itself, as an example.
The company has always championed video games that encourage people to meet face to face when they play. In fact, the very early Pokemon games on the Game Boy only let you trade over a local link, because the company didn't want kids trading with each other from across the world; it wanted people to meet face to face, believing that communication should be more real. Today, face-to-face communication can be done fairly well with tools like FaceTime or Skype, but I think we should work more on putting the technology into the physical world, perhaps augmented reality rather than virtual reality for the more social use cases.

From the perspective of someone working directly in AI, trying to push past the state of the art in certain areas, I hope this summit helps people realize that the amount of work going into creating AI is far from negligible. Many of the smartest people are working on these problems, and they are here not to cause wealth inequality or unemployment, but to improve the lives of everyone. The most important takeaway, though, is that improving the lives of everyone does not necessarily mean making everyone wealthier. The biggest risk of AI is that it acts as a funnel toward the top, toward the creators of the technology: the income from work displaced by automation flows to whoever builds that automation. This must be addressed, and we need to find ways to redistribute this income, whether through universal basic income or other methods. It's not my role right now to recommend one, but the discussion on this specific topic must continue after the summit.