How can we make computers see the way humans do? Vision is a critical sense for humans: by some estimates, we perceive up to 80% of all of our sensory information with our eyes. It stands to reason, then, that if we want our computers and robots to understand the environments they operate in, vision is going to be a really important part of the puzzle.

Computer vision has been a very active research area for the last couple of decades. In some tasks, such as reading printed text, computers can achieve accuracy rates of 99% or higher, in some cases even outperforming humans. Accuracy here means interpreting images correctly, whether that's counting the cars on a road, identifying a flower, or tracking people in a lunchroom. Most of the research is focused on pushing those accuracy rates up one or two percent at a time.

What's the catch? Well, most of these algorithms are really, really slow. For example, there is a recent algorithm for surveillance applications, such as monitoring train stations or airports, that can detect individual people in crowded scenes with up to 90% accuracy, but it takes six minutes to process one second of footage. What's the point of detecting a potential criminal or intruder hours after they were actually there? Just like in this competition, time is of the essence.

In my research, we're using a computer architecture technique called hardware-software co-design to help alleviate this problem. Most computer vision algorithms process an image one pixel at a time, moving through it sequentially. But we want computers to see the way humans do, and does the human brain work this way? No. With billions of neurons, we process images really, really fast, handling different parts of an image independently and at the same time. We can emulate this by building thousands of specialized electrical circuits, each instructed to process a small part of the image in parallel. By combining these circuits with a typical computer processor, we can strike a good balance between high execution speed and high flexibility. In an initial study, we took a gender recognition algorithm and sped it up threefold without losing any accuracy.

Most of the research in this area has focused on only one interpretation of our key question: how can we make computers see correctly, the way humans do? But there is another dimension that needs to be taken into account: how can we make computers see as fast as humans do? And that's what we have to address in order to bring these computer vision algorithms out of the laboratory and into the real world. Thank you very much.
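To make the tile-parallel idea from the talk concrete, here is a minimal software sketch of the same pattern: split an image into tiles, process the tiles independently at the same time, and reassemble the result. The 64-pixel tile size, the thresholding filter, and the use of a Python process pool are illustrative assumptions, not the speaker's actual design; the research described uses dedicated hardware circuits alongside a conventional processor, not software workers.

```python
# A minimal sketch of tile-parallel image processing in software.
# It is NOT the co-designed hardware from the talk: the process pool
# merely stands in for the "thousands of specialized circuits" that
# each handle one small part of the image in parallel.
from concurrent.futures import ProcessPoolExecutor

import numpy as np

TILE = 64  # assumed tile edge length in pixels


def process_tile(tile: np.ndarray) -> np.ndarray:
    """Stand-in per-tile computation: a simple brightness threshold.

    In the co-designed system, each specialized circuit would run its
    own small computation like this on one tile.
    """
    return (tile > 128).astype(np.uint8) * 255


def split_into_tiles(image: np.ndarray, size: int):
    """Yield (row, col, tile) triples covering the whole image."""
    h, w = image.shape
    for r in range(0, h, size):
        for c in range(0, w, size):
            yield r, c, image[r : r + size, c : c + size]


def process_image_parallel(image: np.ndarray) -> np.ndarray:
    """Process every tile independently and at the same time, then
    reassemble, instead of walking the image pixel by pixel."""
    out = np.empty_like(image)
    coords, tiles = [], []
    for r, c, tile in split_into_tiles(image, TILE):
        coords.append((r, c))
        tiles.append(tile)
    with ProcessPoolExecutor() as pool:  # one worker per CPU core
        for (r, c), result in zip(coords, pool.map(process_tile, tiles)):
            out[r : r + result.shape[0], c : c + result.shape[1]] = result
    return out


if __name__ == "__main__":
    # Fake 512x512 grayscale "frame" standing in for camera footage.
    frame = np.random.randint(0, 256, (512, 512), dtype=np.uint8)
    mask = process_image_parallel(frame)
    print(mask.shape, mask.dtype)
```

Because each tile is computed without reference to its neighbors, the work scales with the number of workers, which is the same property that lets dedicated per-tile circuits beat a single sequential processor.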