When the subject gets into the MRI scanner, we collect brain scans while the subject is watching movies. We take a brain scan every two seconds, and we look at the whole brain to see how activity goes up and down at every location. After we acquire the brain scans from the human subject, we use deep learning models to reconstruct what the person sees. What is more interesting is that the model can also predict how the brain interprets what the person sees. For example, when there is a person in the movie, the model can predict that it is a person. What's new in this study is that, for the very first time, we showed that we can use a convolutional neural network to understand how the brain processes information while a person is watching a video. A major goal is to use machine learning algorithms to help understand the brain, and then to use what we understand about the brain to advance artificial intelligence.
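The decoding idea described above can be sketched in miniature. This is purely an illustrative assumption, not the study's actual pipeline: the study used a convolutional neural network on real fMRI data, whereas the toy below simulates noisy "brain scans" with NumPy and predicts the viewed category with a simple nearest-centroid rule. All names, shapes, and noise levels are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

TR = 2.0          # one whole-brain scan every two seconds
n_voxels = 500    # toy "whole brain" (real scans have ~100,000 voxels)
categories = ["person", "animal", "scene"]

# Assume each category of movie content evokes a distinct spatial
# pattern of activity across the brain (hypothetical ground truth).
patterns = {c: rng.normal(size=n_voxels) for c in categories}

def simulate_scan(category, noise=0.5):
    """One noisy fMRI volume acquired while viewing that category."""
    return patterns[category] + noise * rng.normal(size=n_voxels)

# "Training": average many scans per category into a template pattern.
centroids = {
    c: np.mean([simulate_scan(c) for _ in range(50)], axis=0)
    for c in categories
}

def decode(scan):
    """Predict what the subject is seeing: pick the template whose
    pattern correlates best with the new scan."""
    corr = {c: np.corrcoef(scan, m)[0, 1] for c, m in centroids.items()}
    return max(corr, key=corr.get)

print(decode(simulate_scan("person")))
```

The sketch captures only the interpretation step (scan in, category label out); reconstructing the visual scene itself, as in the study, requires a far richer generative model.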