Our sixth presenter is Bhuvan Dhingra, whose talk is titled Making Machine Reading Comprehension More Human-like.

Quick experiment: how many of you can read the text on the slide? I'm guessing most of you can, because a 1976 study at Cambridge University found that no matter how you scramble the letters internal to a word, as long as the first and the last letter are kept fixed, people can still read it.

Now, I work on artificial intelligence, specifically on systems that try to read and understand natural language text. The question I had was: how do our best systems today perform on texts like these? Well, the answer is that they're terrible. In fact, even a single misplaced letter can be enough to ruin the performance of the best reading systems we have today.

Now, scrambled words are not the only place where AI systems are vulnerable. A quick Google search will show you a long list of failures made by AI technologies in the last year, from misclassifying ethnic minorities to hitting pedestrians on the road. In none of those cases did the developers of those technologies expect such failures. So why did they happen?

Well, most AI today is actually based on a combination of artificial neural networks and machine learning algorithms. A neural network is like a really long list of rules, and machine learning is a way of analyzing lots of data to select those rules which solve a particular task. So, for example, the task might be predicting whether an email is spam or not, and the rule that gets selected might be whether the email contains words such as "Nigerian prince."

Now, neural networks show really strong performance at many different tasks. The problem is that we can't look inside them to see exactly which rules they use to solve those tasks. Unexpected failures happen when the rules that get selected work on some particular data set but don't generalize beyond it.

Now, in my research, I borrow findings from linguistic and psycholinguistic studies, such as this one, to cut down on the long list of rules that a neural network might use. The basic idea is that once we understand how humans do a certain task, in my case reading text, we can eliminate all those rules which are not actually plausible, and then use data and machine learning to select only from the remaining ones. This makes these networks more transparent, more efficient, and more reliable, and it also requires much less data.

Now, both you and I have long lists of important documents that we want to read but never really get around to reading, from research papers to those long agreements where we click "I accept" without ever really understanding what we are accepting. Someday a machine might be able to help us read all of this, and my research is a step towards bringing that day closer. Thank you.
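A minimal Python sketch of the perturbation the talk describes: shuffle each word's interior letters while keeping the first and last letters fixed. The function names and the sample sentence are illustrative assumptions, not from the talk.

```python
import random
import re

def scramble_word(word: str) -> str:
    """Shuffle a word's interior letters, keeping the first and last fixed."""
    if len(word) <= 3:
        return word  # too short to have a shuffleable interior
    interior = list(word[1:-1])
    random.shuffle(interior)
    return word[0] + "".join(interior) + word[-1]

def scramble_text(text: str) -> str:
    """Apply scramble_word to every alphabetic token in the text."""
    return re.sub(r"[A-Za-z]+", lambda m: scramble_word(m.group()), text)

print(scramble_text("Reading comprehension systems are surprisingly fragile."))
# e.g. "Rdaineg cpmoheiresnon stsyems are snrpriusigly fargile."
```

Feeding text scrambled this way, or with even a single swapped letter, to a reading-comprehension model is one way the fragility claim in the talk can be tested.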
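And a hedged sketch of the "machine learning as rule selection" idea from the talk: candidate keyword rules are scored on a toy labeled data set, and the best-performing rule is kept. The emails, candidate rules, and scoring here are invented for illustration; the speaker's approach amounts to first shrinking such a candidate list to the humanly plausible rules before letting the data select among them.

```python
# Toy illustration (assumed example, not the speaker's system): score
# candidate keyword rules on labeled emails and keep whichever rule
# best separates spam (label 1) from non-spam (label 0).
emails = [
    ("greetings from a nigerian prince with an offer", 1),
    ("claim your free prize money now", 1),
    ("meeting agenda for tomorrow", 0),
    ("draft of the research paper attached", 0),
]
candidate_rules = ["nigerian prince", "free prize", "meeting", "paper"]

def accuracy(rule: str) -> float:
    """Fraction of emails that the rule 'contains this phrase => spam' labels correctly."""
    correct = sum((rule in text) == bool(label) for text, label in emails)
    return correct / len(emails)

best = max(candidate_rules, key=accuracy)
print(f"selected rule: flag emails containing {best!r} "
      f"(accuracy {accuracy(best):.0%} on the toy data)")
```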