I'm Soroush Vosoughi. I'm a postdoc at the MIT Media Lab. I also did my PhD at MIT, and I've been a fellow here at Berkman since September.

The question I'm really interested in, and what I'm working on both as a fellow at Berkman and as a postdoc at the MIT Media Lab, is how to design interventions that dampen the effects of misinformation on social media. My PhD focused on the automatic detection of rumors on social media, and right now I'm interested in intervention strategies. One idea I have is an automated tool, such as a bot on Twitter or Facebook, that would detect misinformation using the algorithm I developed for my thesis and then contact people who are on the path of that misinformation to warn them that they are about to be exposed to it, in effect vaccinating them before they encounter the virus of misinformation.

I think it has become pretty obvious that rumors and misinformation in many domains are enormously damaging to society, especially rumors in the political domain. They undermine the core democratic values of our society: if you don't have a shared truth with the other people voting in the same election as you, then you're not judging the candidates based on the same facts.

I think technology is almost always neutral, so it can be used for good or for evil. What excites me and what makes me fearful is actually the same technology, specifically recent advances in deep neural networks. Many problems in classical AI have been solved with these new methods in the last decade, including problems we thought we might not solve for a century. That's really exciting. But the same algorithms and systems that we use to solve these problems are big black boxes; we never know exactly what goes on inside them.
So if we give them too much power to govern our society, they might make decisions that we can never understand or interpret. And that scares me.