If I were trying to explain what's going on in the field of AI at the moment, I would find myself reaching for 1960s rock lyrics: there's something happening here, and what it is isn't exactly clear. Here's a room full of very smart people who know that this is a field we have to be paying closer attention to, but who aren't quite sure what we need to be paying close attention to.

Part of this comes from the fact that we don't quite know what the job of AI is. If AI's job is to complement human intelligence, we have one set of questions. If AI's job is to replace human intelligence, we have an entirely different set of questions. We also have issues like news quality and quote-unquote fake news, where we can't decide whether this is an AI problem, a human problem, or something else.

So this is what happens in a really early field. You have people playing with hypotheticals. You have real news come out that destroys some of the hypotheticals and reinforces others. And the whole thing is changing in real time as we try to figure out just what it is that we should be worried about.

The main thing I'm worried about is that AI is very good at recognizing existing patterns of behavior and reinforcing them. Right now society isn't particularly fair, so AI is going to recognize current societal patterns and then reinforce them. The danger is that we then bake inequality and injustice into technology.