For this week's societal impact topic, we will look at hate speech detection, or more generally at detecting hate speech, not-safe-for-work content, or any sort of post that companies, individuals, or governments may want to block. Most large companies now have deep learning systems that detect whether images or posts are content they would not like to have shared. They're used for advertising placement: do I want my ad to show up on this web page, or would it be potentially embarrassing to have my ad there? They're used for messages within companies: is this a message that might be offensive to people within the company? And they're used by companies like Twitter and Facebook: is this a post that violates our terms of service, either by showing nudity or by threatening someone's life, or whatever their criteria are?

The techniques behind pretty much all of these are convolutional neural nets for images and advanced NLP methods, which we'll cover next week, for text (a minimal classifier sketch follows at the end of this section). The hard part is not just the technological piece but defining what should and should not be censored. Context matters a lot. If some politician says "lock her up," that has different implications in the U.S., where it's unlikely to happen, than in Myanmar, where it has happened. So interpreting what something means depends a lot on who is saying it and when. Even the question of what counts as acceptable nudity causes lots of problems for a company like Facebook, which wants to protect people but doesn't want to block, for example, sites trying to explain the problems of breast cancer, which may involve showing nudity.

So this week there are a number of readings for discussion. Please read the first one and start discussing it in your pods; the rest will be left for the homework, giving you more of a chance to think in depth about what you would do, and what you will do, if and when you end up having to decide what is hate speech or speech that should be flagged or blocked.
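To make the classification step concrete, here is a minimal sketch of a post-flagging classifier. It uses a simple bag-of-words baseline rather than the convolutional or transformer models production systems use, and the toy posts, labels, and 0.5 review threshold are illustrative assumptions, not anything from a real moderation pipeline. The point the lecture makes still applies: the model only scores text, while deciding what counts as flaggable is the hard, context-dependent part.

```python
# A minimal sketch of a content-flagging classifier (toy data, baseline model).
# Real systems use far larger labeled datasets and deep models, plus human review.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: posts labeled 1 if they should be flagged, 0 otherwise.
posts = [
    "have a great day everyone",
    "congrats on the new job",
    "I will hurt you if you show up",
    "people like you should disappear",
]
labels = [0, 0, 1, 1]

# TF-IDF features plus logistic regression: a simple, interpretable baseline.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(posts, labels)

# Score a new post; anything above the (assumed) threshold goes to human review.
new_post = "you should disappear"
prob_flag = clf.predict_proba([new_post])[0, 1]
if prob_flag > 0.5:
    print(f"flag for review (score={prob_flag:.2f})")
else:
    print(f"allow (score={prob_flag:.2f})")
```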