Our tenth and final presenter this afternoon is Amit Datta; his presentation is "Discovering Personal Data Use on the Web."

A few years back, a man furiously walked into a Target store and demanded to see the manager. "My daughter is still in high school, and you're sending her coupons for baby clothes and cribs? Are you trying to encourage her to get pregnant?" The manager apologized profusely, and then called a few days later to apologize again. On the phone, though, the father sounded a bit abashed. "I owe you an apology," he said. "My daughter is due in August."

We are now living in a world where the Targets and the Facebooks know more about us than our parents do. Target was simply using an algorithm to identify products that its customers might want to buy. Algorithms are increasingly being used to make decisions about users: in targeted advertising, loan approvals, and even law enforcement. However, these algorithms rarely come with explanations of why a particular decision was made. So, when asked, Target could not say definitively why it was sending those coupons to that particular girl.

Explanations are difficult to generate in our setting because the math behind these models is incredibly complex and often inaccessible. My thesis work aims to increase transparency into these predictions, and to do so in a black-box setting; that is, without requiring any kind of access to the source code or the models that these algorithms employ. I have developed methods that can find causal relationships between inputs and outputs for any black-box prediction algorithm. To do this, I run randomized controlled experiments, which are already an established gold standard in clinical trials. For example, when a scientist is looking to cure cancer, he first develops a drug and then randomly assigns the drug and a placebo treatment to two groups of identical cancer-afflicted bunnies.
And finally, he runs a statistical test to see if the drug had a significant effect on the survival rates.

We apply the same method to study the Google ad system, where we want to find out whether web browsing activities have any effect on the ads that get served later on. The only difference is that instead of bunnies, we now have browser instances; instead of drugs, we have website visits; and instead of survival rates, we are measuring the ads.

It turns out that visiting websites about substance abuse significantly increases the number of ads you get about rehab, which is very concerning. We also found evidence suggestive of discrimination, wherein simulated women received far fewer ads from a career coaching service promising high-paying, executive-level jobs than their male counterparts did. This is an example of an algorithm propagating the existing gender pay gap.

Thus, by running rigorously designed experiments, we can increase transparency into these predictions. And given a meaningful way to experiment with algorithms, these methods are general enough to be applied to any prediction system, including the ones Target was using. Thank you.

Thank you, Amit, and thank you to all of our speakers this afternoon. We'll now take a short break. The judges will follow me upstairs, and we will be back with our results in 10 to 15 minutes. In the meantime, please cast your ballots for the People's Choice award. Vote early, vote once. We'll gather your ballots and announce the winner of the People's Choice award as well.
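The randomized controlled experiment the talk describes, in which treated and control browser instances are compared with a statistical test on the ads they receive, can be sketched in a few lines of Python. This is a minimal illustration only, not the speaker's actual tooling: the ad counts are made-up numbers, and a two-sample permutation test is used as a stand-in for whatever significance test the real experiments employed.

```python
import random

def permutation_test(group_a, group_b, n_permutations=10_000, seed=0):
    """Two-sample permutation test on the difference in group means.

    Returns the estimated p-value: the fraction of random relabelings
    of the pooled observations whose absolute mean difference is at
    least as large as the observed one.
    """
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)  # random relabeling under the null hypothesis
        a, b = pooled[:n_a], pooled[n_a:]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
            extreme += 1
    return extreme / n_permutations

# Hypothetical rehab-ad counts per browser instance (illustrative only):
treatment = [9, 11, 10, 12, 8, 13, 10, 11]  # visited substance-abuse sites
control   = [3, 4, 2, 5, 3, 4, 2, 3]        # did not

p = permutation_test(treatment, control)
print(f"p-value: {p:.4f}")  # a small p-value suggests browsing affected ads
```

A permutation test is a natural fit here because it makes no distributional assumptions about ad counts; the random assignment of browser instances to the two groups is exactly what justifies shuffling the labels under the null hypothesis.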