Our next speaker is Amit Dutta, who has given us the title Transparency in Online Targeting.

A couple of years back, a man furiously walked into a Target store near Minneapolis and demanded to see the manager. He said, my daughter is still in high school and you're sending her coupons for baby clothes and cribs. Are you trying to encourage her to get pregnant? The manager profusely apologized, and then called a few days later to apologize again. On the phone, though, the father was somewhat abashed. I owe you an apology, he said. My daughter is due in August.

We are now living in a world where the Targets and Facebooks know more about you than your parents do. Such accurate predictions are possible by carefully processing tons and tons of data and creating accurate mathematical models of you. These models are then used to make predictions about your likes, your dislikes and, more importantly, what products you might want or might be persuaded to want. The ads that you see are very different from the ads that your neighbors see. This is because these ads depend on the websites that you visit, the videos that you've seen, the ads that you've clicked in the past and God knows what else. The algorithms that make these predictions are often so complicated that even the programmers who write them can't always explain why a particular prediction was made. Moreover, these companies don't want to open up their source code to public auditors, in order to maintain their competitive advantage over business rivals.

My thesis work aims at increasing transparency into these predictions without requiring access to the source code or the algorithms in the first place. I have a mechanism which can detect the flow of information from inputs to outputs for any given recommender system. By running simple experiments from the consumer side, I can tell which inputs influenced a particular recommendation.
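The consumer-side experiment described above can be sketched as a randomized comparison: one group of browsing profiles visits sites on a topic, a control group does not, and a statistical test checks whether the ads served afterwards differ between the groups. The sketch below is a minimal illustration of that idea, not the speaker's actual tooling; the ad counts and the significance threshold are fabricated for demonstration.

```python
import random

def permutation_test(treatment, control, trials=10000, seed=0):
    """Estimate the p-value that the treatment group's mean ad count
    exceeds the control group's mean by chance alone."""
    rng = random.Random(seed)
    observed = sum(treatment) / len(treatment) - sum(control) / len(control)
    pooled = list(treatment) + list(control)
    n = len(treatment)
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        diff = sum(pooled[:n]) / n - sum(pooled[n:]) / (len(pooled) - n)
        if diff >= observed:
            hits += 1
    return hits / trials

# Fabricated ad counts per simulated browsing profile: the "treatment"
# profiles visited websites on the topic of interest; controls did not.
treatment_ads = [9, 11, 10, 12, 8, 10]   # topical ads seen afterwards
control_ads   = [3, 4, 2, 5, 3, 4]

p = permutation_test(treatment_ads, control_ads)
# A small p-value suggests the visits influenced the recommendations,
# i.e. information flowed from those inputs to the ad output.
print(f"p-value: {p:.4f}")
```

A permutation test is a natural fit here because it makes no assumptions about how the opaque system generates ads; it only asks whether the observed group difference could plausibly arise from random assignment alone.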
I have already implemented this technique to study Google's advertising system and the recommendations that it makes, and I found some pretty interesting results. For example, I found that visiting car-related websites increases the number of car-related ads that Google serves you. This sounds harmless, maybe even useful to some people. However, what I found interesting and also concerning is that the same behavior also happens with websites about sensitive topics, such as those related to disabilities and substance abuse. I have made my scripts and code base open source so that any privacy-conscious user can run their own experiments and detect information flow in instances which interest them. Thus, in conclusion, I have devised a mechanism which increases transparency in complex data processing and recommender systems by running simple experiments from the consumer side. Thank you.