Great. Thank you very much. A big thanks to Rachel for organizing this. It's great to be back in Helsinki. There's a long story about how and why I'm presenting this particular paper; I'll spare you the details. This is very much work in progress, so pardon the rough edges. I'm working with Shehryar Banuri, who's an experimental economist at the University of East Anglia, so everything that's wrong with the experiment is his fault. We all know that gender inequality exists, and there seems to be a lot of evidence that it has persisted despite legislation such as discrimination laws. There's a huge literature on discrimination, and an equally huge literature suggesting there are substantial differences in how men and women operate in the labor market that could also contribute to this inequality. Where we would like to place ourselves is the emerging literature suggesting that discrimination has moved on from being explicit and overt to being subtle. When we started this work, we were not aware of the literature on microaggressions, and we are only beginning to find that out. So far, the only paper that comes close to what we're trying to do is Basford, Offermann, and Behrend in social psychology. They present vignettes of discriminatory behavior and try to figure out whether people actually believe the behavior is discrimination, and what they show is that it takes a certain degree of explicitness before people start calling it discrimination. Our starting point is the idea that valuing certain colleagues less, for example valuing input from colleagues of color less, or valuing women's advice or input in the labor market less, is a form of subtle discrimination that could potentially explain the persistence of wage differentials and the glass ceiling as well.
So how do you set this up in a lab experiment? We think of a work interaction as a difficult question that needs to be answered by, say, a team. Nobody really knows the answer to this difficult question, so people are advising each other, and the question is: whose advice do you take? Do you value a man's advice as much as you would value a woman's? We present this trivia task, and I'll explain it a bit more, to students at the university where I work, which is the Lahore University of Management Sciences. It's a very prestigious elite university in Pakistan. Most of our students are rich, a lot of them are bright, and some of them are both. Either way, they're going to end up in pretty influential places once they graduate. The medium of instruction is English. If you come into LUMS, you will not recognize it as a part of Pakistan; it's much more westernized in terms of dress. So in some sense, if there is discrimination and prejudice here, we expect this to be a floor rather than a ceiling. Right. So the trivia task is incentivized. We give a difficult question, and the closer you are to the answer, the more money you make: 1,000 tokens if your response is within 10% of the correct answer, 200 tokens if it is within 50%, and zero if you're outside the 50% range. The tokens are then converted into money. People could make anywhere between 250 rupees, which is about $2.50, and 2,500 rupees, about $25. The average earning is about $7.50, which is a nice little lunch at a nice place. That's the kind of money we are talking about. These are some of, well, all of the trivia questions, which are assigned randomly. My favorite one is at the top: what is the maximum heart rate per minute of a hummingbird? How many eggs does an average hen lay in a year? A lot of thought and fun went into selecting these questions.
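The payoff rule just described can be written as a minimal sketch (the function name and the use of relative error are my own assumptions; the token amounts and the 10% and 50% thresholds are from the talk):

```python
def payoff(response: float, correct: float) -> int:
    """Tokens earned for a trivia response:
    1,000 if within 10% of the correct answer,
    200 if within 50%, zero otherwise."""
    error = abs(response - correct) / abs(correct)
    if error <= 0.10:
        return 1000
    if error <= 0.50:
        return 200
    return 0

# E.g., guessing 1,100 for a hummingbird's active heart rate of ~1,200 bpm
# is within 10% of the correct answer, so it earns the full 1,000 tokens.
```

Whether the bands are inclusive or exclusive at the boundaries is not specified in the talk; the sketch treats them as inclusive.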
What we had in mind were questions that are bizarre in the sense that no reasonable person could actually know the answer. The heart rate per minute of a hummingbird is about 1,200 when active and 250 at rest. So these are difficult questions; well, some are easy, but most are difficult. They all have numerical answers, so we can calculate distances from the correct answer and so on. And you wouldn't really expect a man or a woman to be better at this. How many eggs does an average hen lay in a year? Unless you're from a farm, you wouldn't know, and that could go for either a man or a woman. So these are the questions. What we do is run a trivia task where we give 10 questions to the students, selected randomly from the pool of 40 questions that you saw earlier. So this particular question is: what number is the answer to the ultimate question of life, the universe, and everything in The Hitchhiker's Guide to the Galaxy? You answer 10 questions in a row, and you're reminded that this is how you will make money on this quiz. In the advisor task, which comes next, we give them the same questions again. But this time, and there are lots of instructions associated with this task, we say: a certain number of students, actually 25 men and 25 women, did this task before you, and these are the answers they chose. Would you like to revise your answer in light of this advice? Again, the closer you are to the correct answer, the more money you make. The outcome variable we are interested in is whether subjects change their answer in response to this advice: whose advice do they value? And the treatment, of course, varies the information about the advisor; I'll talk about that in a second.
But regardless of the treatment, every subject is matched with five instances of male advice and five instances of female advice. So everyone, whether or not they know who their advisor is, is matched with both men and women. This is what the advisor task looks like. It's the same question: what number is the answer to the ultimate question of life? Here is your answer, what you answered. Here is your advisor: her name is Katrina, her GPA is 3.2, her answer is 10. Would you like to revise your answer? What is built in here is the gender of the advisor as well as the GPA, so this is the gender-and-GPA treatment. The gender-only treatment would show just Katrina; the GPA-only treatment would show just 3.2; and so on. When we recruited the 25 male and 25 female advisors, they were asked to choose a name of their liking, and these are the names they chose. The advice really is from the person who chose Katrina. Except for one man who called himself Shanaz, everybody else chose a name matching their gender. There were some men who chose names like Hercule Poirot and Spider-Man, so we did a couple of robustness checks to see if that was having an effect. Initially, our original design included a description in the control treatment. Right now, in the control treatment, we show nothing here; all we say is, here's an answer, would you like to revise it? Originally, we had asked advisors to describe themselves in two sentences. So you could say, I love baking pies on Sundays, and that would be a description. The idea was that it would then be easier for us to say: Katrina, who loves baking pies on Sundays, gave you this advice. The problem was, subjects started predicting gender from the description. So we lost 200 observations.
And we had to remove the description. We're still trying to figure out if we can incorporate that data, because it's just huge and beautiful data. Anyway, if you include the treatments with descriptions, we have 393 subjects and 10 questions each, so 3,930 observations. But what I'm going to present today covers only the treatments without descriptions, which is 216 subjects. We were then interested in figuring out whether prejudice, the actual act of prejudice, is correlated with some measure of sexism. I was very interested in the ambivalent sexism inventory, which is highly cited in the social psychology literature. It distinguishes between hostile sexism, which is agreeing with statements such as "women seek to gain power by gaining control over men," and benevolent sexism, such as "women have a quality of purity that few men possess." So beware the knight in shining armor. But we couldn't actually draw that distinction in our data, so I'll go through this quickly. This is how our scores look: for men, 2.7 out of a maximum of five on hostile sexism and 2.9 on benevolent sexism; much lower for women. This is how we compare across the world: we actually do better than Italy on hostile sexism, which is nice, but Italy does much better on benevolent sexism. One final thing: the sexism scores are continuous, but just for convenience I've made dummies out of them, so if you have a higher-than-median score, you are classified as sexist, and not otherwise. So this is what happens in the control treatment. When we give advice from males, and people don't actually know it's coming from males, they change their answer about 59% of the time. When they're given advice from females, but don't know it's coming from a female, they change their answer 65% of the time. Now, this is just raw data.
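The median-split dummy just described can be sketched as follows (how to code a score exactly at the median is my assumption, since the talk only says "higher than a median score"):

```python
import statistics

def sexism_dummy(scores: list[float]) -> list[int]:
    """Code each subject 1 ('sexist') if their continuous ambivalent
    sexism inventory score is strictly above the sample median, else 0."""
    med = statistics.median(scores)
    return [1 if s > med else 0 for s in scores]

# E.g., scores [1, 2, 3, 4, 5] have median 3, giving dummies [0, 0, 0, 1, 1].
```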
So this is essentially saying that the answers women give are somehow more amenable to change than those from men. We don't quite know why this happens. As soon as we give them gender information, the 59% drops to 56%. So there's a drop for both male advice and female advice, but the drop for female advisors is much larger, a 15 percentage point difference. If we take a difference-in-differences to measure the prejudice, it's about a 12 percentage point change against women as soon as students find out the gender of the advisor, and this is significant at 5%. All of these graphs, by the way, have a difference-in-differences regression behind them. The graphs themselves are just simple OLS without controls, so we can map out these bar charts, but there is another set of regressions with clustered standard errors, question-specific fixed effects, and so on. All right. In terms of men and women, the difference-in-differences for men is much higher, at about 16 percentage points; for women it's lower, at 5 percentage points. That for men is significant at 10%; that for women is insignificant at 10%. When we further split by sexist versus non-sexist, the difference-in-differences for sexists is about 14 percentage points, and this is very significant. The difference-in-differences for non-sexists is actually much larger, because of this particular jump: they listen a lot more to women in the control treatment. But this turns out to be statistically insignificant. With women, it's all insignificant. So the summary is: knowing the gender of the advisor results in prejudice against women of 12 percentage points. This is 16 percentage points for men and 5 for women, though the latter is insignificant. And we get a result that's consistent with sexism as well.
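The headline 12 percentage point figure follows arithmetically from the four cell rates quoted above. This is only that raw calculation, a sketch, not the regression with clustered standard errors and question fixed effects mentioned in the talk:

```python
def diff_in_diff(ctrl_male, ctrl_female, treat_male, treat_female):
    """Difference-in-differences in revision rates (percentage points):
    how much more revealing the advisor's gender depresses responsiveness
    to female advice than to male advice. Positive = against women."""
    return (treat_male - ctrl_male) - (treat_female - ctrl_female)

# Revision rates from the talk: control 59% (male advice) / 65% (female),
# gender treatment 56% (male) / 50% (female, i.e., a 15-point drop).
diff_in_diff(59, 65, 56, 50)  # -> 12
```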
Now, the final thing: the gender-and-GPA treatment, compared to the gender-only treatment, is essentially saying, we've given you some indicator of the merit of your advisor; would you like to take that into account? You can see that, qualitatively, things improve. The initial difference is 6 percentage points; with gender and GPA, it's about 2 percentage points. So the situation is improving, but this is statistically insignificant, and no matter what we do, it remains insignificant. This is the treatment for just men, and it's interesting in the sense that in the gender-and-GPA treatment, men are treating women and men equally. But statistically, it's just meaningless. And I'll stop here. Both men and women undervalue women's voices, though this carries a bit of a question mark, because the significance is only at about the 13% level, so we're stretching a little. Prejudice as measured by the ambivalent sexism inventory appears to be an important correlate of discrimination. Providing information on merit does not increase valuation; I would really like to draw out this result, but I'm not sure how much I'd like to comment on it. Our results appear more in line with a prejudice-based theory of discrimination than one based on informational asymmetry. Thank you very much.