We're going to talk mainly about the ethics that are relevant to payments: whether our lending processes or our fraud detection processes are going to be unfair to some groups in society. This is a pretty hot topic these days, as people are starting to realize that algorithms don't always do what you want them to do, so we'll discuss a little bit about how that happens.

About a week before this talk I discovered the simplest algorithm that can do something bad. It's not actually in payments; this algorithm is used to prevent shoplifting in a supermarket. You go to your point-of-sale system and calculate the shrinkage rate: how many of each item you purchased to have in the store, minus how many you sold, minus how many are still on the shelf. If there's a difference, it means someone stole some of them. So you make a spreadsheet. You put in the SKU, which is a unique number that represents a particular item, the shrinkage rate, and the price of the item. And then here's a very simple AI: you multiply the shrinkage by the price. That new column represents how much money you're losing to theft on that particular item. You sort the spreadsheet by that amount and look at the top items. The ones I've marked in red have the highest rate of loss, and those are the ones you put anti-theft devices on. These are those little boxes you see in Indian supermarkets, usually on chocolate; if you try to take the item out of the store without taking it out of the box, it sets off an alarm, and you need the shopkeeper to take the box off.
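To make that concrete, here is a rough sketch of what this spreadsheet calculation looks like in code. The item numbers, prices, and shrinkage rates below are invented for illustration; they are not from the talk's slides or any real point-of-sale system.

```python
import pandas as pd

# Hypothetical point-of-sale data: one row per SKU.
df = pd.DataFrame({
    "sku":            ["A101", "B202", "C303", "D404"],
    "shrinkage_rate": [0.02, 0.10, 0.01, 0.05],      # fraction of stock that goes missing
    "price":          [450.0, 120.0, 900.0, 300.0],  # rupees per item
})

# The "AI": estimated loss to theft per item is shrinkage times price.
df["loss"] = df["shrinkage_rate"] * df["price"]

# Sort by loss and flag the top items for anti-theft boxes.
df = df.sort_values("loss", ascending=False)
df["anti_theft_box"] = False
df.loc[df.index[:2], "anti_theft_box"] = True  # tag, say, the top two SKUs

print(df)
```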
So this is the simplest AI we can possibly understand, and everyone here can see exactly what it's doing. Can this AI behave in a racist or a sexist manner that will make people unhappy? The answer is yes. Here's a real example that happened in the US, and the retailer, Walmart, is being sued over it. If you're a white person, there's no anti-theft device on your hair dye. If you're black, your hair dye is in an anti-theft device. A lot of people find this fairly offensive, and Walmart certainly has egg on its face from this example. And it all happened from literally taking a spreadsheet and sorting by the highest rates of theft.

There are a lot of reasons to think this is bad. First of all, it offends a lot of people. Most of your black customers have no plans to steal whatsoever, but they still have to deal with the hassle of getting the cashier to take the anti-theft device off, which slows them down at checkout. It also perpetuates racist stereotypes, and in part it does so because it has an element of truth: according to the point-of-sale system, this really was one of the most stolen items.

On the flip side, let's think about why we might want to do this anyway. When items are stolen, it costs the store money, and they have to make that money up somehow, either by raising prices on the item that's being stolen or by raising prices on everything. The cost of theft has to be borne by customers one way or another. Another thing a store might do is stop carrying items that are stolen a lot, and this can also be very bad. My ex-girlfriend was born in Mozambique, but she lives in Pune, and every time I come from the US she asks me, "Chris, can you bring me some shampoo for black people?", because in Pune she can't buy it, and that's very annoying to her. If the store stops carrying the product you want because it gets stolen a lot, that's also bad. Similarly, the anti-theft devices are limited. Another option is to put them on everything, but now you're inconveniencing 100% of people instead of a much smaller fraction.

This one example illustrates the fundamental takeaway I want everyone to get out of this talk: there are a lot of relevant ethical principles here, and you cannot actually satisfy all of them. In most cases there's no way to simultaneously be utilitarian, avoid stereotypes, treat every group fairly, and also treat every individual fairly.

To begin with, as I said, this talk is going to be both math and computer science as well as a bit of philosophy. We're going to discuss the four major ethical theories that I see as playing a role here. One thing I want to emphasize is that most papers you read about AI ethics are written by someone in San Francisco, and I interpret San Francisco broadly: it might mean Seattle, but it basically means tech companies on the American West Coast (Microsoft, Google, Facebook) plus a few universities, mostly Stanford. The ethical theories they work with are very much informed by that context and are quite likely not super relevant to India. So what I really want to encourage everyone to do, if you're interested in this topic, is both to understand where those papers are coming from and to ask whether the algorithms and processes they've come up with really apply to India. If India has a different ethical system than San Francisco does, then we would need different algorithms as well.

The first ethical principle is individual fairness. What this means is that there are certain traits we consider protected. Gender is typically one; in India, caste would probably be one; in the United States, race would be one. There's a variety of traits that you feel it's unfair to condition a decision on. Say we decide that state of origin in India is one of these traits. I should never say, "You're from UP, so I will not lend to you, but if you were from Maharashtra, I would." Individual fairness says that should not happen. You have to pre-specify these traits, but whatever they are, they're the traits your decision is not allowed to depend on.

Another, more San Francisco, idea is group fairness. Remember, under individual fairness we said I should not make a decision differently depending on, let's say, your caste. Under group fairness, you instead look at each caste as a group and say that some statistical property should be the same across the groups. Perhaps my loan approval rate should be the same for upper castes and lower castes, or perhaps the false positive rate or the false negative rate should be the same. When one of these statistical properties is not equal, it's called an allocative harm. It might mean you're making loans predominantly to people from a higher caste rather than to people from a lower caste.
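To make "some statistical property should be the same across groups" concrete, here is a small sketch that computes two such properties, the approval rate and the false positive rate, per group. The data is made up purely for illustration.

```python
import numpy as np

# Hypothetical loan decisions: group label, model decision (1 = approved),
# and the eventual outcome (1 = repaid, or would have repaid if approved).
group    = np.array(["A", "A", "A", "B", "B", "B", "B", "A"])
approved = np.array([1,   1,   0,   1,   0,   0,   1,   1])
repaid   = np.array([1,   0,   1,   1,   1,   0,   1,   1])

for g in ["A", "B"]:
    mask = group == g
    approval_rate = approved[mask].mean()
    # False positive rate: of the people who would NOT have repaid,
    # how many did we approve anyway?
    bad = mask & (repaid == 0)
    fpr = approved[bad].mean() if bad.any() else float("nan")
    print(g, "approval rate:", round(approval_rate, 2), "FPR:", round(fpr, 2))

# Group fairness, in one common formulation, asks for these numbers to match
# across groups; when they do not, the gap is called an allocative harm.
```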
Now, one thing that becomes really important if you're considering group fairness is: what is the definition of a group? That's the question on the board. I've not seen a clear definition. We all have a general idea, but there's no clear definition, and I've listed a bunch of edge cases: people you could kind of consider members of the group, and who might not be. Depending on how you choose the definition, you're either including or excluding some people from the class, and that can change the statistics, which can then fundamentally change whether something is fair to the group or not.

The third San Francisco virtue is not noticing things that are, and I'll use this term because it's popular in the US, "problematic". Again, I'm really focusing on the US because that's where most of the work in this field is done. Certain things, if you're a good person, you will just not notice. Can everyone read this in the back? This is a screenshot of Google's autocomplete. I typed in "why are Punjabis so", and it gave me quite a few things that people are frequently searching for. Many of the things it lists for Punjabis are arguably true, and having spent a fair amount of time in Delhi, they don't seem crazily wrong. But this is American Google: if you type "why are blacks so", "why are Hispanics so", "why are whites so", there are no autocomplete suggestions at all. There are a lot of search results you just won't get. In San Francisco ethics, there are things you could observe and might want to search for, but Google is not going to help you search for them. You're just not supposed to notice them.

Here's a quote from a senior VP at Google. He says that as engineers we're trained to pay attention to details and think logically, and that on topics of fairness this is bad, because questioning exact details might lead you to question the overall narrative. It's a long quote, but he's basically saying that thinking logically and questioning things on topics of fairness and justice is bad, because it might lead your thinking somewhere bad.

And here's a really concrete example from the field. There's a paper that studies word embeddings. It observes that certain words, like "computer programmer", have a gender associated with them, purely in terms of vectors in a word embedding. And it discovers an almost perfect correlation between the gender content of a word and the proportion of people of that gender who work in that profession. So an AI trained simply on reading text can discover that lots of computer programmers are male and lots of nurses are female. In the paper, this is considered a problem that needs to be solved: the AI has noticed something it should not have noticed. The ethical principle is, don't notice these things.

The final ethical principle I'll discuss in this talk is utilitarianism, and what this is about is simply that your product is useful. Take lending, which is mostly what I'm going to discuss. If I lend money to someone who repays it, I get the money back and I lend it out again; the person who repaid it spent it on something useful, so it was beneficial to him; and the whole process is useful for society. Essentially, we're allocating capital to productive uses.
And if I give it to someone I've named Freddie the fraudster, who spends it all on ganja and then doesn't repay his loan, that's not so good for society. The assumption here is that whatever we're doing, stopping fraud or lending money to people who will repay it, is productive. The fundamental assumption is that your product, the thing you're choosing to let people use or rejecting them from, is valuable. If your product is not valuable, get a new job. But if it is valuable, then clearly it's good to allow people in and it's bad to reject them. We're also assuming that your product should be doing its own job: we're assuming capitalism works, in the sense that lending should be about who's going to repay. If you want to give money to the poor, that should be done separately, by the government or by an NGO, rather than secretly funneled through a lender. It should go to people who need it, via the government, rather than to people who are good at fraud, via a lender. If you believe that, then utilitarianism makes sense.

The key questions you have to consider when weighing these ethical issues are: how much utility are you going to sacrifice for one of the other virtues? How much utility will you give up for fairness? How many extra defaults or how much extra fraud will you accept in order to avoid using a protected trait? How much individual fairness will you sacrifice in order to get more group fairness? These ethical principles will come into conflict, and I'll illustrate how shortly. The key point is that you have to answer these tough philosophical questions before you can even start building an algorithm that's fair, because you have to define what "fair" is, and you have to define how much of one thing you'll give up for the other.

Now let's talk about what AI and machine learning systems actually do. There's no fancy AI here, no neural networks; I'm sticking to the simplest possible algorithms so that we can all understand them. The first algorithm was sorting a spreadsheet. The next one I'm going to discuss is linear regression. The approach is to build simulated worlds: I know exactly what the world looks like, because I built it in some Python code, and then I run my algorithm on that fake world and see what it outputs. That helps us understand whether, if the world looks a certain way, the algorithm will behave in a manner we find acceptable or unacceptable.

So, the simplest model: we want to predict something. We have an input, which is a set of things we know before we issue a loan, run a fraud check, and so on. In this example, the inputs might be income; whether the person is in North India, which would be one if they are and zero if they're not; whether they're on mobile or desktop, again one or zero; and how much money they spent in the previous month. The thing I might want to predict is how much they're going to spend in the current month. Given this information, I want to predict that. Linear regression basically says: I know that y is going to be the dot product of some vector alpha, which I don't know, with x, plus a constant term, plus an error term, which I'm going to assume doesn't matter for my purposes. Concretely, for those of you who are unfamiliar with dot products, alpha is a vector of numbers.
There's an alpha one, an alpha two, an alpha three, and so on. The prediction is the first alpha times income, plus the second alpha times whether they're in North India, plus the third alpha times mobile-or-desktop, and so on: you do that multiplication, add the results together, and that's your prediction of y. It's the simplest predictor there is, but it's very understandable, which is why I'm going with it in this talk. And actually, in production you should always start with it, because it very often works a lot better than you would expect.

Let's look at how this works. I've got a simple model where I'm assuming the true alpha is one, two, and three. I generate my data by taking random inputs and setting the output equal to one times the first feature, plus two times the second, plus three times the third, plus some error. So the data doesn't fit the line perfectly, but it comes close. Then lstsq, a Python function whose name stands for "least squares", is given only the inputs and the outputs; it no longer knows alpha, and its job is to estimate it. When you run this, it spits out something pretty close: 0.98 instead of one, 2.003 instead of two. It's not perfect, but it's pretty good. And the reason it works so well is, of course, that the data really is linear; every example in this talk is going to be linear regression, because that's easy to explain.

Now, the first question to ask ourselves is: is linear regression going to become biased or unfair, in the sense that it systematically makes wrong predictions based on information it should not be using? Concretely, we have a mental model of a racist human. This guy just doesn't like Biharis, for whatever reason. When he evaluates a loan application and someone from Karnataka comes along, he says, okay, I'll look at your income, I'll look at your financials, I'll look at your debt-to-income ratio, and I'll make a lending decision. And if a Bihari walks in, he says, ah, go away, I don't like you. This is really not a nice person, but it's our mental model of how a biased human behaves. The question is: what will an algorithm do?

So here's what I'm going to do. I'll make the third feature be whether a person is Bihari, or choose your favorite protected class, something we're not supposed to look at, and I'll make 25% of the people Bihari. And I'm going to make the true model not depend on it at all: I'm assuming Biharis pay their loans back at the same rate as everyone else, and that repayment depends only on the other two features, which are their financials. You can see that when I generate the output in the simulation, I'm ignoring whether a person is Bihari. Then, when I run the regression, the model rediscovers exactly what we put in: it doesn't matter if you're from Bihar; all that matters is your income and how much your house is worth. So in this case we've discovered that simple linear regression will not behave like that racist bank manager we imagined.
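The exact code from the slides isn't reproduced here, but the simulation just described looks roughly like this; the variable names, sample size, and noise level are mine.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two "financial" features (say income and house value, standardised)...
income = rng.normal(0, 1, n)
house  = rng.normal(0, 1, n)
# ...and a protected-class flag for roughly 25% of people.
bihari = (rng.random(n) < 0.25).astype(float)

# True world: repayment depends only on the financials, not on the flag.
y = 1.0 * income + 2.0 * house + rng.normal(0, 0.5, n)

# Fit linear regression by least squares, *including* the flag as a feature.
X = np.column_stack([income, house, bihari, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)  # roughly [1.0, 2.0, ~0.0, ~0.0]: the flag gets almost no weight
```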
Essentially, what the regression has discovered is that everybody lives along the same line. There are blue dots, which represent the majority group, and red dots, which represent the minority group, and I guess you can't see them too well, but they're all clustered along one line. That line is our predictor: the higher the input, the higher the output.

According to our ethical principles, this algorithm is completely ethical. Reds and blues will be equally represented in the positive set, that is, among the people issued a loan. Any individual red or blue person will be treated the same, because, if we go back a couple of slides, the coefficient on the protected class is so close to zero that it won't really change your decision. It's utilitarian, because we're accurately predicting reality. And it's not noticing anything it shouldn't notice, because in this case there is nothing to notice: "Bihari" is just an arbitrary label with no impact on the data.

Now, a lot of people say things about what AI will do that are rather anthropomorphic. Here's one that was in the New York Times a few months ago, discussing the use of AI to predict crime: the police have discriminated in the past, and predictive technology just reinforces and perpetuates this problem. What they're saying is that if racism happened in the past, the AI is going to keep doing whatever happened in the past. So let's see if that's true. I'm going to simulate a world in which the past data looks bad, and then see what the predictor does.

Again, 25% of people are in the protected class; choose a different group this time, say Gujaratis. For people who are Gujarati, I'm going to make their income lower and their house worth less money: the minus two in the code is a lower mean than the zero everybody else gets. The output is then generated exactly as before. If you do some descriptive statistics, you discover that almost every Gujarati has lower income, and a house worth less, than almost every non-Gujarati, which is a far starker difference than anything you see in the real world. The non-Gujaratis are up here: they perform well, they have good inputs and good outputs. The Gujaratis are down here, performing badly. As a human heuristic, you could easily say, you know what, I'm not going to lend to any Gujaratis, because they're all down here and they're all bad.

Linear regression doesn't do that. It observes that the protected class still doesn't matter. It discovers that the underlying reason Gujaratis were performing worse is simply that their income was lower, and it predicts that a Gujarati with high income would also repay his loans. It has discovered that the predictive factors matter and this other information does not. The result is still allocatively unfair: the algorithm will correctly predict that Gujaratis are not going to repay their loans, given that they all have low income, and it won't lend much money to them. From the perspective of group fairness, this is not good. But it's individually fair, it's utilitarian, and it's also noticing something that's a bit problematic.
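Again, as a rough reconstruction rather than the talk's actual code, the "bad history" simulation looks something like this; the minus-two shift mirrors the one described above, the rest of the numbers are mine.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
gujarati = (rng.random(n) < 0.25).astype(float)

# The protected group's financials are shifted down: mean -2 instead of 0.
income = rng.normal(0, 1, n) - 2.0 * gujarati
house  = rng.normal(0, 1, n) - 2.0 * gujarati

# Repayment still depends only on the financials.
y = 1.0 * income + 2.0 * house + rng.normal(0, 0.5, n)

X = np.column_stack([income, house, gujarati, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)  # still roughly [1.0, 2.0, ~0.0, ~0.0]

# The flag's coefficient stays near zero: the regression attributes the group's
# worse outcomes to their lower income, not to the group label itself.
```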
Another thing that's often said about AI is that it's going to learn human biases, for example that the input data itself is biased. Imagine again the situation where people come into the bank and give you their income and other financial data. And imagine now that the bank manager, rather than just throwing the Bihari applicant out, puts everything on a form, but he believes that if you're Bihari, you're lying about your income, so he subtracts, say, 30K a month from your reported salary, only if you're Bihari. Now the input data is fundamentally biased: non-Biharis follow one line, Biharis follow a different line, but otherwise the same pattern is there.

If we run linear regression on this, we discover that rather than perpetuating the bias, the algorithm corrects it. If the input data is biased by subtracting a fixed amount, the algorithm just adds it back, because its only desire is to predict whether a person repays a loan. It doesn't care how it gets there, and if the most efficient way to get there is to correct the bias in the input, it will do exactly that. It's very different from a human: it only cares about accuracy. We can observe this directly; I checked the residual, which is a measure of the error, and the version of the model that adds the bias back performs a lot better. On the other hand, this is a bit unfair, because what the algorithm is actually doing is saying: if you're Bihari, I'm going to give you two extra points on this scoring system, purely because I've observed your ethnicity. That's not so fair at an individual level.

This also gives us a recipe for checking whether an algorithm is biased. Make a new data set containing the output of the old algorithm you want to test, together with the protected class. Train a new model on that data. If the protected class changes the output, the old algorithm was biased; if it doesn't, it probably wasn't. Fundamentally, things like bias in your inputs are hidden features that help predict your data, and machine learning is all about finding those kinds of patterns. This version is now allocatively fair, because Biharis and everybody else will be issued loans at the same rate, but it's individually unfair, and it's genuinely complicated whether it satisfies the virtue of not noticing.
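Here is a sketch of the biased-input scenario, reconstructed rather than copied from the slides; the talk uses 30K a month, here the shift is 0.5 in standardised units, and the sample size and noise are mine.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
bihari = (rng.random(n) < 0.25).astype(float)

true_income = rng.normal(0, 1, n)
house       = rng.normal(0, 1, n)

# Repayment depends on *true* income and house value.
y = 1.0 * true_income + 2.0 * house + rng.normal(0, 0.5, n)

# But the recorded income is biased: the manager knocks a fixed amount off
# the reported income only for the protected group.
recorded_income = true_income - 0.5 * bihari

X = np.column_stack([recorded_income, house, bihari, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)  # roughly [1.0, 2.0, +0.5, ~0.0]

# The regression gives the protected group a positive "bonus" that undoes the
# bias in the recorded input. A nonzero coefficient like this is also the
# signal the bias-checking recipe looks for: regress the old system's scores
# on the features plus the protected class, and see whether the class term is
# needed to explain them.
```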
Here's a real-world example where this kind of analysis was done, and it's where I've taken the title of this talk from. Some researchers were studying microcredit in Bangladesh, and they did exactly what I've been describing: they ran linear regression on borrowers and looked at various characteristics, like whether they were Kasi or Patro, which are the two major ethnicities in the area, their education, whether they were married, whether they were farmers. One thing they discovered to be highly predictive is that if you're female, you're much more likely to repay a loan, and this result has been replicated quite a few times. Another study looked at lenders instead of borrowers and found that a lender with more female clients has a much better-performing portfolio. Similar things have been done in the US. One study found that religious people are much less likely to repay a loan than non-religious people: people would submit a loan application, and if it contained religious words, like "by God, I will repay you", the person was actually much less likely to repay.

The same study found that medical words like "angioplasty" were also strongly correlated with not repaying, possibly because the person dies and then never gets to repay. And religion, in the US, is considered a protected class. Here's another example: for home loans in the US, black Americans turn out to be significantly less likely than Asian Americans to repay. ("Asian" here is a broad catch-all category that includes Indians, Chinese, Filipinos, everyone; it perhaps should be more granular, but it's not.) This one is a little more uncomfortable. In the microcredit example, our instinct is, yeah, maybe we should offer women lower-rate loans because the algorithm said so, and that kind of seems okay. In the religious example, we're a little more uncomfortable. And in the racial example, following the data means we should be offering black borrowers higher-rate loans. People tend to react very differently depending on which of these examples they're considering, and it's not clear to me what one should do about this.

The virtue of not noticing is complex here, in the following sense. Maybe there's bias in the input data: there are other predictors, like age and education, and maybe we're mismeasuring education for women, say because women just don't report that they went to school and dropped out in the 10th standard, so maybe the model is correcting a bias like that. Or maybe we're observing something intrinsic: maybe women are just naturally more responsible than men and will naturally pay back loans at a higher rate. In that case we'd be observing one of the things we're not supposed to observe, according to this ethical principle. I don't know the answer; it's hard to determine whether it's an intrinsic difference or a bias in the input data. But nevertheless, you've run the algorithm, you have a prediction of whether someone is going to repay, and you have to do something with it. You have to make a decision. The question is: what do you do?

One key point is that the protected class is just another feature. Here's another quote about how AI is supposedly going to behave badly. This is from Cathy O'Neil, who has been a huge critic of using algorithms for anything. She says that if we had allowed statistical models to be used for college admissions in 1870, we would still have 0.7% of women going to college. Her intuition is that the algorithm would observe that everyone going to college is male, and conclude that being male is an important trait for college. But the AI just treats these as columns, booleans that are true or false. So let's swap out the meaning of the column and make the same statement. At Simpl, Zomato and BookMyShow have been two of our merchants for a while, so you can make payments there, and for a long time we had no training data whatsoever on Grofers, which was a very new merchant using us. "If we allowed a model trained on Zomato and BookMyShow to be used for credit approvals, we would still today have 0% of Grofers customers using Simpl." That's a silly statement to make, right? It essentially says that if you're using a statistical model, you can never onboard a new merchant. Since I don't have too much time, I'm going to gloss over the math that illustrates why this just doesn't happen, but a rough sketch of the idea follows.
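This is a toy version of that argument, under assumptions I'm supplying rather than taking from the talk: the model never sees the group label, and the feature it scores on means the same thing for the new group as for the old one.

```python
import numpy as np

rng = np.random.default_rng(3)

# "Old" training data: applicants are effectively all from group A.
n_old = 5_000
marks_old = rng.normal(0, 1, n_old)                   # e.g. exam marks
outcome_old = marks_old + rng.normal(0, 0.5, n_old)   # later performance

X_old = np.column_stack([marks_old, np.ones(n_old)])
coef, *_ = np.linalg.lstsq(X_old, outcome_old, rcond=None)

# Today's applicant pool: half group A, half group B, with the same
# distribution of marks. The model has never seen group B before.
n_new = 5_000
marks_new = rng.normal(0, 1, n_new)
group_b = rng.random(n_new) < 0.5

scores = np.column_stack([marks_new, np.ones(n_new)]) @ coef
admit = scores > 0.5  # some fixed cutoff

print("admit rate, group A:", admit[~group_b].mean())
print("admit rate, group B:", admit[group_b].mean())  # roughly the same
```

Because the model scores purely on the predictive feature, a group that was absent from the training data gets admitted at the same rate once its members show up with the same marks; the old base rate does not get baked in.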
What I'll do instead is illustrate a couple of examples where the exact opposite actually happened. Historically, in Maharashtra, most auto-wallas were Marathis; at the time, almost everybody there was. Then, relatively recently, Uber and Ola came along, and they use an algorithm to decide who gets to drive: basically, look at your predicted star rating, and if it drops below 4.3, you get kicked off the network. The net result is that suddenly there were a lot of Biharis driving autos and cabs in Maharashtra, which is the exact opposite of Cathy O'Neil's prediction that the algorithm would just reproduce the past. Instead, the algorithm discovered that anyone can drive an auto; all that matters is how good a job they do and whether they get you where you're going. And the people who dislike the algorithm are the Shiv Sena guys, who want both to stop Uber and to pass laws saying only Marathis can drive autos.

A similar thing happened with colleges in the US, actually around the time Cathy O'Neil was talking about; I guess she just wasn't aware of this example. Colleges started using a model for admissions in 1908, trained mostly on white Christian men who were also rich (there's a link on the slide to where this comes from). When the model was put into action, the number of Jews, a religious minority in the US of about 1% of the population, suddenly skyrocketed in college, because for various cultural reasons Jews study really hard; studying religious texts is a major part of the culture. At that point the colleges dropped the model and put humans back in the loop. Keep in mind this is 1922 and nobody liked Jews, so the president of Harvard called this a crisis, dropped the model, and after they went back to having humans decide, they got the number of Jews back down quite a bit. To my mind that's not particularly fair, but their ethics at the time said you should not have so many Jews.

Fundamentally, a lot of what you read about AI relates to a story about George Bernard Shaw. He was in Ireland, where there were a lot of religious conflicts between Catholics and Protestants, two slightly different varieties of Christians. An old man once asked him, can you tell me, are you Catholic or Protestant? Shaw said, I'm an atheist, I don't believe in any God. And the old man, who lived in Ireland, asked him: but is it the Catholic or the Protestant God you don't believe in? This old man had probably never left Ireland; he knew nothing of the world other than Catholics versus Protestants, so the only way he could make sense of an atheist was to bring him back into that frame. A lot of people talking about AI don't understand how a random forest works, so they try to bring everything back to the question of how a human would behave. Humans have evolved to be tribal creatures: we like people who look the same as us and dislike people who look different. A random forest, or a spreadsheet sorted by a column, doesn't have those issues.

So here's the unpleasant trade-off, and I found one paper with two graphs that illustrate it very nicely.
The first graph shows the percentage of people in different groups who have a given FICO score. In the US, FICO is a risk score: the higher it is, the less likely you are to default on a loan. Imagine we want to choose a fixed cutoff and refuse to lend to anyone below it. If you choose a FICO of 600, which is fairly arbitrary, just where a tick happens to fall on the graph, you would be rejecting about 75% of blacks and 25% of Asians, with about 30% of whites and something like 50% of Hispanics rejected. If you choose one cutoff for everyone, you're individually fair, but you're violating the principle of group fairness. Alternatively, you could choose different cutoffs: keep 600 for Asians, then slide along the curve for black applicants to the point with the same rejection rate, which is about 410 or 420, eyeballing the graph. That's now individually unfair, but it gives you group fairness. You can't have both.

The other graph shows the non-default rate as a function of FICO score: the x-axis is the FICO score, and the y-axis is the fraction of people who repay. At a FICO score of 600, about 80% of Asians will repay a loan, whereas about 60% of blacks will. In that sense, the FICO score is actually biased in favor of blacks: at the same score, they default more often. One thing we could do is charge both groups a 43% interest rate, which would just let us break even; to make a profit we'd have to push it to 45 or 46. This is individually fair, but it's non-utilitarian, because one group, blacks in this example, will predictably get more money out than they put in, and the Asians will predictably put in more than they get out. We could charge the groups different interest rates instead, but now we're violating individual fairness. We'd be more utilitarian, in that loans would be allocated more accurately, but we'd also be violating the virtue of not noticing, because we've noticed something that makes a lot of people uncomfortable, something we wish weren't true but that does seem to be there in the data.

The key point is that there is no choice of cutoff and no choice of interest rate that satisfies all of our principles. Whatever we do, we have to make some unpleasant choice. So all we're left with from this analysis is a set of uncomfortable questions. How many bad loans should we issue in the interest of fairness? How much individual fairness should we sacrifice; that is, how much should I actively discriminate against one group in order to make the treatment of groups fair? I don't have answers for you. In San Francisco, they tend to prefer two of these principles and ignore the other two, but I don't know if that's right. Ultimately, this talk comes out of the fact that I can see the trade-off, but I don't know what to do about fairness, and I don't know what Bangalore ethics are. This is something I'm hoping to start a discussion about here: I'm hoping the people in this room can help decide what Bangalore ethics are, so that I can actually follow them.