Hello, everyone. Today we will discuss computable philosophy, a proposal that Lê Nguyên Hoang and I laid out in our book. There are a few key ideas in this proposal. The first is that computation, judgment and information are more interlinked than people usually think. The second is that computing and making judgments by programming, by setting rules by hand, does not scale, so we need to complement it with learning, with inference from observations. So the first axis is computation, judgment and information; the second axis is learning, how to infer laws and rules from observations; and the third axis is probabilistic thinking, which, as we will see, is inevitable if we want our inferences to be correct and robust. These three axes are well studied within computer science and information science. What we believe, as we have discussed in previous videos, is that they have philosophical aspects that deserve more discussion. In particular, when it comes to questions like AI ethics, or law itself seen from an algorithmic perspective, or even the scientific method seen from an algorithmic perspective, we have many illustrations in the book of why these three tools could help us discuss moral philosophy; the value alignment problem, where we want to align the objective function of an AI with human preferences; the side effects of algorithmic decision-making; Goodhart's law, that is, what goes wrong when we maximize a metric; preference and volition learning, learning not only what people prefer but what they would want to prefer if they had more time and information; social choice theory, which game theory and economics researched over the past century but which is increasingly important for algorithmic decision-making and for aggregating the preferences of many users; adversarial computing and decentralization, which also need this new toolbox of probabilistic thinking, learning and looking at computation as a form of judgment; and finally all the questions relevant to AI safety, like reward hacking, corrigibility, et cetera. We will not discuss all of these. We will discuss just the three key aspects, computation, judgment and information, learning, and probabilistic thinking, and at the end we will illustrate how they can be useful by discussing privacy and fairness, two important questions in algorithmic decision-making. Yeah. Yeah, so this is a big program that we have in mind, and I think it's very interesting: it really is an interesting insight that computer science can give us into what it means to provide a judgment, a good judgment, a reliable judgment, and to combine this with moral philosophy. In particular, maybe you can describe a bit what you mean by computation in general, but one important feature is really this idea of step-by-step reasoning, with very clear steps; and if you explain ahead of time what all the steps are going to be, then others can analyze the step-by-step procedure, the algorithm that will be used, and analyze it in many, many respects. Arguably this is also something that has been extremely important in the history of mankind and that has really changed the way we do a lot of things; maybe you want to detail this. Let's take a concrete case, something people don't usually think of as a moral issue: you just type "COVID-19 vaccine" into a search engine.
What comes up as the first result, or in the first ten pages of results, is a decision that is made algorithmically by the search engine's ranking algorithm, but it carries enormous moral consequences, because how you nudge three billion people toward resisting or accepting COVID vaccines has consequences for human lives. So this is a moral question, a philosophical question, and it comes with a very short deadline: the search engine has to answer it within a few milliseconds. Yet we tend to treat it as a purely technical question: oh, we just show what is most relevant, or we just show what most people are discussing. Is that really what we want to show people during a pandemic? For example, if a minority of people is pushing a conspiracy theory against vaccines, should we amplify it just because this minority is extremely active on the platform? Yeah, and an interesting thing is that when we raise these sorts of dilemmas, a very common reaction is to postpone the decision: we say it is a difficult dilemma and we need to keep discussing it. It is a fair point that we cannot make the decision right now, but what we can do right now is think about how we are going to come up with a decision. Just saying "we'll have to discuss it" is not an algorithm with the right property of eventually producing a decision, ideally the best decision possible. When we face such a dilemma, it is important not to leave the question open like that, but also not to decide with our immediate intuition; instead we should think about a future step-by-step procedure, an algorithm, and propose different algorithms that will eventually make a decision, because we do need to eventually make a decision, especially for problems like answering search queries. Yeah, I think a good illustrative example is something we discussed previously concerning multi-armed bandit problems and clinical trials (a small sketch of this bandit idea follows below). Here, keeping the algorithms as they currently are is itself an ethical problem, because all these algorithmic decisions have ethical implications, and whatever solution we implement to do better will have to reach a satisfactory answer as quickly as possible while we keep modifying these algorithms. So it is very important to decide correctly what procedure we implement to reach this sort of ethical agreement for social networks.
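To make the multi-armed bandit remark above a bit more concrete, here is a minimal, hypothetical sketch of one standard bandit strategy, Thompson sampling, applied to a toy two-treatment trial. The treatment names, their success rates and the patient counts are all made up for illustration; this is not the setup discussed in the earlier episode.

```python
# A minimal sketch of the multi-armed-bandit idea mentioned above, applied to a
# toy "clinical trial": two treatments with unknown success rates, and we must
# allocate patients while learning which treatment works better.
# The true success rates below are made up for illustration.
import random

random.seed(0)
TRUE_SUCCESS = {"treatment_A": 0.55, "treatment_B": 0.70}  # unknown to the algorithm

# Thompson sampling: keep a Beta(successes+1, failures+1) belief per arm, sample
# from each belief, and give the next patient the arm with the best sample.
stats = {arm: {"success": 0, "failure": 0} for arm in TRUE_SUCCESS}

for patient in range(1000):
    sampled = {
        arm: random.betavariate(s["success"] + 1, s["failure"] + 1)
        for arm, s in stats.items()
    }
    chosen = max(sampled, key=sampled.get)
    outcome = random.random() < TRUE_SUCCESS[chosen]   # observe the patient's outcome
    stats[chosen]["success" if outcome else "failure"] += 1

for arm, s in stats.items():
    n = s["success"] + s["failure"]
    print(arm, "patients:", n, "empirical success rate:", round(s["success"] / max(n, 1), 3))
```

The point is simply that the allocation rule is itself an explicit, auditable procedure for deciding under uncertainty, rather than a postponed discussion.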
Interestingly, for a long time this was difficult even to think about, because an algorithm has to be described, it has to be explained to someone else. Back in the old days, procedures were transmitted as traditions from one person to another. But at some point mankind invented writing, and the invention of writing completely changed the game: people could now write down the algorithms to be followed. These written algorithms are what we now call texts of law. They are not rigorous algorithms in the sense of what you would tell your computer to execute, but they are the beginning of this algorithmic approach to decision-making, and in particular to judgment in the case of the law. By having the algorithm written down, you get several nice properties. One of them is that the same law can be applied in different settings: applying the same algorithm to different people is what we sometimes call procedural fairness. But you also get other properties. Now that the law is written, now that the algorithm is written, you can analyze it, you can verify it; you can say to the judge, wait a minute, you did not judge me according to the algorithm, and you can argue about it. You can also improve upon it: you can say, the current version of the law has this flaw, it makes this decision in this particular case, and most people think that is not good, so we change the law, we improve the algorithm. These are all features of written laws, and of algorithms, that were a major breakthrough in the history of judgment. So maybe the takeaway from this part is: if someone tells you they do not want to be judged by an algorithm, ask them whether they would rather be judged by the law or by the mood of a judge who does not tell them on what basis they were judged. Compare a judge who tells you, "you are guilty because I think you are guilty," with a judge who tells you, "you are guilty because, under the law of this country, if you park your car in front of the police station for more than three hours and there is then an accident at the police station, you have to go to jail." The second is an algorithm, an if-then, and you realize that most people prefer to be judged by an algorithm that is transparent, stated publicly, that you can ideally know in advance, or that you are assumed to know in advance. So being judged by an algorithm is actually progress that we made thousands of years ago. The problem now is being judged by algorithms that you cannot read, that are too big for you to read, to audit and to assess. That is the real problem: not being judged by an algorithm, but being judged by an intractable, non-readable, very long, very complex algorithm. Yeah, and this is already the case for the law as well: the texts of law are transparent in the sense that the text is fully written down somewhere, but not transparent in the sense that it is very hard to interpret the law correctly, partly because it is long and partly because of the terminology. And by the way, to go back to the example we like to give: thousands of years ago, in all cases you were judged by an algorithm. Either it was the explicit if-then — if you steal one cow, you have to pay the equivalent value of one cow — or it was the other algorithm, which we could call the mood of the crowd or the mood of the judge. That one is not transparent, but it is still an algorithm, a decision-making process; it is just very chaotic, you cannot predict it, you cannot anticipate it. So you always prefer the short, clear, transparent algorithm: if this, then this; if this, then this. There is actually nothing new here: we have always preferred transparent, clearly stated algorithms to obscure, chaotic, non-transparent ones. Yeah. And also, for certain algorithms, you can sometimes prove properties. For instance, take the Gale-Shapley algorithm, which is used to decide which student goes to which university: mathematicians have studied this algorithm, and we know that it has some nice properties. For instance, it leads to a so-called stable matching — I won't go into the details — and it has other nice properties like incentive compatibility, if it is run in the right direction. And this can only be done if you have a known, written algorithm; it is very hard to prove properties of the mood of the crowd or of a single human.
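Because the Gale-Shapley algorithm is written down, we can actually run it and test the property just mentioned. Here is a minimal sketch of the student-proposing deferred-acceptance version, with made-up preference lists and capacities, plus a brute-force check that the resulting matching is stable (no student and university would both rather be matched to each other).

```python
# Back to the Gale-Shapley algorithm mentioned above: because the procedure is
# written down, we can both run it and check its claimed property (a stable
# matching, i.e. no student/university pair that would rather defect together).
# The preference lists and capacities below are made up for illustration.
students = {"s1": ["u1", "u2"], "s2": ["u1", "u2"], "s3": ["u2", "u1"]}
universities = {"u1": ["s2", "s1", "s3"], "u2": ["s1", "s3", "s2"]}
capacity = {"u1": 1, "u2": 2}

def deferred_acceptance(students, universities, capacity):
    rank = {u: {s: i for i, s in enumerate(prefs)} for u, prefs in universities.items()}
    next_choice = {s: 0 for s in students}
    admitted = {u: [] for u in universities}
    free = list(students)
    while free:
        s = free.pop()
        if next_choice[s] >= len(students[s]):
            continue                      # s has exhausted their list, stays unmatched
        u = students[s][next_choice[s]]
        next_choice[s] += 1
        admitted[u].append(s)
        admitted[u].sort(key=lambda x: rank[u][x])
        if len(admitted[u]) > capacity[u]:
            free.append(admitted[u].pop())  # the least-preferred applicant is rejected
    return admitted

def is_stable(admitted):
    match = {s: u for u, enrolled in admitted.items() for s in enrolled}
    for s, prefs in students.items():
        for u in prefs:
            if u == match.get(s):
                break                     # s cannot do better than their current match
            rank_u = {x: i for i, x in enumerate(universities[u])}
            if len(admitted[u]) < capacity[u] or any(rank_u[s] < rank_u[x] for x in admitted[u]):
                return False              # s and u would both prefer each other: blocking pair
    return True

matching = deferred_acceptance(students, universities, capacity)
print(matching, "stable:", is_stable(matching))
```

This is exactly the kind of analysis that is impossible to carry out on "the mood of the judge".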
Now, having said this, there is one limitation to the law, which is that it has to be written by humans, and we humans are very smart and all, but we are also limited in our cognition, and we often have trouble imagining cases that have never occurred before. Also, the world is getting more and more complex, so it is getting harder and harder to design the right algorithm to judge cases in our societies. That is why, instead of just writing things down, we often rely on the brain of a judge rather than on the law as it is written. And this has an explanation, again in terms of computer science, in particular in terms of what is known as Solomonoff complexity, also known as Kolmogorov complexity, which is defined as the length of the shortest algorithm — the shortest description — able to do what we want it to do: here, the shortest text of law that does what we want. And there are strong arguments, from Turing in particular, that many things simply cannot be shortened: a good text of law probably does not fit in a book of two or three hundred pages, and maybe it does not even fit in a thousand books of a thousand pages each, because the world really is complex in a very meaningful sense (a rough illustration of this description-length idea follows below). In that case, we cannot have written laws that contain everything; we have to do something else. And this something else was proposed by Turing in 1950: it is the idea of learning. Instead of writing everything down, we let the machine learn from experience what ought to be done. This also occurs in the case of the law, where it is known as jurisprudence. And by the way, it also happened in science, where it is known as the scientific method. The revolutionary idea we call the scientific method had of course existed for at least ten centuries, but it really took off with the Galilean revolution — Kepler, Newton and so on — when we started inferring the rules of nature from observations. This is where human knowledge took off, because it simply scales better than having a wise philosopher sit down and state the rules of the universe. Yeah, it doesn't scale. Yeah, and the case of science is interesting, because you have the texts of the laws of nature that have been written down more and more, and we have improved these algorithms as we went. But then we also went further: we asked ourselves how the texts of the laws of nature should be written, how we should come up with the right laws of nature. And this led to meta-algorithms — they are still algorithms, but they are learning algorithms, algorithms about how to find the laws of nature. This is also known as epistemology. So that is why there is a natural link here between computer science, and especially learning theory, and epistemology and philosophy.
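Kolmogorov complexity itself is uncomputable, but a rough intuition for the description-length argument above can be had with an ordinary compressor, used here as a crude upper bound on how short a description can get; the two "texts" below are synthetic stand-ins invented for illustration.

```python
# A crude way to build intuition for the description-length idea above: real
# Kolmogorov complexity is uncomputable, but a compressor gives an upper bound on
# how short a description can be. A very regular "law" compresses well; a complex,
# irregular one does not.
import zlib, random

random.seed(0)
regular_text = ("if you steal a cow, repay a cow. " * 300).encode()
irregular_text = bytes(random.randrange(256) for _ in range(len(regular_text)))  # stands in for irreducible complexity

for name, text in [("regular", regular_text), ("irregular", irregular_text)]:
    print(name, "original:", len(text), "compressed:", len(zlib.compress(text)))
# The regular text shrinks to a tiny description; the irregular one barely shrinks,
# echoing the claim that some rules simply cannot be written down much more briefly.
```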
Learning, well, it took off. The real bottleneck of learning was having a lot of data, because you need a lot of data to learn, but you also need a lot of computational power to run the learning algorithms, and you need large memories and so on in the machines to do these things. But once you have all of that, it turns out that learning is much more efficient — you can do it in a human brain as well — much more efficient than writing down the texts of laws, because writing them down by hand is just too hard for humans. It also comes with disadvantages, although the comparison point is sometimes the empty set: there is no human-written algorithm that can recognize a cat with 99% accuracy, for instance. So when you say that the learned cat-recognition algorithm has flaws, that it is not really transparent, you are in a sense comparing it to the empty set. Still, one could say: we now have these algorithms, these judges, that learned from past experience how they should judge in the future, and these algorithms are now too complex to be studied with the mathematical tools we usually use for small algorithms. So they are harder to analyze, they are more like black boxes, harder to understand. That is a limitation, but it is a limitation inherent to learning, and for some tasks you cannot do without learning. It just creates new challenges that need to be faced: the verification of algorithms that have learned is much harder than the verification of algorithms that we designed to be analyzed. Maybe again here — I don't know if Louis wants to say something about learning? Go ahead. No, no, I just wanted to conclude this part with, again, a simple takeaway message if you are being exposed to these ideas for the first time: hand-programming rules does not scale. That is maybe the main key insight of Turing — that we cannot just sit down and, out of our own reasoning, start writing rules, if this do that, if this do that, and produce a smart set of rules, a smart algorithm. Turing realized that if we want to speed up the programming of an intelligent algorithm, we need this algorithm to be adaptable: the conditions in its if-then rules should be tweakable, modifiable, depending on what observations have been made. So we modify the conditions of the ifs and elses with respect to what we observe. For example: I observed that when I do this four times I get this result, but then I had a new experience where I only needed to do it three times, so maybe this parameter, four, could be moved down a bit; and then I realize that 3.5 is better on average. So the algorithm needs to have parameters that can be modified depending on the observations or the experiments (a tiny sketch of this parameter-tweaking idea follows below). Turing argued that this is faster, more efficient, and realistic: we can have, within our lifetime, a program that becomes intelligent in some sense of intelligence — achieving an objective — if we let it learn from data, whereas if we wanted to hand-program it, it would take us a very, very long time of writing rules.
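Here is a tiny sketch of that parameter-tweaking idea, assuming a made-up scenario where the hand-coded threshold starts at 4 and the observations actually center around 3.5.

```python
# A tiny sketch of the point above: instead of hard-coding "do it 4 times" into
# the rule, keep the threshold as a parameter and nudge it after each observation.
# The observations below are synthetic; the true best value is set to 3.5.
import random

random.seed(1)
threshold = 4.0           # the hand-programmed initial guess
learning_rate = 0.05

for step in range(2000):
    observation = random.gauss(3.5, 0.5)                      # what "worked" in this experience
    threshold += learning_rate * (observation - threshold)    # move the parameter toward the data

print(round(threshold, 2))   # ends up near 3.5, without anyone rewriting the rule by hand
```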
And this is more or less what happened. Take a task like image recognition: for four or five decades, people tried to come up with handwritten rules — okay, if there is a polygon like this, and you look at the shape, and this is the nose, and this is the mouth, and some ratio between them, just making things up, then this is probably such-and-such a face — and this did not work. But if we feed an algorithm many data points and let it, with what we call a learning algorithm, change its own parameters, then — of course we are not fully there yet — we get, for example, algorithms at Facebook that recognize faces, that recognize that this is Louis and this is Katrina and so on. And we clearly could not have obtained those algorithms by handwriting if-this-do-that rules; we just let them learn from data. This idea is very old: it is from 1950, it was stated by Alan Turing, and it is the key idea behind learning. Learning simply scales better than programming. So if we want to write algorithms, if we want to write laws, we need to complement programming with learning, and sometimes we mainly need learning. And arguably, in the context of law, this happened too: we call it jurisprudence — I think it is "jurisprudence" in English as well as in French — where you observe cases and you make up rules based on cases that satisfied everyone, so to speak. We observe that when we punish a killer with this punishment there is no riot, everyone, or almost everyone, is happy with the punishment; and when we punish a killer with that other punishment, people are not satisfied, the family of the victim is not satisfied, so clearly this law needs to change. That is a learning process: writing law is itself a learning process. And this is another point where law and algorithmics meet, just as they met initially thousands of years ago. Another point I am thinking of corresponds more to the first section of the podcast, but here it is: there are two ways to think about the way we write laws. This is a discussion I had with Gilles Dowek, who told me this, and which I find very interesting. Essentially, you can think of the law either as an algorithm, which is what we have been discussing so far, or as specifications: this must happen, this must happen, this must happen. Specifications are not an algorithm; they are just the things you want your decision to satisfy. The good thing about specifications is that they are arguably easier to write — you can just say, oh yeah, it should be like this. The trouble is that sometimes the set of specifications describes an empty set, meaning that no decision can satisfy all the specifications you want the law to satisfy. That is why I think it is at least interesting not to stop at specifications, which arguably is a lot of what people do when they discuss guidelines for AI, or an AI constitution: they say that a good AI needs to satisfy this, this, this and this. I think that is useful, but it can only be a first step, because eventually we need an algorithm that tells us what should be decided, not just what the specifications are. Yeah, it is something I find interesting.
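To illustrate the "empty set of specifications" point, here is a toy sketch: two posts to rank, and two reasonable-sounding specifications that no ordering can satisfy at the same time. The posts and the rules are invented for illustration.

```python
# A minimal sketch of the "specifications can describe an empty set" point: a toy
# decision space and a few plausible-sounding requirements that no single decision
# satisfies simultaneously. The scenario and constraints are made up for illustration.
from itertools import permutations

posts = {"A": {"popularity": 90, "verified": False},
         "B": {"popularity": 40, "verified": True}}

specs = [
    lambda order: posts[order[0]]["popularity"] >= posts[order[1]]["popularity"],  # most popular first
    lambda order: posts[order[0]]["verified"],                                     # never lead with unverified content
]

feasible = [order for order in permutations(posts) if all(spec(order) for spec in specs)]
print(feasible)   # [] -> the specifications alone cannot tell us what to decide
```

The specifications alone yield no decision; only an algorithm that arbitrates between them, a trade-off, does.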
Also, if you have an algorithm, you can analyze other things about it, like its computation time. And there is an even deeper point: determining whether there is a solution to a set of specifications — whether there exists a decision X such that this and this and this hold — is a statement, a conjecture, and we know from Turing that determining whether such a conjecture is true, whether it has a proof or not, is undecidable in general, meaning there is no algorithm that achieves this all the time. So that is another argument for thinking in terms of algorithms rather than just specifications. The third tool to discuss is probabilistic thinking, which is clearly critical in the court of law, even though it has been forbidden in the UK after some cases. The problem is that a lot of people, including myself, have a very hard time thinking probabilistically; it is just very, very hard. But arguably it is also very critical. The way things are sometimes phrased in the context of law and of science, people talk about proofs. But if you think about it, proofs are only well defined in mathematics; in the context of science or of law, what we have is evidence — data, essentially. Based on this data we can infer, we can learn, what is more likely to have occurred or not, but you never reach certainty, because it is always possible that there is some explanation you have not thought of, perhaps a much more complicated one; and these more complicated or unforeseen explanations are arguably quite frequent in the case of the law. So you need to take this uncertainty into account, and you need to reason with uncertainty to come up with a decision. Instead of saying "if the person is guilty, then we should do this; if the person is innocent, then we should do that" — which sounds very good, but in practice you never reach that state of certainty — you should think: given how likely it is that this person has done this and this, what should be decided for them? That is much more probabilistic thinking. You might think this is very weird in the context of law, but sometimes you just do not have enough data. And it becomes even more critical in problems that involve a lot of uncertainty, for instance the COVID situation. What should be answered when someone searches "COVID-19 vaccine" on Google? This is a very, very complicated question, also because we do not yet know how long it is going to take to have a vaccine, how dangerous the vaccines are going to be, whether they are going to be produced at scale. There are lots of open questions, and what you reply today to these questions matters a lot for preparing the population for what is coming next. So you need to make a decision right now, despite the huge uncertainty about what is going to happen in the coming months. Yeah, one example of this was one of the recent studies on hydroxychloroquine that was retracted a few weeks later: when you see a study, there is a possibility that it is not the high-quality information you expected, even though quite often it is.
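A small sketch of how one might treat a study like that as evidence rather than proof: a Bayesian update on "the treatment works" that explicitly allows for the study being flawed. All probabilities here are invented for illustration.

```python
# A hedged sketch of "evidence moves probabilities, it does not prove": a Bayesian
# update on "the treatment works" after one positive study, while allowing for the
# possibility that the study itself is flawed. All numbers are made up for illustration.
def posterior(prior, p_flawed, p_pos_if_works=0.8, p_pos_if_not=0.1, p_pos_if_flawed=0.5):
    # Likelihood of seeing a positive study, mixing the "sound study" and "flawed study" cases.
    like_works = (1 - p_flawed) * p_pos_if_works + p_flawed * p_pos_if_flawed
    like_not   = (1 - p_flawed) * p_pos_if_not   + p_flawed * p_pos_if_flawed
    return like_works * prior / (like_works * prior + like_not * (1 - prior))

prior = 0.3
print(round(posterior(prior, p_flawed=0.0), 3))   # if the study were surely sound: big update
print(round(posterior(prior, p_flawed=0.15), 3))  # if it might be flawed: a smaller, more honest update
print(round(posterior(prior, p_flawed=1.0), 3))   # if it is surely flawed: no update at all (back to 0.3)
```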
Coming back to that study: basing a decision on this kind of evidence — which, as Lê said, is not a clear proof that tells you with 100% certainty what behavior to adopt — means you should treat it as evidence, while keeping in mind the possibility that mistakes were made in the process that produced it. That is also why much stronger evidence comes from things like meta-analyses, or from the global context in which the whole body of science is produced. Without this probabilistic thinking in mind, we fall into one of two mistakes: either being absolutely convinced that the authors of the study are trying to manipulate the results because of conflicts of interest, or, on the other side, being fully convinced that the hydroxychloroquine treatment is absolute bullshit. We should not be at either of these extremes; we should treat every piece of evidence as something that slightly moves our probability estimate of what the right decisions are, given the situation. Yeah, so there is a lot of work to do, and it is very hard, but these skills are really critical for better decision-making: improving our probabilistic thinking, in particular estimating the probabilities of different events more correctly. And then there is the other side of probabilistic thinking: once you have these probabilities, what should you do? One thing that is very hard, but that really should be done, is to not reason only in terms of the most likely scenario. It is very tempting to say "well, I believe this" and forget that you do not fully believe it, that there may be, say, a 5% chance that the alternative scenario occurs. This is particularly critical in the case of pandemics. If you go back to January or February 2020 — let's say, for those watching this in the future — there were different scenarios, and maybe the most likely scenario for the COVID-19 outbreak, which was not yet a pandemic, was that it would not become a pandemic. And maybe right now you could say that for 2021 the most likely scenario is that there will not be a pandemic of another virus or another disease much worse than COVID-19. That is the most likely scenario. But we should not think only in terms of the most likely scenario; we should prepare for the possibility that things go badly, and we should particularly prepare for it if the probability of things going very badly is not too small. A one percent chance of something extremely bad happening is already huge; if it is ten to the minus twenty, it is negligible. There is a big difference between ten to the minus twenty and one percent, but it is very hard for us humans to make this distinction, because we tend to lump both together as "just unlikely scenarios". Maybe to illustrate the difference: something that has a one percent chance of happening each year will, over 1,000 years, almost surely happen, whereas something that has a ten-to-the-minus-twenty chance of happening each year will, even over 1,000 years, almost surely not happen. Yeah. And so decision-making has to take this into account; this is part of the safety mindset: making sure you actually compute the probabilities of the very, very bad scenarios.
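The arithmetic behind that distinction is short enough to spell out:

```python
# A 1% yearly risk and a 1e-20 yearly risk are both "unlikely", yet over 1,000 years
# they behave completely differently.
import math

def prob_at_least_once(yearly_prob, years=1000):
    # 1 - (1 - p)^years, computed in a way that stays accurate for tiny p
    return -math.expm1(years * math.log1p(-yearly_prob))

print(prob_at_least_once(0.01))    # ~0.99996: nearly certain to happen at least once
print(prob_at_least_once(1e-20))   # ~1e-17: still essentially never happens
```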
And if this probability is not that small, then you should prepare at least a plan for what to do if it occurs, and maybe even plans to reduce this probability. Mm-hmm. Maybe also one thing about probabilistic thinking that is really overlooked, including by people who work in probabilities — it is the same thing we keep saying about ourselves, people working in computer science: we neglect how epistemologically deep the concepts we have can be, and how they can be applied outside computer science. I would recommend Brian Christian and Tom Griffiths' book Algorithms to Live By, which illustrates this point. And for probabilistic thinking "to live by", there is a two-hundred-year-old book written by Pierre-Simon Laplace, called Essai philosophique sur les probabilités, the Philosophical Essay on Probabilities. In some of its chapters you can see a preliminary version of, for example, a lot of the work that was done in the twentieth century on cognitive biases. What we today call cognitive biases, Laplace calls illusions in the estimation of probabilities. He illustrates this with Leibniz, for example, being biased toward what is common, what is familiar to him, what he was told in his childhood, and trying to see it in phenomena that have nothing to do with it: Leibniz once wrote, in correspondence meant for the Chinese emperor, trying to convince him of Christianity using sums of series, telling him, look, you can get ones out of zeros, and this is creation. And then Laplace goes on, far more brilliantly than what I have just said — I am totally not doing justice to how clear Laplace's statement is — to show how much, when we are used to something and exposed to it during our childhood or our life, we tend to be biased in its favor, confirming it and seeing it everywhere we look. He also gives examples, for instance slavery and the castes in India, as things people normalize because they are common, and then he goes on to expose why frequency and commonality are not valid epistemic arguments. If something is frequent, or if something is common, that does not mean it is okay, either morally or epistemically; commonality is not valid as an epistemic argument or as a moral argument. And of course he also makes a lot of connections with moral philosophy. Unfortunately, this work is really overlooked by people who work in probabilities. I was never told about it in my courses; I discovered it more than ten years, twelve years, after my undergraduate studies, where I studied probability. Yeah, maybe I am not going to make a lot of friends by saying this, but I think this is the best book in philosophy ever written. It is really, really fantastic; I really highly, highly recommend it. And I just want to quote — well, let me choose two sentences; I was going to pick one, but let me quote two sentences from this book. The first one is: the theory of probabilities is basically just good sense reduced to computation. Wow. Yeah, I think it is fantastic, but it is a very bold claim if you think about it. Here, maybe "good judgment" rather than "good sense" or "common sense"? Yeah, maybe good judgment. It gets translated as common sense, or as good sense; good judgment is maybe better, but it is a matter of translation.
But I think this quote is really, really bold, and I think it is very, very accurate; there is really a lot of food for thought in it. It is also aligned with what we have been saying: if you can reduce things to computation, you have done 99.99% of the job, in a sense. Well, you still have to reduce it to effective computation, I guess. But it is really important, I agree — this book, the theory of probabilities and its insight. And the other quote I wanted to give is that there is no science more worthy of our meditations than the theory of probabilities, and none whose results are more useful. Yeah, it says what I think better than I could. So maybe to conclude, we will illustrate all of what has been said by discussing — well, of course only superficially — fairness and privacy. These are very, very complex problems, and unfortunately some researchers tend to treat them as if we could just tackle them with some solution and then they would be solved. Privacy and fairness are not as tractable as, I don't know, let's say proving convexity of a loss function; these are very, very complex topics. So obviously we are not discussing them broadly, just superficially, through a narrow angle, which is how probabilistic thinking has helped us, over the past decade, improve the way we think about privacy and fairness. Maybe we can start with differential privacy, which is more mature now — ten years old, or more, twelve years old at least — and then we can move to fairness, which is even younger but builds on some of the reasoning that was developed for differential privacy. I don't know which of you wants to take that, Louis or Lê? Yes, the idea of differential privacy is that instead of releasing your data, you release some noisy version of the data, such that any observer would be unable to infer with high probability what your true data were. Well, it is not exactly that; it has more to do with how much an observer can change his beliefs, how much he can update his beliefs, after seeing the data you are releasing. That would be the probabilistic interpretation of the concept called differential privacy, which has been one of the leading concepts for privacy over the last fifteen years. The other big line of thought on privacy is complexity-related: you encrypt your message so that no observer with limited computational power can learn anything from it. Yeah, so what is important to see here is how probability has actually made its way into a very interesting definition of privacy — not only the binary definition of private or not private, as we were discussing previously, but something that is as private as possible by revealing as few bits of information about yourself as possible, where bits of information means: how much does someone change what they think of me based on the data coming from me? The fewer the bits of information, the less someone is able to update the probabilities of what they think about me. Yeah, and it is also interesting because you can then think in terms of trade-offs.
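Here is a minimal sketch of that idea using randomized response, one of the classic mechanisms satisfying differential privacy: each individual's released bit is noisy enough to protect them, yet the population rate can still be estimated. The population size and the 10% true rate below are synthetic.

```python
# A minimal sketch of the differential-privacy idea described above, using the
# classic randomized-response mechanism: each person releases a noisy answer, so an
# observer learns very little about any individual, yet the population-level rate
# can still be estimated. The population and the true rate are synthetic.
import random

random.seed(0)
true_answers = [random.random() < 0.1 for _ in range(100_000)]  # 10% of people truly "positive"

def randomized_response(truth: bool) -> bool:
    # With probability 1/2 tell the truth, otherwise answer uniformly at random.
    # This satisfies epsilon-differential privacy with epsilon = ln(3).
    return truth if random.random() < 0.5 else (random.random() < 0.5)

released = [randomized_response(t) for t in true_answers]

# An individual's released bit says little about them, but the aggregate can be de-biased:
# P(released True) = 0.5 * true_rate + 0.25, hence true_rate ≈ 2 * observed_rate - 0.5.
observed_rate = sum(released) / len(released)
print("estimated true rate:", round(2 * observed_rate - 0.5, 4))
```

This is also the pandemic trade-off discussed next: each person's answer stays deniable, while the aggregate still tells you roughly how many people are infected.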
And there is a lot of research about this: if all data remains private in differential-privacy terms, then you cannot learn anything from the population, and this can be a problem in the case of a pandemic. You do want to know things like what fraction of the population currently has COVID-19; this is really important information for deciding whether to lock down or not. But this information, for each individual, can also be a cause of concern for that individual. So you want to learn about your population, but not too much, and differential privacy gives you a way to write down this trade-off and to compute it, depending on how much you care about controlling the pandemic and avoiding deaths, and how much you care about privacy and surveillance. You have a natural framework to choose your trade-off. Yeah, and it is an additional argument against simply having lists of specifications for algorithms. If you have two specifications — the algorithm should avoid deaths, and the algorithm should preserve privacy — then what happens when some particular decision avoids deaths but intrudes on privacy? Should we take that kind of decision, which violates one specification but is demanded by the other? The right answer to this, I believe, is to think of it in terms of trade-offs: have an estimate of how important it is to avoid deaths and how important it is to preserve privacy, and it is great to have a measure, with the differential privacy definition, of how much we are actually invading privacy, so that we can take decisions in a proper way when facing these kinds of dilemmas. What is interesting in terms of fairness is that the basic idea is typically that you want to guarantee something for two different sub-populations — for instance, that the same rate of job offers is made: the probability of receiving a job offer, given that you are from this population or from that other population, should not be too different. This is called group fairness, where you compare fairness between two groups. There is another idea of fairness you can think of, individual fairness, which essentially means that you are treated given all your data, that every piece of data about you is rightfully taken into account; this would be more about your probability of getting a job given what is known, what is publicly known, about you. And you may say, yes, every individual should be judged based on their competence, for instance, and also there should be no discrimination between different groups; so you may want both individual fairness and group fairness. That would be a specification approach. And it turns out that you can prove mathematically that in many cases these two are incompatible. So the specification approach to fairness hits a wall: you cannot have all versions of fairness simultaneously, so you need to specify things further. And again, to compute the trade-offs and to pin down what we really mean by fairness, the language of probability is very, very relevant.
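A hedged sketch of the group-fairness notion just described, on synthetic data: compute the job-offer rate per group under a purely individual, group-blind rule and measure the gap between groups.

```python
# A hedged sketch of the "group fairness" notion above: compare the rate of positive
# decisions (say, job offers) across two groups. The candidates and scores are synthetic.
import random

random.seed(0)
candidates = [{"group": random.choice(["A", "B"]), "score": random.random()} for _ in range(10_000)]

def decide(candidate):
    return candidate["score"] > 0.7     # a purely "individual" rule, blind to the group

def offer_rate(group):
    members = [c for c in candidates if c["group"] == group]
    return sum(decide(c) for c in members) / len(members)

rate_a, rate_b = offer_rate("A"), offer_rate("B")
print("offer rate A:", round(rate_a, 3), "offer rate B:", round(rate_b, 3))
print("demographic parity gap:", round(abs(rate_a - rate_b), 3))
# With scores drawn identically, the gap is tiny; if the score distributions differed by
# group, the same individual-level rule would violate group fairness, which is exactly
# the tension between the two fairness notions discussed above.
```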
I had a few more things to add, but maybe the takeaway from this part is that, again, we have only very superficially tackled fairness in particular, because people are only now realizing that it is a scientific question. Of course, it is a socially very important question, but it is also a highly scientific question that can be tackled with the scientific method. Maybe just a side note: in some communities the topic is still not as highly regarded as, I don't know, proving some convergence speed of stochastic gradient descent on a convex loss function, and I believe this is not okay — for example, in the machine learning community, to disregard research on fairness as non-technical. First of all, it is a very highly technical question. And if we go back to the beginning of the podcast: the scholar who gave us the word "algorithm" was actually trying to improve law; he was a lawyer, and he was trying to make the law more rigorous, more transparent, and this is how we got algebra and algorithms. So working on fairness is extremely relevant for computer science, and it is an extremely interesting research topic. So if there are grad students watching this podcast, please do not disregard this topic. If you do, you are dismissing a research question that is highly technical, highly subtle, highly complex, and unfortunately it seems that even some very respected researchers disregard it, which is a bit sad and unfortunate. I hope it is just a generational problem: personally, I mostly did not experience this with younger researchers; it is more the older generation of computer scientists who did not look at these questions the way people do today. So maybe it is just a generational problem that will fix itself with time. There is a very good book written by Aaron Roth and Michael Kearns, The Ethical Algorithm. If my memory is good, Aaron Roth was a PhD student of Cynthia Dwork, the researcher to whom we owe differential privacy, so Aaron Roth also worked on differential privacy. Cynthia Dwork has a lot of very relevant research and a very broad portfolio of questions she has tackled: she worked in distributed computing, initially quite classical computer science, but she also gave us the formalism of differential privacy. And now, with researchers like Aaron Roth and others, there is a growing community around the Fairness, Accountability and Transparency conference. We could go on listing research; just look up the conference and its proceedings — I do not want to name a few people and leave out the others. But this is obviously a question where probabilistic thinking is very helpful: if you look at the statement of differential privacy, it is a probabilistic statement. I think there is a ZettaBytes video on differential privacy; you can look it up. And the same now applies to fairness: it is not something you define in a binary way, not something you can capture with first-order logic alone; it is something where probabilities are not a luxury but a necessity. So with that, I think we can wrap up. Good. We will see you next time. Bye.