Hello everyone. Today we will discuss the philosophical aspects of computing. We will base our discussion on a long essay by Professor Scott Aaronson, titled "Why Philosophers Should Care About Computational Complexity". We will start by bringing some context to this discussion, which is not necessarily in the paper: what computing actually is, and how the science of computing was born from a foundational philosophical question. Then we'll review some of the questions tackled by the paper, namely complexity and Turing tests — how to define complexity — and some new kinds of proofs of philosophical interest that computer science could inspire. So let's start with: what is computer science, and what is computing? If we look at the history of computing, back in the 1920s and especially the 1930s, people were asking a lot of questions about logic. It was the foundational crisis of logic. What do we mean by that? Simply that it wasn't clear back then what logic was. It seems like a very basic concept that Aristotle settled more than 2000 years ago. And Aristotle did say a few interesting things, but in the late 19th and early 20th centuries, people found out that what we meant by logic was not clear at all, and that you could easily come up with paradoxes, the most famous of which is Russell's paradox, which essentially shows that it's hard to build a consistent theory of logic. Then in 1931 came Gödel, who showed that it is essentially impossible to build a really nice theory of logic — by really nice, we mean consistent, so there is no paradox, and also complete, so everything that is true has a proof. All of this was about true or false. But then in 1936 in particular — it started before that, but in 1936 in particular — two brilliant scholars named Alan Turing and Alonzo Church were working independently.
But each of them was asking not only whether something is true or false, or whether it has a proof or not — which are technically slightly different questions — but also: can we find the proof? Is there a procedure to find the proof? And if you think about it, this is arguably much more important, because if you can prove that something has a proof, but that proof cannot be found, that is quite deep. It's also much more practical, because it tells us about the limits of what can be done in practice. This was about logic and also about language — what can be said, what can be explained. Alonzo Church had a more language-based theory of logic, and Alan Turing a more procedural, mechanical approach to logic. But the two turned out to be equivalent, which is quite a miracle. Also important in this part of the discussion is one of the dogmas of mathematics, and probably of science in general, best summarized by Hilbert: "We must know, we will know." This dogma prevailed in different forms throughout the historical literature of science: if we don't know, we're just not good enough, and one day we will know. If we can't prove something, we're just not good enough; someday we'll prove it, or we'll prove that it has no solution — either way, it's just a matter of time. This is how Hilbert summarized it in his famous address to the mathematicians. And the real birth of modern computer science, as we know it, is a negative answer to this dogma, through the decision problem, which is very easy to state. To simplify: you have a question — can you decide what the answer to the question is? You have a conjecture — can you produce a proof that the conjecture is true or false?
So it's called the decision problem, and Alan Turing was trying to solve the decision problem — answering it in the negative. While doing that, he realized that a lot of questions cannot be decided at all, and that there is a subset of questions that we can define as the decidable ones. He then proved that a question can be decided if and only if it can be decided by a step-by-step procedure. It's just a thought procedure — there's nothing concrete in it — called the Turing machine. So the Turing machine is called a machine, but it's really a step-by-step thought process. All decidable questions can be decided by a Turing machine, and anything that cannot be decided by a Turing machine is not a decidable question. As Aaronson also says in the introduction of the paper, and as many people say in other contexts, technology — computers as we know them today — came just as a byproduct. I think it is not stated enough in curricula that the starting point of computer science is a philosophical one, and that the gadgets and the technology are a byproduct. This is also why this discussion should happen again. Another point, maybe a bit overlooked by Aaronson, that we mention in our book: if you look even further into the past, to the 9th century, you find the birth of algebra and of algorithms — two fields that, at least in their modern form, were born in the same book. The book that gave algebra its name, by the author whose name gave us the word algorithm, was motivated in part by the jurist's job of solving complex inheritance cases. And in that book, in many instances, you find the same word for judgment and for computing: computation and judgment are referred to with the same word, hisab, in Arabic.
It's telling that the day of judgment is called the day of computation in Arabic: yawm al-hisab — day of judgment, day of calculation. And this is not just an irony of the Arabic language; the very act of computing, in many examples throughout history, is motivated by a judgment, by a decision. So just like the decision problem that Turing wanted to solve, al-Khwarizmi — Latinized as Algoritmi — was trying to tackle the question of judgments in inheritance and how to make fair judgments. We can even go further into the past, all the way to the Babylonians, where calculi were the small stones used to compute, so that social interactions could be based on fair and transparent judgments. Now, one question raised by Aaronson in the paper is that there seems to be very little engagement today between computer scientists and philosophers. Maybe there is a social explanation for it: in the 1930s you had strong interactions between people like Russell, Gödel, Church, von Neumann, and so on. Then World War II happened, and we had to push the engineering part of computer science. Post-World War II, computer science was kept in this engineering bubble, with theory happening inside it but very little engagement with the external world of philosophy and the natural sciences. So computing can be defined as the science of decision making, the science of processes — and what we will discuss later is computational complexity, which is a subfield of computer science. As much as the founding question was: could we decide, could we answer a question or not?
The theory of complexity that was born out of that is again a science of processes, where you ask questions like: how many steps does it take to solve the question, how many steps to decide the answer, how many things do you have to remember before deciding, how many examples do you have to see before you learn the answer, or how many characters do you need to describe an object? And I think Louis has a very interesting anecdote about this last aspect of complexity. Yes — the thing you refer to is the famous question of measuring the length of the coast of Britain. When doing this, what was observed is that depending on the accuracy with which you trace the line of the coast, you get extremely different answers for the length. If you measure with an accuracy of kilometers, you observe a fairly smooth curve. But if you use meters, or even millimeters, for drawing the curve, then you realize that the coastline is extremely chaotic, extremely complex, going every which way — somehow it has a fractal behavior — and its length comes out very different from what you measure at kilometer accuracy. We can insist a little more on the importance of computation. An algorithm is just a sequence of steps to achieve some task. And one of the most important algorithms that we apply to advance knowledge is the scientific method. If you think about it, method and algorithm are near-synonyms: it's all about having a step-by-step procedure to achieve a goal, which here would be advancing scientific knowledge. And it's somewhat disappointing how rarely these two worlds come together — how rarely people think of the scientific method as an algorithm to be optimized like any other algorithm.
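The coastline anecdote above is easy to reproduce. Here is a minimal sketch, using the Koch curve as a made-up stand-in for a real coastline (no geographic data involved): the more coarsely we sample the curve, the shorter it measures, and refining the ruler keeps increasing the length.

```python
import math

def koch(points, depth):
    """Refine a polyline by replacing each segment with the 4-segment Koch motif."""
    if depth == 0:
        return points
    out = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = (x1 - x0) / 3, (y1 - y0) / 3
        # the "bump" vertex, rotated 60 degrees off the segment
        px = x0 + 1.5 * dx - math.sqrt(3) / 2 * dy
        py = y0 + 1.5 * dy + math.sqrt(3) / 2 * dx
        out += [(x0, y0), (x0 + dx, y0 + dy), (px, py), (x0 + 2 * dx, y0 + 2 * dy)]
    out.append(points[-1])
    return koch(out, depth - 1)

def length(points, step=1):
    """Measured length when we only sample every `step`-th vertex (a coarser ruler)."""
    pts = points[::step] + ([points[-1]] if (len(points) - 1) % step else [])
    return sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))

coast = koch([(0.0, 0.0), (1.0, 0.0)], 6)  # 4^6 = 4096 tiny segments
for step in (1024, 64, 4, 1):
    print(f"ruler step {step:4d}: measured length {length(coast, step):.3f}")
```

Each refinement multiplies the true length by 4/3, so the "coast" measures about 1.33 with the coarsest ruler and about 5.62 at full resolution — same curve, wildly different lengths.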
And yet there are a lot of very important questions you can ask, because once you frame the scientific method as an algorithm, you can ask all the questions we usually ask about algorithms, the most important being, of course: does it achieve what we want it to achieve? For the scientific method, if you state it in a certain way — if you take, for instance, the p-value framework — you can actually ask: will it always validate true theories and refute false ones? Well, if you analyze it, the answer is no, not at all. It actually has quite bad properties. But there are other interesting questions you can ask. One of them is the so-called sample complexity — that's what we call it in computer science: the number of data points you need to come to a good conclusion. And this is extremely relevant if you think of what we talked about a few episodes ago — multi-armed bandits and adaptive clinical trials. That is really a matter of sample complexity: we want to come quickly to a good decision. But there are other questions, in terms of the number of steps. Sometimes in mathematics you have a solution to a problem — for instance, if you ask me to find a prime number bigger than the largest currently known prime, then I know a method that will succeed for sure. I can write down the method, and it's just computation. The problem is that the number of computation steps this basic algorithm needs is exponential in the number of digits. So it's not tractable in practice. And this is a huge limit we need to take into account when thinking about the scientific method: the complexity limit of algorithms. So it's good that you mentioned the scientific method as an algorithm.
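The "guaranteed but slow" prime-finding method mentioned above can be sketched as plain trial division: a correct procedure that provably terminates, but whose running time is roughly √n steps, i.e. exponential in the number of digits of n. (This is an illustrative toy — record primes are of course found with very different methods.)

```python
def is_prime(n: int) -> bool:
    """Trial division: ~sqrt(n) steps, i.e. exponential in the digit count of n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def next_prime(n: int) -> int:
    """Guaranteed to terminate: Bertrand's postulate puts a prime in (n, 2n)."""
    candidate = n + 1
    while not is_prime(candidate):
        candidate += 1
    return candidate

print(next_prime(1_000_000))  # 1000003
```

Fine for a million; hopeless for numbers with millions of digits, exactly because the step count blows up exponentially with the input length.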
And that actually brings us to another question. I think part of why there is not enough engagement between philosophers, epistemologists and computer scientists is that both communities don't realize how broad the definition of algorithm is. I have this feeling that computer scientists don't realize how precious the epistemological toolbox they have at hand is, how general it can be, and how much it could be used to talk about other objects that are, in the end, algorithms. Modern law is an algorithm: people prefer to be judged by an algorithm — written law — rather than by whatever is happening in the mood of some random village elder. Law is an algorithm; the scientific method is an algorithm that we follow to establish theories, rule out other theories, or adjust our credence in theories. By looking at the scientific method as an algorithm, I believe maybe our generation could be part of a new revival of epistemology, where we bring some of the things developed in computing and the theory of computing and use them to actually improve the scientific method — make it more computationally aware, more algorithmic, more aware of its own limits in terms of resources: time complexity, sample complexity as you mentioned, memory complexity. And just to illustrate again how broad the notion of algorithm is, and thus how broad the science that studies it is, there is the famous quote by Dijkstra: computer science is no more about computers than astronomy is about telescopes. Telescopes are merely a tool used by astronomers; they are not the primary object of study. But the analogy does help a bit to understand what's happening in computer science. There's a process, this process is hopefully going to an end, to a result, and we want to study: is the process feasible? Could it terminate?
Is the input we gave it enough for it to terminate, et cetera? By considering these things, you can understand why the epistemological part of computing is really overlooked. There is now a growing effort among computer scientists to raise awareness of this. For example Leslie Valiant, the 2010 Turing Award winner, famously argues that computing is a natural science — that computing can be perceived as a natural science studying processes that happen in nature. Many systems in nature can be examined through a computational lens. Darwinian evolution, for instance, is a wonderful process to study under an algorithmic lens, and that's an important part of Leslie Valiant's work, as well as other people's. Learning in the brain is now increasingly seen through computational lenses, and there is also the whole science of complex networks and social interactions — all of these are processes that could benefit from incorporating the computational perspective into their methodology. And not only the technological toolbox: when you think of social scientists interacting with computer scientists — and I actually wrote a paper with a social scientist on this question — the first thing that comes to the mind of most social scientists I've talked to is: we will get some computer science person, and they will code for us, crunch the data, and draw us very nice graphs. They systematically think of the technological toolbox and almost never of the epistemological toolbox. So we decided to write a paper explaining why social scientists should also care about complexity theory. Maybe just to conclude this very long introduction and contextualization:
of all the explanations for why this engagement is not happening well enough, maybe another one is that there are not enough career incentives for young computer scientists to engage in this discussion. I don't think the essay Scott Aaronson wrote helped his tenure case much — it's not as good for your CV as, I don't know, a paper at STOC or FOCS. So there are not enough career incentives, and as the Oxford philosopher Nick Bostrom jokes, some of the most pressing and neglected foundational questions are always left as a discussion between retired physics professors. It's always risky for a young person to engage with them, because it costs time and it doesn't help their career. And actually, one of the cornerstones of our project, this channel, and everything we do in this group, is to take part in this effort to strengthen the bridges between computing and epistemology. So, Louis, what do you want to tell us about complexity and the Turing test? Yeah, thanks for the introduction. So this paper changed my mind a little, because I was very focused on the notion of computability — whether a problem has a solution, an algorithm — and I wasn't thinking enough about complexity. First of all, the introduction of the paper explains why complexity is extremely relevant. What's important to know first is the distinction between what are called polynomial-time algorithms and exponential-time algorithms. To see the difference: on an input of size 1000, a cubic-time polynomial algorithm would need about one billion iterations, and today's computers can do one billion iterations in about one second.
But if the complexity of the algorithm is 2^n, then for the same input of size 1000, the time needed corresponds to more steps than there are atoms in the whole universe. So the question of whether there is an algorithm to solve a problem at all is very interesting, but a more interesting — more practical — question is whether there is a polynomial-time algorithm, one that we will actually be able to run and see finish. If there is only an exponential-time algorithm, then in practice it's about the same as having no algorithm at all. With this in mind, Aaronson also points out that complexity theory is very relevant for all kinds of tests we want to run on algorithms. One of the most famous tests for an algorithm is what we call the Turing test: can an algorithm successfully imitate human behavior, human conversation? Simply from the statement of the problem, we can come up with a trivial solution: for whatever test you design, we could imagine a huge database that contains the answers to all the questions you might ask in the test. Simply by looking things up in this huge database, a very simplistic algorithm could pass your test. So where is the interesting question here? We've said that there is a simple algorithm that passes the Turing test just by looking through a huge database of answers. But what we would like is a bounded solution to the problem: an algorithm that is quite small — not much bigger than a human brain — and also quite fast. We don't want an algorithm that takes billions of years before being able to answer each question of the Turing test. So, yes, is there an algorithm that passes the Turing test?
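As an aside, the arithmetic behind the polynomial-versus-exponential contrast is easy to check. A quick sketch, assuming a machine doing 10^9 elementary steps per second and taking the usual rough estimate of ~10^80 atoms in the observable universe:

```python
n = 1000
steps_per_second = 10**9  # roughly, a modern machine

poly_steps = n**3  # a cubic-time algorithm
exp_steps = 2**n   # an exponential-time algorithm

print(f"n^3 at n={n}: {poly_steps:,} steps -> "
      f"{poly_steps // steps_per_second} second(s) at 10^9 steps/s")
print(f"2^n at n={n}: about 10^{len(str(exp_steps)) - 1} steps "
      f"(vs ~10^80 atoms in the observable universe)")
```

The cubic algorithm finishes in about a second; the exponential one needs about 10^301 steps, hundreds of orders of magnitude beyond any physical resource.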
Obviously yes, such algorithms exist — but is there a fast and small one? That is the more relevant question. Yeah. What's really nice is that you bring up the Turing test, because there is a lot of debate about whether the Turing test is actually an important or relevant test. If you only look at it through the lens of computability, the answer is no — it's not that complicated a problem, as I just explained. But as soon as you add the prism of complexity, in particular memory complexity and time complexity, it becomes a really interesting problem, which is still open. Before claiming that a computer cannot pass the Turing test, you would have to show that there is no algorithm of small size able to pass it in a small amount of time — a difficult but very interesting question. Yeah. Another example where complexity matters is proving mathematical theorems. We said in the introduction that whether a statement has a proof or not is an undecidable problem: there are statements for which we cannot know whether they have a proof. But one question that is computable, and very relevant too, is: is there a proof of that theorem with fewer than one million steps? For us, it's nearly the same question, because proofs longer than one million steps are not extremely interesting, and we don't care much about the few theorems whose proofs are that long. This question is decidable: there is an algorithm that simply enumerates all proofs with fewer than one million steps and terminates with the answer. But this is where the notion of decidability alone falls short, because although the problem is decidable, the algorithm we just proposed would take billions of years to run — it has exponential complexity.
There would be an exponential number of proofs to explore. The interesting question here is: can we build a polynomial-time, fast algorithm that answers whether there is a proof of size less than one million for a given statement? Yeah, I think complexity theory makes these problems a lot more interesting. The computability version is still interesting, but it's limited, especially in terms of applications, and also in terms of what you can think about. It's not only about what you can do on a computer; it's also about what you can do in your brain. If you have a fast algorithm, maybe you can run it in your head and solve the problem yourself, which adds a lot to what you can do. There are many examples of this. We can mention a third one: games, especially the game of Go, or chess. If you forget about complexity theory, these problems are extremely easy — it would be crazy to think of them as AI problems, because you can write an algorithm of 10, maybe 100 lines of code that solves them. How? It would simply test all the possible games that can be played, and backtrack to see which moves are always winning. So what makes these problems challenging has nothing to do with computability per se; it has to do with computational complexity: can we solve the problem in a reasonable amount of time? This makes the problems a lot more interesting, and it is too often neglected in discussions about what brains can do, what computers can do, and so on. We usually think about what they can do in the absolute.
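The "ten lines of code" claim is literally true for small games. Here is a sketch for a toy subtraction game (a made-up example: take 1 to 3 stones per turn, whoever takes the last stone wins), solved perfectly by exhaustive backward induction — the same exhaustive idea that becomes hopeless for Go, where the number of positions dwarfs the number of atoms in the universe.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(stones: int) -> bool:
    """True if the player to move wins with perfect play (take 1-3, last take wins)."""
    # Exhaustive search: try every move; you win if some move leaves the opponent losing.
    return any(wins(stones - take) is False
               for take in (1, 2, 3) if take <= stones)

# The losing positions turn out to be exactly the multiples of 4.
print([n for n in range(13) if not wins(n)])  # [0, 4, 8, 12]
```

A dozen lines solve this game completely; the difficulty of Go lies entirely in the size of the search space, not in the algorithm.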
And if we forget about the complexity part, we're missing a big chunk of a very interesting story. Coming back to the scientific method example: the natural sciences — chemistry, biology, medicine — are actually full of problems that you could in principle solve by brute force, just trying all the possibilities. But no one in medicine is fool enough to try all the possibilities, because most of them would kill the patients. You can think of the patients as a finite resource: there's a limited number of patients you can access to try a drug, so you can't try everything on everyone. You can think of patients as memory, and the time it takes to test them and look at the results as time; then you have the two key ingredients, memory complexity and time complexity. The scientific method is an eternal struggle with time complexity and memory complexity — no one has the resources to do an exhaustive search over the list of all possible molecules you could ingest. To move on to the logical omniscience problem, which is my personal favorite part of this paper and which I found very enlightening: it all has to do with one of my favorite quotes by Alan Turing, the great man, which opens Aaronson's article. If you don't mind, I'm going to read it: "The view that machines cannot give rise to surprises is due, I believe" — this is Alan Turing talking — "to a fallacy to which philosophers and mathematicians are particularly subject." So if you've never thought about this, you're probably subject to this fallacy, at least according to Turing. "This is the assumption that as soon as a fact is presented to a mind all consequences of that fact spring into the mind simultaneously with it. It is a very useful assumption under many circumstances, but one too easily forgets that it is false."
And I think this is the heart of complexity theory. Historically, this was Turing's answer to what he called the Lady Lovelace objection to the possibility of machines being intelligent, because Ada Lovelace had this idea — and it's a very common idea — that machines just do what we tell them to do, so they cannot surprise us: they are purely mechanical, purely deterministic, we know everything about them, so we know what they are going to do. The small bit that's missing in this reasoning, striking as it is, is complexity: the problem of predicting what a machine will be doing after one billion computation steps is extremely hard, because it's not clear at all that there is a shortcut letting you predict the result without carrying out all the computation steps yourself. I would just like to bring some justice here for Ada Lovelace, to put things into context. As reasonable as it was for Ada Lovelace to make that claim, it is completely irrelevant for someone today to apply it to today's machines. Ada Lovelace was a very brilliant mathematician in the 19th century, and she was working on the computing machines of her time. Those machines missed the key ingredient we have today, which is complexity: machines with which you do additions, multiplications I think as well, solving problems that are very tractable. So you knew what would happen: I gave the machine this number, I gave it this other number, and I know the machine will do the subtraction, for example.
So there was nothing to be surprised by. And I think it is important to keep Ada Lovelace's context in mind: she was talking about that kind of machine — very basic 19th-century machines for which everything is tractable. So it is not very fair to use that quote against today's machines. That's also why Turing calls it a fallacy: his epoch was another epoch, and we were starting to see the beginning of complex machines whose results we cannot foresee. Of course, Ada Lovelace also could not foresee the exact results, but she knew what kind of result to expect: another number, whose nature would be, say, the subtraction of the two inputs. I don't know the exact number, but I know its nature. With Turing, we started to foresee machines that process operations so complex that we cannot even foresee the nature of the results. Yeah — and I don't mean to criticize Ada Lovelace. No, no, I'm not actually criticizing anyone. I'm just pointing out that Ada Lovelace's belief was very well grounded in the 19th century, but today it survives in our minds as an orphan belief — an orphan belief inherited from the 19th century, when machines were tractable. Yeah. What I strongly suspect, though, is that in Turing's time it was extremely hard to foresee what Turing saw. Turing was extremely brilliant — I'm probably gushing at this point, but the man really was brilliant — and he had the imagination to think of machines radically different from the ones he was constructing himself. And just to add some more context, since you say Turing is great:
if there is a chemist or a biologist watching us — until very recently, the most cited paper of Turing was "The Chemical Basis of Morphogenesis", where his motivation was to understand what happens in biology such that complex patterns and embryos arise from single, identical cells. Just to illustrate the brilliance of Turing: he also foresaw that we could use the mindset of computing to think about complex systems such as those in biology. Yeah. Well, to be fair to Turing's contemporaries, Turing did have machines and played around a lot with them. In one of his papers, he says he was surprised by some of the results that came out of his machine. And that's something I think is extremely deep. For a lot of programs, you can say: well, this is trivial, because I know how to do this — it's just a sequence of computations that, if I sat down and carried out myself, I would eventually get to the result. There's this sense that you know the algorithm because you know the description of the algorithm. And what Turing is saying is that there's still a gap, because knowing an algorithm's code is not sufficient to know what it is going to do: if it has a lot of computations to do, those computations are impossible to predict unless you carry them out yourself. And that's the idea — the opposite of this is sometimes called logical omniscience. Logical omniscience is when, if you know the building blocks — the axioms, the starting basic knowledge — then you know all the consequences of that knowledge. For instance, if you know that A is true and that B is true, then we usually assume that you know that "A and B" is true. And this sounds obvious: of course, if A is true and B is true, then "A and B" is true.
But sometimes we haven't drawn this conclusion yet — it's still a computation that needs to be performed. And the really difficult part is when there are 10, 100, or 1000 of these computation steps: A implies B, and A is true, therefore B is true. Now imagine A and B and C, with a whole web of connections between the different variables. Computing all the consequences of this is still just computation, so in principle you should "know" them — but in practice you don't, because it's still a computation that needs to be performed at some point, and if you haven't done it, you don't know the consequences of the axioms. I find this extremely deep and insightful. For instance, think of the sequence of prime numbers, or the digits of pi. Do we know them? The digits of pi are a very famous example, because a lot of people will say that the digits of pi are random, that we know nothing about them. But we do know a lot about them: we know an algorithm that computes the digits of pi. So if you did not have complexity in mind, it would be meaningless to say that we know nothing of the digits of pi — we know exactly how to compute them; there is no surprise in the digits of pi. But that is forgetting complexity theory. As soon as you add complexity theory to the mix, you realize that, no, actually, I don't know the 1000th digit of pi: even though I could compute it, I don't know it. And I think this is the deep insight by Aaronson. So what he proposes is to somehow redefine what it means to know something. The things you know are the things for which, when you are asked the question, you can immediately give an answer; for all the rest, you will need to take some time before being able to answer.
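The "web of A implies B" point can be made concrete with a tiny forward-chaining sketch (the facts and rules below are hypothetical, just for illustration): deriving all the consequences of what you "know" is itself a computation, and each derived fact costs real steps.

```python
def forward_chain(facts: set[str], rules: list[tuple[frozenset[str], str]]) -> set[str]:
    """Repeatedly apply rules (premises -> conclusion) until nothing new is derived."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

# A hypothetical toy knowledge base.
rules = [
    (frozenset({"A"}), "B"),
    (frozenset({"B", "C"}), "D"),
    (frozenset({"D"}), "E"),
]
print(sorted(forward_chain({"A", "C"}, rules)))  # ['A', 'B', 'C', 'D', 'E']
```

With three rules this is instant; with thousands of interconnected variables, the consequences are all "there" in the axioms, yet extracting them is genuine computational work — which is exactly why logical omniscience fails in practice.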
So, for example, if I ask you whether 91 is a prime number, it might take you a few seconds, or maybe even a minute, to answer. And there are things for which you cannot give an answer at all, even after spending a hundred years trying, like: what is the next prime number after the largest prime number we currently know? Unless you work in the field and are an expert at looking for these primes, it will take you a huge amount of time to find one. So we are really redefining what it means to know something using complexity theory. We would say that you know something if you know a fast algorithm to compute it, so that you can answer a specific question within a reasonable amount of time. Yeah. This is a fantastic insight: to know is to know a fast algorithm. This is actually extremely deep. Think about it: it is not about knowing just the answer, because usually you don't know the answer to everything; you are still using an algorithm just to organize the knowledge in your brain. There is no instantaneous knowledge; there is always a computation that leads to it, even if the knowledge is already in your brain. Then there is the question of how fast this algorithm is, which is critical: in a sense, there are different levels of knowledge, and the faster the algorithm, the better you know what you are talking about. So, do you know what a molecule of benzene, combined with a few milligrams of fluid from someone who has the coronavirus, plus a few grams of iron, would give? This is an example where I don't even know an algorithm to answer your question, other than to just mix them and wait. Yeah. So, hopefully, we'd like a fast algorithm that tells us what would happen without running the chemical reaction.
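As a concrete aside (not in the original conversation), the 91 question boils down to trial division, an algorithm you can run by hand in about a minute:

```python
def is_prime(n):
    """Trial division: test every candidate divisor up to sqrt(n)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

print(is_prime(91))  # False: 91 = 7 * 13
print(is_prime(97))  # True
```

Trial division is fast enough for 91, but hopeless at the scale of record primes, which is exactly the difference in "levels of knowledge" being discussed.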
Yes, it's the same: I have an algorithm to generate the 1000th digit of pi, but I don't have a fast algorithm. I have an algorithm to follow, which is very painful, and after billions of operations I could find that 1000th digit. Likewise, do I know the reaction of benzene with the coronavirus fluid and a few grams of iron? The fastest thing I know of is to just mix them and wait for the chemical reaction to happen. And that's why I think Aaronson's definition of knowledge is brilliant: real chemical knowledge is having a fast algorithm that tells you what would happen before you make the room explode. You know what will happen without running the chemistry. Yeah. But here it's a bit different, because you're talking about an algorithm that involves running experiments with instruments and so on, whereas what I think Aaronson has in mind is more this: all you have is a data structure, and you know an algorithm to extract information from this data structure. And still there's a difference, because it's not sufficient to know this algorithm; you actually need a fast algorithm. But yes, you can then extend this to things that are not just data structures in your brain but data structures in your environment as well, with different tools to retrieve data from that environment. And this becomes an extremely important problem, strongly connected to the scientific method. Yeah. To me, this chapter of the paper helped me better understand the difference between pure Bayesianism and practical Bayesianism: what sort of Bayesianism we can apply ourselves, and how to properly think like a Bayesian.
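The pi example the speakers keep returning to can be made concrete. Here is a minimal sketch (the function names are mine) that computes pi to any number of digits with Machin's formula and plain integer arithmetic: a perfectly well-known algorithm, yet still a real computation you must sit through.

```python
def pi_digits(n):
    """Digits of pi via Machin's formula, pi = 16*atan(1/5) - 4*atan(1/239),
    using scaled integer arithmetic (no floating point)."""
    scale = 10 ** (n + 10)              # 10 extra guard digits

    def arctan_inv(x):
        # arctan(1/x) * scale, via the alternating Taylor series.
        total = term = scale // x
        x2, k, sign = x * x, 3, -1
        while term:
            term //= x2                 # term = scale // x**k
            total += sign * term // k
            k += 2
            sign = -sign
        return total

    pi = 16 * arctan_inv(5) - 4 * arctan_inv(239)
    return pi // 10 ** 10               # drop the guard digits

print(pi_digits(20))  # 314159265358979323846
```

Knowing this code is not the same as knowing the 1000th digit: until the loop has actually ground through its billions of elementary operations, the digit remains unknown to you.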
So the formal definition of Bayesianism suffers from the problem of logical omniscience. If you tell a pure Bayesian the rules of chess, then by logical omniscience they already know whether white has a winning strategy: the probability is one or zero. Bayes' formula does not take into account at all the time it requires to compute. And also, when I started thinking more about Bayesianism: what's important is making observations and updating what you know based on those observations. But in the pragmatic version, where you include complexity, there is another thing you can do. You can make no observation at all, but stop and think, run algorithms in your head, and you will update your knowledge without making any observations, simply by doing more computation. So in some sense this computation can be seen as a specific type of observation that helps us update our knowledge; in any case, these computations are required, and there is no way around them. Yeah, this is one of the greatest insights of thinking about all of this. When people talk about Bayesianism, a lot of flavors get mixed together, and it's not really clear what exactly is meant. So I like to make the distinction with what I call pure Bayesianism, which is just following the rules of probability and nothing else. And this clarifies a lot of things, because within the pure Bayesian framework you can prove a lot of beautiful theorems and everything works out wonderfully. The only problem is complexity theory: applying Bayesianism even in simple settings usually takes exponential time, if not worse. And so Bayesianism cannot work in practice, because of complexity theory. And this, too, is a deep and fantastic insight. But then you can ask: what should we do in practice?
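The exponential blow-up just mentioned can be seen directly in a brute-force sketch (the setup and names are my own toy illustration, not from the paper): exact Bayesian updating over n binary "world variables" means summing over all 2**n joint states.

```python
from itertools import product

def posterior(n, likelihood, evidence):
    """Exact posterior P(state | evidence) under a uniform prior,
    by brute-force enumeration of every joint state."""
    states = list(product([0, 1], repeat=n))          # 2**n states
    weights = [likelihood(s, evidence) for s in states]
    z = sum(weights)                                  # normalizer: 2**n terms
    return {s: w / z for s, w in zip(states, weights)}

# Toy evidence: "at least k of the n variables are on".
lik = lambda s, k: 1.0 if sum(s) >= k else 0.0
post = posterior(3, lik, 2)
print(post[(1, 1, 1)])  # 0.25 -- the four admissible states share the mass
```

At n = 3 this is instant; at n = 60 the same by-the-book update would need more than 10^18 terms, which is why practical Bayesianism must settle for approximations.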
And that's why you need to mix complexity theory and Bayesianism, to see what kind of approximate, fast, pseudo-Bayesian algorithms you can come up with. And yeah, this is a very open field. There are a lot of proposals, it's never clear which is best, and probably some combination of all of them is worth exploring. But really, when you ask how we should think, a pure Bayesian would say: well, just apply Bayes' rule. Now that we include complexity theory, you realize that it's actually not a simple problem anymore. It's a very complicated problem, but a very fascinating one. Take whatever suggestion is offered and add the question: at what cost? Someone says "just apply Bayes' rule"? Then ask her or him: for how long, how many times, how many observations, how many iterations, how many applications of Bayes' rule? Yeah. And I would love that, because I think that quite often when people talk about Bayesianism, these two concepts get mixed, and sometimes not well: what we would want to do if we had unlimited computational power, with no complexity constraints; and what we actually come up with given the complexity constraints, and what we can prove about it. And it's going to be pseudo-Bayesian, of course, because pure Bayesianism cannot be applied. I wish this were more often clarified. Another very interesting idea in this paper is the idea of a proof. I think the word "proof" is very interesting, and it's often abused, let's say; what we mean by a proof, especially outside of mathematics, is a bit complicated. But inside mathematics, for a long time, it seemed very clear what we mean by a proof: we have a logical framework, we have axioms, and a proof is a way to show that the axioms imply a certain theorem.
And usually you would write this proof in something like first-order logic; these were the inventions of the late 19th century. But what Aaronson shows in this paper — well, not Aaronson alone; there are lots of papers that he cites — is that you can think of proofs in many, many different ways. You can think of a proof as something that hugely increases the confidence that some statement has a proof, or is true. Or, as in cryptography, it's useful to think of a proof as something someone produces to show that they know something. This is typically the case when you authenticate yourself on a website: you are trying to prove to the website that you hold some password, some secret, that the person logging in is supposed to have. If you think of proofs this way, then you have all sorts of other kinds of proofs. You don't have to write the whole proof yourself: there are these so-called interactive proofs, which can be a lot more efficient at proving something. And even better, you can come up with proofs that have very interesting properties, for instance zero-knowledge proofs. You can think of a zero-knowledge proof as a sort of test where others verify that you know something without learning anything about what you know, which is a bit amusing. Using this, it's possible to prove that you know something while giving zero clue about what it is that you know, which is quite spectacular if you think about it. And it has a lot of applications, especially in cryptography, of course. Yeah, there are two ways that complexity theory intervenes here. For example, in the case you describe, it could be that the proof you came up with is extremely long and would take a lot of computation simply to transfer to the other person.
But that person can instead ask you a small number of yes-or-no questions to verify that you have correctly proved what you claim, and you answer these questions in only a few steps. Ten questions are enough to push the probability that you are lying below about one in a thousand: you would get all ten right by pure chance only with probability 2^-10, about 0.1%. With a hundred questions, it is essentially impossible to get them all right by luck. So this reduces the complexity of transferring a large proof to exchanging a small amount of data. And the second place where complexity theory and proofs are related is cryptography: cryptography can serve as a basis for proofs because the only algorithms we know to crack it are exponential and would take too much time. It is precisely this complexity-theoretic barrier that makes cryptographic keys suitable for proofs. Yeah. So even in interactive proofs, at least in some of them, what you are given is a yes-or-no challenge, and technically you could answer it without knowing the secret, just by doing a lot of computation — typically in the graph isomorphism protocol; I'm not going to go into the details. The point is: if you know the secret, you can compute the answer quickly, whereas if you didn't know it, it would take a long time. And this is how we leverage complexity theory to provide the zero-knowledge guarantee, which is pretty remarkable. Without complexity theory, you would not have this possibility. Yeah. And there are other quite remarkable things. You can apply this to something called probabilistically checkable proofs, which are proofs that can be verified by checking only 20 or 40 bits of the answer. So you can imagine that you have a proof that is supposed to be one billion pages long.
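The graph isomorphism protocol alluded to above is worth a sketch. Below is a hedged, toy version of the classic Goldreich–Micali–Wigderson zero-knowledge protocol (the graphs and names are my own illustration): the prover knows a secret permutation sigma with G1 = sigma(G0), each round a cheater passes with probability at most 1/2, and the verifier learns nothing about sigma.

```python
import random

def relabel(edges, perm):
    """Apply a vertex permutation to a set of undirected edges."""
    return {tuple(sorted((perm[u], perm[v]))) for (u, v) in edges}

def zk_round(g0, g1, sigma, n):
    # Prover: commit to H, a freshly randomized relabeling of G0.
    rho = list(range(n))
    random.shuffle(rho)
    h = relabel(g0, rho)
    # Verifier: coin flip -- "derive H from G0" or "derive H from G1".
    b = random.randint(0, 1)
    if b == 0:
        answer = rho                                  # H = rho(G0)
    else:
        inv = [0] * n                                 # sigma^{-1}
        for v in range(n):
            inv[sigma[v]] = v
        answer = [rho[inv[v]] for v in range(n)]      # H = (rho o sigma^-1)(G1)
    # Either single answer is a uniformly random permutation, so it
    # leaks nothing about sigma -- yet an honest prover always passes.
    return relabel(g0 if b == 0 else g1, answer) == h

n = 4
g0 = {(0, 1), (1, 2), (2, 3)}     # a path on 4 vertices
sigma = [2, 0, 3, 1]              # the prover's secret isomorphism
g1 = relabel(g0, sigma)
print(all(zk_round(g0, g1, sigma, n) for _ in range(100)))  # True
```

A cheating prover (one who knows no isomorphism) can prepare H to survive only one of the two possible challenges, so after k rounds the chance of fooling the verifier is 2^-k.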
With a probabilistically checkable proof, the verifier, instead of reading your entire one-billion-page proof, will just look at a few letters of it, chosen strategically, at random, in a smart way. Just by looking at only a few bits of information, with very, very low complexity, it will be convinced that your proof is indeed valid or not. This is really, really remarkable as well. I'll conclude with a personal anecdote. I mentioned earlier a paper I wrote with a professor of social sciences. I was excited about the project, and luckily I got to write this paper with a social scientist. We even did what social scientists love to do, which is fieldwork. Our fieldwork was on Kagglers: we looked at how Kagglers use data science to tackle questions that are of interest to social scientists, like housing-market questions and other very complex social questions. The part I was most involved in was making the case that if you want to leverage the technological toolbox of computing, you also have to understand its foundational toolbox — the complexity theory part, and so on — because that gives you insight into how to look at the problem itself, not only how to crunch data, and also how to look at the solution: is the solution too complex, is the problem too complex for the solution, does the solution require more observations than you can afford, and so on. And I told Lea that one of my motivations for collaborating on this project with social scientists was to raise awareness outside computer science that computer science is full of philosophical and epistemological aspects.
And Lea told me: maybe you should start by making this effort within computer science itself, because I don't think a lot of computer scientists are aware of this. I don't know what Lea would like to add on that, and why he said it. Well, yeah, I think it applies to many different people. What we've discussed is relevant to all sorts of people, but I do think there's a lack of promotion and understanding of these concepts overall. Maybe one thing I would like to add is that we've discussed mostly complexity theory, and Aaronson's paper is only about complexity theory. But there are plenty of other intersections between computer science and all sorts of interesting fields, and Aaronson mentions them early on in the paper. One of them is language. We think of language as belonging to linguistics, or sometimes literature, or something like this. But computer science has a lot to do with language. Take law, for instance. What's interesting is that law, before it was written, was oral, and the fact that it got written down means it was translated into a language, a code, that was then readable, shareable, transparent, analyzable. You can probe it, improve upon it, criticize it. This is really, really fundamental, and it's a core part of computer science and of designing a better computational system: you need to communicate with the computer, with your collaborators, and so on. And there is a second point, which is distributed computing. There are many interesting aspects of distributed computing — how do you minimize communication, for instance? — but let me mention my own PhD topic, which is Byzantine resilience. Byzantine resilience is critical as soon as you have complex systems that interact with one another. This is what's happening in economics; this is what's happening in law.
This is happening in all sorts of organizations. You need Byzantine resilience, meaning that if parts of your system or your institution fail — if some people have to quarantine because they have COVID — then your system needs to keep working, hopefully as well as before. Actually, I'm happy that you mentioned this part, and not just because of my PhD; I was also glad that Aaronson mentions distributed systems. Going back to the scientific method: remember, we said the scientific method is something that should be looked at as an algorithm. Looking at it as a distributed algorithm is even more relevant in today's context of controversies. There is something we don't want in distributed systems: for a distributed system to be described as robust, it should have no single point of failure. In a distributed system, a lot of parts are interacting, and if there is one critical part such that, if you removed it, the whole system fails, then that system is not robust, because it has a single point of failure. And one thing the scientific method implemented, to become a robust distributed algorithm, is getting rid of its single point of failure, which is the argument from authority. You can think of the argument from authority as a single point of failure, because if the authority is wrong, the whole system is wrong. We shouldn't follow someone's opinion just because she or he is a very famous virologist or medical doctor. So the scientific method already behaves as a robust distributed system, and it learned — only a few centuries ago — to get rid of the argument from authority. That's another thing you mentioned here: distributed systems and the scientific method.
So I just explained that arguments from authority are a single point of failure and make the scientific method weak and fragile, not robust; the modern scientific method does not rely on arguments from authority. And then I also mentioned law at the beginning, and it's the same. In the beginning, law relied on the wise: the wise old man of the village, or the wise old mother of the family. And this too is a single point of failure. You don't want judgments to depend on the mood of a single person, on something you cannot externalize, review, read, improve, and modify. You can't modify the brain of the village elder; you can't read their brain. But you can read the law, and you can vote on modifications of the law. So writing law down was also a process of getting rid of single points of failure, which were the authoritarian leaders or emperors. Yeah. And the last point I wanted to mention is privacy. Computer science is also the science of information: of what information is transferred, and of how to limit that transfer. It's important to have a more quantitative approach here, because privacy can be breached in many, many different ways. I've sometimes had discussions with jurists who argued that a system is private or is not private — a very binary viewpoint on privacy. But a lot of research has been going on about privacy and how to limit information, and the definitions of privacy are multiple and not binary. You can actually quantify it; differential privacy is the classical example. So there is an amount of privacy, and this is critical for organizing society: knowing what information is leaked, directly or indirectly.
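To make "an amount of privacy" concrete, here is a hedged sketch of the Laplace mechanism behind epsilon-differential privacy (this particular mechanism is my addition as an illustration, not something spelled out in the conversation): releasing a count plus Laplace noise scaled to sensitivity/epsilon bounds how much any one person's data can shift the output distribution.

```python
import math
import random

def laplace_noise(scale):
    # Inverse-transform sampling of a Laplace(0, scale) variable.
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def private_count(true_count, epsilon, sensitivity=1.0):
    """An epsilon-differentially private counting query: any single
    person changes the count by at most `sensitivity`, so adding
    Laplace(sensitivity/epsilon) noise limits what the output
    reveals about any individual."""
    return true_count + laplace_noise(sensitivity / epsilon)

random.seed(0)
# Smaller epsilon = stronger privacy = noisier answers.
print(round(private_count(1000, epsilon=1.0), 2))
print(round(private_count(1000, epsilon=0.01), 2))
```

Epsilon is exactly the "amount of privacy" the speaker refers to: a knob you can turn, not a yes/no property.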
And yeah, I guess this has generated a lot of debate, but it is just to show, again, the rich interface between computer science and so many different fields. Also, besides the differential-privacy definition you mentioned, which is not part of Aaronson's paper, something Aaronson wrote could also be applied to privacy: what is it to know? Do you have a fast algorithm to get the knowledge? If you have a procedure to learn some secret part of my private life that I don't want you to know, but the best algorithm for you to learn it runs in a million years, then you don't know that private part of me. So using complexity theory to define privacy has also been a very important part of the last four decades of computer security: we don't make things private, we make them hard to guess. Yeah, we make them such that guessing them, knowing them, takes a lot of time, a lot of memory. But now, as you mentioned, there is this more modern notion, only about 15 years old: differential privacy was born in 2006. It asks not only how much effort it would take, but how much of the information you could access — how much you could distinguish one person's data depending on whether it is included or not. Scott Aaronson, in the conclusion of the paper, says that he has been talking about lots of ways a philosopher can get inspired by computer scientists, specifically complexity theorists. But he also thinks he could write the same essay about ways in which computer scientists should get inspired by philosophers. And we've been saying since the beginning of the podcast that every field has something to gain from learning about the others. Thank you for joining us today.
Next week, we will discuss social media and polarization, specifically a surprising study in which showing opposing views to participants on social media increased polarization instead of decreasing it. It somehow shows the difficulty of making robustly beneficial choices: this really needs to be studied, and it is not at all obvious. We will discuss it in more detail next week. Bye.