There is all this discussion about what probability is, what randomness is. But I want to discuss it from a somewhat different angle. So I start by indicating the kind of club I don't belong to: I am not going to talk about the philosophy and psychology of randomness. There is a lot of discussion, partly by mathematicians, partly by philosophers, of what is random and what is not random, which is not focused on any specific question. Here, on the contrary, I want to discuss some very concrete questions, both in mathematics and outside mathematics, in applications. Still, I start with a few remarks on history, because this bears on the subject we shall discuss. There are three areas I will not discuss, where the concept of probability and randomness is, from my point of view, usually just meaningless. People say history is random, evolution is random; these are just words. There is absolutely nothing but an emotional weight attached to them, because there is no specific mathematical model behind them. My point is: whenever you speak of randomness, you have to have in mind a mathematical model, possibly very different from the traditional one, but it must be mathematics. Any speculative discussion without mathematics is, in my view, nonsensical. It may be a preparation for mathematics, but either you have experiments or you have mathematics; everything else, I don't know what it is. So, the subjects where mathematics is prominent and which we shall discuss: one is statistical mechanics, and a very similar, but in a way more subtle, thing is formal genetics, which is not so commonly treated this way. Then, how probability applies outside of probability proper. Probability is a subject I know very little about, but I know a little bit of how it works in other domains, like in geometry and in computer science.
Then there are applications where you apparently have to change the very concept of probability. One of these is molecular evolution, as opposed to the classical theory of evolution. They use similar words and apply to the same subject, but one is philosophy and the other is science; molecular evolution by now has become a true science. Then there is statistical analysis of natural languages, and learning mechanisms, specifically how you learn languages and mathematics. I am not concerned with how you learn simple things, which has been studied a lot, both mathematically and by psychologists; I mean how you learn languages, and also how you learn mathematics. The answer must be mathematical. It is not just words; it must be some specific algorithms. You have to indicate at least a direction for building specific algorithms based on statistical analysis of the data. And for that, of course, you have to understand mathematically what a natural language is. We come to that in a moment. Now, this was a kind of preparation, and there are two questions which go along with it: what is entropy, and is there a non-Shannon kind of information? Because people speak about information in biology, right? There is a flow of information in the cell; as people say, there is a flow of energy and a flow of information, and this really guides biologists. But mathematically we don't know what it is, and it is very different from Shannon's entropy. Of course, for that you have to analyze more carefully what Shannon's entropy is, and this we shall discuss at the beginning. Now, a few words of history. Strangely enough, unlike many other domains of science, it starts not with science, not with mathematics, but with gambling. It can be traced back very far: dice of this kind were found in Persia, about 5,000 years old.
And people then already understood, I think, more or less what we understand now. In modern times this was first described by Cardano. Some people speak about Pascal, but he was one of many, and I don't think he was the first. There is some writing of Galileo, published only later, which indicates he understood perfectly well everything that was done either by Pascal or written later by Huygens—though probably not the law of large numbers in full generality. If you read Cardano, he understood much more than you would believe, and he says many things you would never imagine. In particular, he analyzed the psychology of gambling. He was himself a gambler, and an incredible character: he was a great gambler, and he explained what is so good about gambling—psychologically, not mathematically. But this is a minor point. Then there is another point, which is more scientific, and it is not often realized how crucial it was for probability: Brownian motion. It is again an amusing history. Can you guess who first spoke of it, and when? [Audience guesses.] No—nobody is close to the truth. This is sometimes ascribed, as I said, to the man whose name it bears, Brown, who actually did not understand it. Amazingly enough, it was Lucretius who described it and who understood it—and he understood, in fact, many other things. What is interesting about him is that he was not really a scientist; much of what he wrote was common knowledge at that time. But how could that be? People argued about whether Brown could actually see Brownian motion. He was, of course, not the first to observe it, and his contribution was minuscule. His name is attached to Brownian motion, easy to remember, only because of the name.
Brown was a very good microscopist. He was a biologist; he studied the cell and made some discoveries about it. But this observation was a side matter, and almost everything written about him in this connection is wrong: he was not doing what he said he was doing, and he has only a tangential relation to Brownian motion. What is essential about Brownian motion is that it was the source of the actual acceptance of atomic theory in modern times, because it allowed one to measure, to compute with a sufficient degree of precision, the Avogadro number. And do you know who was responsible for that? It is ascribed to Einstein, and it is his most cited paper—not relativity. That was his major contribution to science, according to citations, because it is cited everywhere how you determine this number. The experiments were made a few years later by the great French experimentalist—I keep forgetting his name [Jean Perrin]—who used this equation, the Einstein–Smoluchowski equation, for computing the Avogadro number. Nowadays it is done quite differently, not like that. But again, interestingly, the mathematics was done before Einstein. Mathematicians usually call it the Wiener process, though that came some 50 years after it was actually done by Thiele. You probably know the history of the name of Bachelier too: his work was forgotten and then came back, but he was not the first—apparently it was Thiele. Nowadays on the internet you can find many interesting things; this I just found on the web. Maybe there are other sources, but this is all I found. And then, by the way: how could Lucretius do that?
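The Einstein–Smoluchowski connection mentioned here rests on the fact that for Brownian motion the mean squared displacement grows linearly in time, ⟨x²⟩ = 2Dt; measuring it gives the diffusion constant, and from there the Avogadro number. As a minimal illustrative sketch (toy units and parameters of my choosing, nothing to do with the actual historical experiment), a simulated random walk recovers its own diffusion constant:

```python
import random

def mean_squared_displacement(n_particles=2000, n_steps=500):
    """Run 1-D random walks with unit +/-1 steps and return the mean
    squared displacement after n_steps; for diffusion <x^2> = 2*D*t."""
    positions = [0.0] * n_particles
    for _ in range(n_steps):
        for i in range(n_particles):
            positions[i] += random.choice((-1.0, 1.0))
    return sum(x * x for x in positions) / n_particles

n_steps = 500
msd = mean_squared_displacement(n_steps=n_steps)
# For unit +/-1 steps <x^2> after t steps equals t, so D is 1/2 here.
D_estimate = msd / (2 * n_steps)
```

Perrin's procedure was conceptually the same inversion: observe displacements, fit the linear growth, solve for the one unknown constant.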
Apparently some people doubt whether Brown, or the man who observed it before him (I forget his name), had enough resolution in their microscopes to see it, because it is on the boundary of what you can see in a microscope: you have to see particles of a size of about one micron, and on them the effect of many atoms hitting them simultaneously, which can only marginally be seen in a microscope with ×1000 magnification. But how could Lucretius come to this idea? He could not see that. What could he see? He saw, of course, particles of dust in sunlight. And how they move has nothing to do with Brownian motion; it is just convection and turbulence of the air. However, he drew the right conclusion, and that is very typical—it is exactly like Darwin: what he was saying was right, but on the basis of sheer nonsense. For that he is considered a great scientist; if you look specifically at how he explains things, everything is wrong, yet in principle it is still right. So Lucretius understood evolution already, in a way, as well as Darwin did. That is another interesting point, but we shall come to it. Right—he articulated one point which is certainly opposite to what we might say today: that it is not numbers which are essential. In the 20th century part of mathematics was dominated by Grothendieck, and Grothendieck certainly would not accept numbers above two; I think two is the greatest number he would ever accept as a number. But of course numbers are a great thing, and that was fine. And then came a big step in the conceptualization of probability—do you know what it was?
It was Buffon. And again, historically it is amusing, because Buffon, as far as the theory of evolution is concerned, understood it, I think, much better, or at least as early as Darwin; but then he was rather stupidly rejected, starting from Darwin himself. I was recently looking at the history of that, and it is completely distorted. The problem with Buffon was that he would say all the right things, but then immediately add: no, no, I don't believe it, it doesn't agree with the Scripture. And so they say: oh, he didn't believe in evolution. In fact he had a very clear idea—not as detailed as Darwin's, because the geology was not quite ready, so for him it was more conjectural. And being a mathematician, he probably saw the flaws of the naive selection theory, which Darwin did not. Darwin actually saw them too, but he still believed it was true, and he was right. Now, the point is that the application of probability in physics, and in much of mathematics, depends on symmetry. It does not depend on any conception of chance whatsoever, and I will explain this in particular examples later on. So, mathematically, introducing probability means modifying the symmetries. You start with the permutation group—you have equivalent points—and then you linearize, passing to the full linear group, or the orthogonal group. Then the mathematics becomes more interesting and you can use it more efficiently. One of the big discoveries in probability in recent years, the Schramm–Loewner evolution, contains exactly the two crucial ingredients characteristic of probability. One is high symmetry. And secondly, the random objects one considers are not just random in some unspecified sense: they are parametrized by independent variables.
In this particular case, they are parametrized by Brownian processes, which are, in a way, maximally independent compatible with continuity. So, Buffon again: he was throwing his needle—actually, there is a folklore story that he really experimented, throwing a baguette on the floor and seeing how it landed. And that was a crucial point for probability, because before that everything was discrete. There were computations, done by great people like Galileo or Pascal, but they were kindergarten computations in a way, because they were not continuous—it was just counting. Here, for the first time, an integral formula appeared; Haar measure entered. This point of view was then taken over in an abstract context by Kolmogorov, who said: aha, that is the only probability which exists; any probability can be modeled in the following way. So I want to give the overall scheme of what probability is, in order then to move away from it. This is the logic of classical probability. You fix some universal measure space—say the square; the interval is too small, I know, and the square may be too much. Events are modeled by measurable subsets—whatever "measurable" means; nowadays we know measurability is a questionable concept, because it depends on the axiomatics. Anyway, events are subsets, and their area is their probability. However, if you look at this from the perspective of algebraic geometry, there is a fault in it, because it is exactly how André Weil was formalizing algebraic geometry: a universal field, absolutely ad hoc, having nothing to do with the objects you deal with—merely convenient to work with. And we know that this was dismissed within a decade or less by Grothendieck's approach.
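Buffon's needle, mentioned above as the first continuous probability computation, is easy to state precisely: a needle of length at most the line spacing crosses one of the parallel floor lines with probability 2·length/(π·spacing). A minimal Monte Carlo sketch (my own toy parameters) inverts the crossing frequency to estimate π:

```python
import math
import random

def buffon_estimate_pi(n_throws=200_000, needle=1.0, spacing=2.0):
    """Buffon's needle: with needle <= spacing, the crossing probability
    is 2*needle/(pi*spacing); count crossings and solve for pi."""
    hits = 0
    for _ in range(n_throws):
        x = random.uniform(0.0, spacing / 2)      # center-to-nearest-line distance
        theta = random.uniform(0.0, math.pi / 2)  # acute angle with the lines
        if x <= (needle / 2) * math.sin(theta):
            hits += 1
    return (2 * needle * n_throws) / (spacing * hits)
```

The integral over the angle is exactly where the "kindergarten counting" stops and the continuous measure enters.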
Again, from my own experience: when I was learning it, I took it to be so nice and natural, and when you see what Grothendieck does instead, it looks absurd. But it is exactly because Grothendieck was really seeing better than the students, and better than André Weil: his was really the right way. And what Kolmogorov did with probability is, in my view, a similarly naive application of set theory. By the time it was done—78 years ago—it was okay. Nowadays it looks stupidly naive, and you need to change it. How, I don't quite know. It is easy to criticize and hard to make things right: seeing the faults doesn't mean you know how to correct them. So I shall indicate a possible correction, but it will hardly be at the level of Grothendieck's approach. And then there is a quite different problem, just within mathematics, even just in the context of statistical mechanics: if you look carefully, you never honestly use probability the way it is described by Kolmogorov's measure theory. It just doesn't work. You always make some little turns here and there and just pretend you use it; then you start making computations, and how you make the computations does not depend on this background—up to a point; from some moment on it may. And then there are other domains, as I mentioned, like languages. Here there is the influence of Chomsky: a very powerful character, who shaped to a large extent the modern view of language. Mathematicians sometimes say that a language is just a measure on the set of words. This again, I think, is a super-naive point of view, wrong in many respects. It is wrong mathematically, because a language does not make a set. Actually, Chomsky and his school also say a very stupid thing: for example, that you can make infinitely many sentences in a language. That, again, is a completely absurd mixture.
This is essentially the point I was discussing: you mix the model and the object. Because the total number of sentences we could ever make, in the lifetime of the universe, is maybe 10^20—much less, of course; maybe 10^15 or 10^16. We don't have the time: the universe is very small, its time is very short. So it is not infinity; it is a rather small number. And this is essential for these structures. To understand them—and this also applies to what I mentioned before, biology, molecular evolution—the scale is essential. Numbers are essential, but they are finite numbers, and all your means must be adapted to them. In physics you can sometimes pretend numbers are infinite, and for good reason: because the symmetry groups are very large, not because the numbers are idealized. There are many ways to justify it; one is that the probabilities are very small. If you take a particular sentence, its probability will become something like 10^-20 or 10^-15. Take a sentence a dozen words long: the probability of it actually appearing will be about 10^-12, a small number. But in physics you have much smaller numbers. In statistical mechanics you have 10^26 particles, and each of them may be in, say, two states. So you have 2^(10^26) states, and the probability of one of them is 1 divided by this number. Maybe I'll write it down: it is a very small number—1 divided by 2 to the power 10^26. So the probability of a particular state of the system of particles in this room will be of this kind—maybe with 10^29 in the exponent for the room. A particular state of this room has this probability. Of course this is nonsense; however, you can still use statistical mechanics.
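The numbers just quoted cannot even be written down as ordinary floating-point values; the only workable representation is their logarithms. A small sketch, taking the lecture's toy figures at face value:

```python
import math

# N two-state particles give 2**N equiprobable configurations.  For
# N = 10**26 the probability of any single configuration underflows
# every floating-point format, but its logarithm is an ordinary number.
N = 10**26
log10_p_state = -N * math.log10(2)    # log10 of 2**(-N), about -3.01e25

# A 12-word sentence at roughly 1/10 probability per word (the toy
# estimate from the lecture) has probability about 10**-12:
log10_p_sentence = -12.0

# "Small" and "unimaginably small" are separated by ~10**25 orders of
# magnitude; only differences and ratios of such logs carry meaning.
gap_in_orders_of_magnitude = log10_p_sentence - log10_p_state
assert gap_in_orders_of_magnitude > 0
```

This is exactly why, as the next remark says, the individual numbers make no sense while their ratios still do.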
And the interesting point is that these numbers make no sense. You have this state—I don't know what it is—and its probability makes no sense. But the ratios of these probabilities still make sense. And that is a general point in mathematics: the objects, the numbers, may be meaningless in themselves; it is their relations that must be correct. You have to know how to manipulate them, and then you go on and on. But the starting point is the symmetry between the particles: all particles are essentially identical, and therefore you can speak about this identity without knowing what the objects are. This will be an essential part of many applications of probability that I will discuss at the end of my lectures. So there are two new ingredients which come to probability from the 20th century, which were not available to Kolmogorov. One is the categorical language. The second is nonstandard analysis. Actually, both of them are very close to what Boltzmann was saying and thinking: if you read Boltzmann and translate him into modern language, this is what you see. People were translating him, of course, in the 19th century and at the beginning of the 20th century, into the language available to them, and sometimes saying that what he said was not quite right, that the mathematics didn't fit—but the mathematics was not ready. Mathematics is now better adapted, and there may be something else still. In particular, there is the concept of entropy as Boltzmann had it in mind, the definition you find in elementary physics textbooks: entropy is the log of the number of states. That's it—no formula. When instead you write the formula, mathematicians make a kind of double mistake. You write: entropy is minus the sum of p_i log p_i, and you are very proud of giving the definition. In my view, this is a psychological phenomenon.
For the people in the audience who have never heard of entropy, you just say: ah, that's the formula. Why? Because I'm smart, I know the formula. No—that's nonsense. It is not the definition of entropy. It is a computational formula, usually ascribed to Boltzmann, and it is an extremely useful formula, but it is not a definition. The definition is: entropy is the log of the number of states. And how to go from this to that—to my understanding, you cannot do it unless you pass to a more sophisticated language; as I will explain today, you have to take a categorical point of view. In physics, a categorical point of view is present everywhere in physical reasoning, in naive physical reasoning; but when it was translated automatically into an archaic mathematical language, it became fossilized, it became a stereotype. The formula is a great formula—but do you know who actually wrote it first? It was not Boltzmann, it was Max Planck. There was a nice exchange between Max Planck and Boltzmann: Max Planck wrote this formula, and Boltzmann suggested the discreteness of energy. Quantization of energy was the idea of Boltzmann—yes, it's interesting. Boltzmann was obsessed with the idea of quantizing the world, of having discrete atoms, and he also believed energy was discrete. You can actually find this even on Wikipedia; it is standard knowledge nowadays—we have access to knowledge we didn't have before. So the alternatives, as we shall discuss, are the following. One is a Grothendieck-type description of entropy. This can be made a part of mathematics; I understand it well and will explain it today. Secondly, along the same lines, you define probability spaces also in the spirit of Grothendieck.
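The two formulations just contrasted—entropy as log of the number of states, versus the computational formula −Σ pᵢ log pᵢ—agree on uniform distributions, and the formula is invariant under any re-enumeration of the states, which is the symmetry insisted on throughout. A minimal sketch:

```python
import math
from itertools import permutations

def shannon_entropy(p):
    """The computational formula H(p) = -sum_i p_i log p_i (natural log)."""
    return -sum(x * math.log(x) for x in p if x > 0)

# On a uniform distribution the formula reduces to the textbook
# definition Boltzmann had in mind: log of the number of states.
n = 8
assert math.isclose(shannon_entropy([1.0 / n] * n), math.log(n))

# The formula depends only on the multiset of values, never on any
# enumeration of the states -- invariance under the permutation group.
p = [0.5, 0.25, 0.125, 0.125]
h = shannon_entropy(p)
assert all(math.isclose(shannon_entropy(q), h) for q in permutations(p))
```

What the formula adds to "log of the number of states" is only the weighting of non-equiprobable states; the categorical derivation of one from the other is what the lecture promises to explain.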
Because, you see, the problem is that every time, in a traditional application of probability—to turbulence, whatever—you say: ah, this is a random variable over some probability space. And you drag along this probability space, which is rather nonsensical—exactly as Weil was doing his algebraic geometry: you fix the universal field and you take points in this universal field. Nowadays, instead, there is the concept of the functor of points. It is a functor: you take a domain, and depending on the domain you have your points. The same in probability: depending on the domain, you have your points. This means you consider a functor from some simple category to the category of sets. And once you do that, measure theory, in my view, becomes extremely transparent. There is nothing to prove; everything—the Lebesgue integral and so on—becomes tautology, because the categorical language immediately tells you what you have to check, and the checking is usually trivial. But it gives an extremely nice and simple structure, and I will explain that. Then the other point, which I understand less, is large deviations. What is the right setting for large deviations, how it goes through nonstandard analysis and geometry—some of it I can explain, some of it I don't quite understand—both for classical entropy and for the quantum, von Neumann, entropy. As we shall see, there are two ways to come to this formula, and one of them immediately brings in von Neumann entropy. And this suggests a different linearization again. From my point of view this is attractive because it has some mathematical elegance, though it looks absurd physically: you replace measures by something linear, like homology—or rather, cohomology.
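The functorial slogan—work with maps between finite probability spaces rather than with one fixed universal space—can be given a toy finite sketch (my own construction, only illustrating the idea, not the lecture's actual formalism). Morphisms are maps of underlying sets, measures are pushed forward along them, and the familiar facts become one-line checks:

```python
import math
from collections import defaultdict

def entropy(p):
    """Entropy of a finite distribution given as {state: probability}."""
    return -sum(v * math.log(v) for v in p.values() if v > 0)

def pushforward(p, f):
    """Push a finite measure forward along a map of underlying sets;
    such maps serve as morphisms of a toy category of finite
    probability spaces."""
    q = defaultdict(float)
    for state, prob in p.items():
        q[f(state)] += prob
    return dict(q)

# Coarse-graining morphism: forget the second letter of each state.
p = {"aa": 0.25, "ab": 0.25, "ba": 0.25, "bb": 0.25}
q = pushforward(p, lambda s: s[0])

assert math.isclose(sum(q.values()), 1.0)   # morphisms preserve total mass
assert entropy(q) <= entropy(p) + 1e-12     # coarse-graining never gains entropy
```

The point of the categorical phrasing is visible even here: the statements to check are dictated by the maps, and each check is trivial.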
And another subject: when you try to apply classical probability to something unruly such as languages, learning, or even molecular biology, you see that you just cannot assign numbers as probabilities—you need something else. There are things like numbers, but not quite numbers. And that is more or less the last point. What I did—it is now cut and paste, very easy—is assemble this from some of my articles, which you can find on my website under these names, where there are more details. And now I want to go to entropy. But before that—again, I like quotations, because whenever you want to say something, you check, and you see it has already been said. As you know, one of the points of modern science is that you cannot understand nature by pure thought. And here is someone arguing exactly the opposite, that reality can be grasped by pure thought; up to a point he was right, but then he happened to be wrong, because quantum mechanics did not work that way. And in quantum mechanics it is unclear what reality is—that is again a very interesting point: you can make a mathematical model of reality, but what the hell is reality? And the last great man who said something about this, Alexander Grothendieck, said: to understand reality, you have to understand what zero is, so to speak. So let us try to go from the zero level and try to understand entropy. First, the physical language: we want to translate what entropy is into a Grothendieck kind of language. So let us just say what physicists say. They say: we have a system. Whatever it is, a mathematician will say, oh, it is a set, the set of all states. Why a set? Set theory is just a language; for certain purposes it is a very flexible language, but it is still not the only language.
It doesn't mean these are real states. When I said that each state has this probability—of course they are not real states. These numbers—even the real numbers of physics—have no direct physical meaning. Up to a point you play with them, but they are real numbers, not physical reality. Actually, there is no such thing as physical reality in this sense. So that is the physical preparation: you do not have these physical states. What physicists do is make experiments. They have a system—I prefer to speak about a crystal, but it may be a continuous system—and they make some measurements. A measurement is described by a protocol of how the measurement is made. And this, by the way, is typical for all experiments, and it usually disappears when mathematicians look at them: the protocol being used. Mathematically, the description of the results of experiments following a protocol is in the language of 2-categories. Of course, people who do applied mathematics don't like 2-categories, but this piece of structure is automatically used in that language. And that is another point: the categorical language, or even the 2-categorical language, is by far more primitive and simple than the usual language of mathematicians. The traditional language, developed from calculus, is a super-sophisticated thing. For example, if you want to establish a basic property of entropy, say the concavity or convexity of this function, in the traditional way it depends on knowing the derivative of the logarithm. But if you look categorically—if you use the right definition—the only thing you have to say is that it is the natural definition of the function, and then all the properties, including the basic properties of the logarithm, follow. That is kind of amazing.
You don't have to make computations. This again is in the spirit of Grothendieck: you don't make computations. It is the opposite of the usual advice in quantum mechanics—make computations, don't think in philosophical terms. But I think you have to think in mathematical terms, and this is what actually happens in the development of physics: people who study physics now think as mathematicians, though on a much higher level than the one I will be showing you. So, there is this machine, some equipment; you make some measurement, and what you observe is that, in a way, something happens or doesn't happen—something blinks on your screen or doesn't blink—and you just count how many things happen. From that, you want to define the entropy of some incomprehensible thing like a crystal, which has no states whatsoever from the physical point of view. So one thing you should avoid saying is that a physical system of so many atoms has some space of states, with so many states even in the discrete sense, whether they are black or white. There is a lot of confusion with that. One typical discussion, in the quantum context, is the paradox of Schrödinger's cat: the cat being secretly poisoned, and you don't know whether it is in the state of being dead or in the state of being alive. This is exactly the same confusion, because there is no such thing as a state. "State" is a mathematical word used to describe things, and the description is not adequate even in very simple situations, not to speak of quantum mechanics. However, how should one speak about it? Let me now explain, and this will be simple mathematics. Maybe I go one step ahead of myself: there are points one and two.
So there are two descriptions. One is functorial, which I am going to describe. The other is analytic, but analytic in a different spirit; this part I don't understand well—not just not well, I don't know it at all. It is the phenomenon that much of physics, at a highly sophisticated level, depends on taking certain particular integrals: you integrate some rather complicated algebraic expression, and things come out of that. By now there is a very developed theory of such integrals, and this is an instance of it. So this function is a kind of remarkable integral. From a certain point of view: the p_i are numbers such that their sum equals one, and therefore it is a function on a high-dimensional simplex. If the indices i belong to an index set I, you have the Euclidean space R^I. Here, by the way, I use the set-theoretic language deliberately, and I insist on it, because if you don't, you immediately run into a mess. I don't know, for example, what R^n means when n is a number. This notation is certainly incorrect, and people use it—what the hell does it mean? What is n? n is not a number here; usually it means a set, an n-element set. But there is typically an enumeration by numbers: traditionally you put numbers everywhere, even where there are no numbers. Set-theoretically, you can raise one set to the power of another set, and that is a perfect definition. Why do I insist on that? Because this preserves the symmetry—it is functorial—and the other does not; it lives in a completely different category. Here you have the category of sets I; there, at best, you can say it is the category of ordered sets. It is an okay category, but it is the wrong category, and it is exactly wrong in physics: you have so many particles, and they are not enumerated. You have these particles—it is not even truly a set.
But even if you accept that it is a set, it is not enumerated: the number of enumerations is the factorial of this number, which is even bigger than the number itself. So by enumerating you arbitrarily pick one choice out of that many—you introduce a structure which then overrides everything you do, and it produces a tremendous mess in the description. The point of the categorical language is that it is really much simpler, much shorter; and I shall explain later what the advantage is in this case. Now, what is the analytic point about these integrals? They are sometimes called period integrals. I am saying that this function is the simplest possible function on the n-simplex. In what sense the simplest? It has maximal possible symmetry. Of course, maximal possible symmetry would give the zero function or a constant function, so these we reject. Now, analytically, given a function on a simplex, you want to characterize it via its derivatives. The gradient is just a gradient; you cannot see much in it. The first place where a function shows its features is in its Hessian. So it is a table—I wanted to say matrix. And that, by the way, is another quite embarrassing thing in mathematics: what is a matrix? You are mathematicians—give a mathematical definition of a matrix. Can anybody? It's a function. But a function of what? No, not of n—there is no n here. It is a function on a set, on a product of two sets. And of course the Hessian is not really a matrix; the Hessian is a quadratic form, because it does not depend on n. It just says that for any pair of vectors you take the derivative in one direction and then in the other, and the result is quadratic. It merely happens to be written as a matrix. And as for saying a matrix is a function—okay,
But even that is not quite true, because very often a "matrix" is an array with unspecified entries — you say "a matrix with some entries" without saying what the entries are, so they are not functions on any particular domain. That is the tricky point. In a specific context you can say what you mean by a matrix, but in general, "matrix" confuses mathematics itself with the way you write it on a blackboard. This may sound like a joke, but the same issue is everywhere in mathematics. You cannot, for example, make sense of a sentence like "these two statements imply that one" unless you write it on the blackboard, because the blackboard tells you in which order things are written. If you have no a priori order in your head, coming either from the blackboard or from temporal order, you cannot make sense of the sentence. And, as we shall maybe see at the end, my lecture is essentially an attempt to model learning theory — how we learn things. As you know, small children simply have no such order in their heads; it is very difficult for them to keep one. And all mathematicians make mistakes reversing inequalities — confusing which side is greater. That is not accidental: the order is an artifact of mathematical notation; it is neither in our heads nor internal to the mathematical structures. Again it may look like a joke, but when we come to the end of my lecture you will see that without understanding this you cannot understand learning mechanisms. As you know, for sixty years, more or less, there was a tremendous failure with so-called artificial intelligence: claim after claim was made, with essentially no progress toward them. There was a lot of progress, of course — hardware, sophisticated software — but nothing close to what was expected. You cannot make any simple intelligent program; in the old expert systems you tell the machine exactly what to do.
And one reason is that we have a very wrong idea about how we think; the first fundamental mistake is that we think that we think. Okay, now, back to the Hessian. The Hessian is a quadratic form. What would be the simplest quadratic form? Think about a quadratic form as a metric — a Riemannian metric. What is the simplest Riemannian metric you can imagine compatible with the simplex? One of them, of course, is just the flat one you see. But another: think of the simplex as part of a sphere — a spherical simplex — and then you have the metric of constant curvature. Zero curvature you cannot have — that would come from a constant function — so this is the next simplest thing. That you can represent a metric of this kind as the Hessian of some function seems a priori absolutely unlikely: quadratic forms in n variables depend on about n(n+1)/2 parameters — roughly the square of the number of variables — while here we have only one function. So it is very unlikely that you can hit a target like that. However, entropy does it. So entropy is a miracle. And that, by the way, is another point about mathematics: in this context it always depends on miracles. It is not a logical science, unlike everything else. Against any common sense there shouldn't be such a function — if you asked a priori, of course there shouldn't — and yet here it is: entropy. The intrinsic symmetry of entropy is the orthogonal group, or rather the Lie algebra of infinitesimal motions of the sphere, and this is what automatically transplants to the quantum world. So this formula, written down by Planck, contains inside it the germ of quantum mechanics. It is rather amazing, and I don't understand why. Of course there are geometric computations, and physicists would say they understand it.
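The "miracle" — that the Hessian of entropy is the spherical, constant-curvature metric on the simplex — can be checked directly. A minimal sketch in Python (function names and the numerical check are mine, not the lecture's): restricted to tangent vectors of the simplex, the negative Hessian of H is the Fisher form sum v_i^2/p_i, and the substitution p_i = x_i^2 maps the simplex to the round sphere, where 4 * sum dx_i^2 reproduces the same form.

```python
import math
import random

def entropy(p):
    """Shannon entropy H(p) = -sum p_i log p_i."""
    return -sum(x * math.log(x) for x in p)

# Negative Hessian of H at p, as a quadratic form on simplex tangent
# vectors v (sum v_i = 0):  sum v_i^2 / p_i.  No enumeration is needed.
def neg_hessian_form(p, v):
    return sum(vi * vi / pi for pi, vi in zip(p, v))

# Map the simplex to the sphere by p_i = x_i^2 (so x_i = sqrt(p_i) and
# sum x_i^2 = 1).  A tangent vector v pushes forward to
# dx_i = v_i / (2 sqrt(p_i)); the round metric 4 * sum dx_i^2 then equals
# the negative entropy Hessian exactly.
random.seed(0)
p = [random.random() for _ in range(5)]
s = sum(p); p = [x / s for x in p]               # a point of the 4-simplex
v = [random.gauss(0, 1) for _ in range(5)]
m = sum(v) / len(v); v = [x - m for x in v]      # tangent: sum v_i = 0

sphere_form = 4 * sum((vi / (2 * math.sqrt(pi))) ** 2 for pi, vi in zip(p, v))
assert abs(neg_hessian_form(p, v) - sphere_form) < 1e-9
```

The equality is an algebraic identity, which is why the check passes at any point and for any tangent vector.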
But we shall see there are lots of questions to which we don't have mathematical answers. So there are two aspects of entropy, and now I make a little break. In the next lecture I will explain how you define entropy in the style of Grothendieck — in categorical language — without ever mentioning any numbers, or almost without. On the other hand it will be, in my view, understandable in principle to a child: you don't have to differentiate, you don't have to know what a logarithm is, nothing. You just have to know what stones are and what water is. Okay, so let's make a ten-minute break. Okay. So I now want to develop this categorical language, and I want to understand this sum. It is about these numbers p_i, and these numbers are weights of something — they are masses. I imagine — this is not necessary for the definition, but it is the picture — that they are not numbers: they are drops of water. The total amount of water is fixed and called one; it is just called one, the same amount everywhere. And what can you do with these drops? You can bring some of them together, and you get one bigger drop while the others don't change; or you can simultaneously bring two other groups together, getting slightly bigger drops, and so on. That looks rather obvious. However, mathematically, what do we have? We have this collection of atoms, or drops, whose weights are denoted p_i, and those weights are numbers — but the mental objects are the drops of water, not the numbers. You can represent them by numbers, and the point I am making is that in some other cases they are not numbers; here they happen to be numbers. Okay. And you have morphisms in this category, and they are called reductions. By the way — wait a little; this is a good word, but before I go on, you may have questions concerning my first lecture. I don't think there were questions.
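The drops-of-water picture can be sketched in a few lines of code — a toy model, with my own names, of a finite probability space as a dictionary of weighted atoms and of a reduction arrow as a map of atoms that merges drops, adding their weights:

```python
# A finite probability space: {atom: weight}, weights summing to 1.
# A reduction arrow is induced by a map f on atoms: drops sent to the
# same target merge, and their weights add.
def reduce_space(P, f):
    """Push the space P forward along the atom map f (the merging arrow)."""
    Q = {}
    for atom, weight in P.items():
        Q[f(atom)] = Q.get(f(atom), 0.0) + weight
    return Q

P = {"a": 0.2, "b": 0.3, "c": 0.5}
# Merge the drops a and b into one bigger drop; c is unchanged.
Q = reduce_space(P, lambda atom: "ab" if atom in ("a", "b") else atom)
assert abs(sum(Q.values()) - 1.0) < 1e-12   # total water is still one
```

The total weight is preserved by every reduction, which is exactly the "fixed total amount of water" in the picture.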
It was just general talk; now you may have questions. So you have these spaces P — finite probability spaces. That is what they are, but, again, categorically, up to some moment you don't care what they are; what you know is that there are arrows between these objects. So the objects are finite probability spaces — actually they don't have to be finite; they may be countable, with some complications — and there are arrows between them, and the arrows are exactly this merging process. Each object is given by its set of weights p_i, and under a morphism some of the atoms merge together, with their weights added. So there are objects with weights, and there are arrows, and the arithmetic — the addition of numbers — is already encoded in the arrows. That is the point of category theory: the arithmetic operation is expressed in the language of the arrows. And what is good about categories is that it is a universal language, and this universality is a source of strength everywhere in mathematics, including probability theory. Now, of course, this is a very special category. One can say: an arrow just means that the space P is, in a sense, greater than Q — that Q is a reduction of P — and instead of saying there is a morphism you could simply write P ≥ Q. So what is the advantage of categories? There are many, but one of them is evident, and it partly has nothing to do with the problem of how you distinguish the symbol ≥ from ≤ in your mind. There is such a concept as relative entropy, and relative entropy applies to a pair of spaces: you take this pair in this order, with this arrow, and then you have to write something like Entropy(P, Q) — some people write H, whatever letter; you need a one-letter name only when you start doing complicated computations, but we shall never make any computation: everything comes by itself.
So I can use a notation which is easy to remember. With Entropy(P, Q) you have three symbols: P, comma, Q — and you never know which is which. Actually, with relative entropy I myself am always lost about who is who, unless I write the arrow. However, if you use the arrow notation, you just say: entropy of f, and you have only one symbol. Moreover, categorically, when you have something defined on objects, you know how it automatically passes to morphisms. In a typical category you don't have to think: you don't have to define relative entropy separately — it comes to you from the language of category theory. Once you have the idea of absolute entropy, you automatically have relative entropy with all its properties. Sometimes you still have to prove things — that is another feature of categories: not that they give you the proofs, but they tell you what to prove, and then you may even prove it. And notationally you reduce three symbols to one, which is not so bad — your paper may become three times shorter, because papers usually carry a lot of notational junk. This happens with most mathematical exposition, for reasons hard to explain: most of what we write in a paper is junk, completely immaterial, brought in arbitrarily because we don't know how to say things well — we cannot express our ideas well. The best language we have, the categorical one, is not perfect, but it is certainly much better than set theory, and infinitely better than the analytic language, which is just too horrible — not really understandable, in some very specific, unclear sense. That again is an issue we shall come to later: what is learning, and what is communication of mathematics? How does it happen? Because it has little to do with logic, of course, but it should have something to do with probability, properly understood. So, after this little piece of propaganda: we have this category — call it P; I don't know what to call it.
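One natural reading of "entropy of the arrow" is H(f) = H(P) - H(Q) for a reduction f: P -> Q, and then the chain rule — relative entropy as the average entropy of the fibres — comes out automatically. A sketch under that assumption (all names mine):

```python
import math

def H(P):
    """Shannon entropy of a finite probability space {atom: weight}."""
    return -sum(w * math.log(w) for w in P.values() if w > 0)

def reduce_space(P, f):
    Q = {}
    for a, w in P.items():
        Q[f(a)] = Q.get(f(a), 0.0) + w
    return Q

def H_arrow(P, f):
    """Entropy of the reduction arrow f: P -> Q, read as H(P) - H(Q)."""
    return H(P) - H(reduce_space(P, f))

P = {("a", 0): 0.1, ("a", 1): 0.4, ("b", 0): 0.2, ("b", 1): 0.3}
f = lambda atom: atom[0]                  # forget the second coordinate
Q = reduce_space(P, f)                    # weights roughly 0.5 and 0.5

# Chain rule: H(P) - H(Q) equals the q-weighted entropy of the fibres.
fibre_term = sum(
    q * H({a: w / q for a, w in P.items() if f(a) == j})
    for j, q in Q.items()
)
assert abs(H_arrow(P, f) - fibre_term) < 1e-12
```

So the single symbol H(f) already carries what the three-symbol notation Entropy(P, Q) tries to say, with the direction of the arrow built in.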
This is the category whose objects are probability spaces and whose morphisms are reductions. What does it correspond to physically, by the way? It is very nice to have these various pictures in mind. One is the drops of water that you bring together. Another view of a reduction is the following. You have this probability space as a physical machine — a physical system. I don't know what a system is; I say "system" exactly to avoid saying it is a set or whatever. It is something you observe: you see flashes of light coming from it and you count them; inside there are finitely many windows, indexed by some set I, and there are frequencies with which something happens in each window. You normalize the frequencies and you have this bunch of numbers. But then you can put in a filter — attach to it some other device which depends only on what happens here — and you get another collection of windows, maybe fewer; the blinking there is determined by the blinking here. That is how you can think about a reduction. Later we shall see how, with that, you can define infinite measure systems, and how you can define the basic concepts of probability theory and almost automatically prove the basic statements — once you find the right formulation, the proving usually turns out to be easy. Okay. So that is our category, and now I want to define entropy. Now, a category has, a priori, nothing to do with numbers; it is just an abstract thing. And it is not so obvious what a category is and what the definition tells you, because you might say: it is a kind of graph — there are points and there are arrows, so it is some particular graph with arrows. But then there is, of course, the extra point that you have a rule of composition of arrows: there are certain distinguished triangles.
These distinguished triangles say that this arrow is the composition of those two arrows; and there are also distinguished loops, which are called identity morphisms. Again, it may look pedantic, but it is crucial for having the right structures: if what you describe does not quite fit into this setting, something you are doing is wrong. Sometimes category theory is insufficient, but mostly, if it does not fit, you are doing something wrong — it is an amazingly, amazingly adequate language for mathematics, and it is unclear why it works so well. But the point is: a category is not a graph. What you do with a category is very different from what you do with graphs — sometimes similar, but different, and it is hard to say exactly what it is. From that point of view, you cannot give a definition of categories. And this is another general principle: if you try to define a really general concept in mathematics, you usually say something stupid, because such concepts are non-definable — it seems to me. I never saw a meaningful definition of function, of category, even of set — only limited, context-bound definitions. You cannot define the language, but you can live without defining it, and that is itself an interesting point. Take the usual definition of a function of a variable: for each x you have a y, and these are real numbers. This is nonsensical; the only function of this definition is to bully the audience. Everybody in the audience understands that these are two different functions — this one here, that one there — but you are told it is one function. You can say it, but it is stupid: they are two different functions. So we don't really know what a function is. And maybe, in front of a big audience, one should admit one does not know it — although a professor is supposed to know.
But I think it must be realized that science, and mathematics in particular, is not so much about how much you know as about how much you don't know. What distinguishes a scientist from a person who knows nothing and understands nothing? Common people "understand" everything so well; a scientist lives in a state of full non-understanding of everything he meets, and that is okay: that is how you can make the next step, otherwise you are blocked. If you understand, you don't move. It is a non-equilibrium state. Anyway, we have this category, and now we want to define entropy. It is supposed to be a number — but it shouldn't be a number; in a category, where would numbers come from? So we use the following general construction, applicable to any category, which one can call the Grothendieck group, or rather the Grothendieck semigroup. The simplest version: you take the semigroup whose generators are the morphisms f — or rather their classes — and the basic relation is that if the composition g∘f equals h, then [f] + [g] = [h]. So it is the semigroup generated by symbols corresponding to arrows, subject to this single relation. Now I will say some words, and when I decipher them I will explain how they fit physics and how they fit mathematics. What I say now will not be literally true — I will correct it when I explain — but it is true in spirit. I claim that if you take the Grothendieck semigroup of our category of finite measure spaces, then it is canonically isomorphic — and this will have to be corrected, made more precise — to the multiplicative semigroup of real numbers greater than or equal to one. This is a theorem. And this is why numbers enter: when you take the log of this number, you have entropy.
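The single defining relation can be tested concretely: with H(f) = H(P) - H(Q) for a reduction f: P -> Q, the assignment [f] -> H(f) respects "if h = g∘f then [f] + [g] = [h]", and exp(H(f)) >= 1 lands everything in the multiplicative reals greater than or equal to one. A sketch under those assumptions (names mine):

```python
import math

def H(P):
    return -sum(w * math.log(w) for w in P.values() if w > 0)

def push(P, f):
    Q = {}
    for a, w in P.items():
        Q[f(a)] = Q.get(f(a), 0.0) + w
    return Q

def H_arrow(P, f):
    """Entropy assigned to the reduction arrow f out of P."""
    return H(P) - H(push(P, f))

P = {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.4}
f = lambda a: a // 2        # merge {0,1} and {2,3}
g = lambda b: 0             # merge everything into one drop
Q = push(P, f)

# Grothendieck relation: h = g∘f  implies  [f] + [g] = [h].
lhs = H_arrow(P, f) + H_arrow(Q, g)
rhs = H_arrow(P, lambda a: g(f(a)))
assert abs(lhs - rhs) < 1e-12

# Merging never increases entropy, so exp(H(f)) lies in [1, infinity).
assert math.exp(H_arrow(P, f)) >= 1.0
```

The additivity here is a telescoping identity; the real content of the theorem is that, after taking the right topology, nothing else survives in the semigroup.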
And this log applies, properly speaking, to morphisms, not necessarily to objects. When you have an object, there is a distinguished arrow — the one where all the drops come together — and when you apply the construction to an object, you apply it to this arrow. So the invariant is defined first for arrows, and only then for objects: relative entropy is in this context easier than absolute entropy. Then there is a theorem saying that this semigroup is an abelian semigroup, canonically isomorphic to the multiplicative semigroup of real numbers greater than or equal to one, and you take log. The log is justified because with it the formulas come out miraculously well; if you don't take it, they don't. The deep mathematical reason for this is, to me, completely unclear — but it is a theorem. And do you know who proved this theorem? You have never heard of this guy? It was Jacob Bernoulli, and the theorem is called the law of large numbers. If you properly interpret the law of large numbers, you arrive at exactly this conclusion — except for one point. Which point? It must be a topological Grothendieck semigroup. This is where geometry and analysis enter: you must work with topological categories. Besides the arrows, there is a concept of two spaces, or two morphisms, being close, and this needs explanation. You have to use the right topology. And I can never tell whether it is the weakest or the strongest topology for which things make sense — the one that is hardest to get; I never know which to call weakest and which strongest. That is exactly my problem with this science: I just never know. And from this, many things follow. So what is behind it? To get some feeling for the topology, I have to consider an example which for me is the source of understanding of all of this.
So, to get some picture of this category and of the theorem, let me look at one particular class of examples: the spaces P in which all the weights are equal, p_i = p_j for all i and j — so p_i = 1/n. What happens to our category, and what is the point of the theorem here? What are the arrows? Such a space is described just by a number: you have n atoms, all of weight 1/n, and you map it to another one with m atoms, all of weight 1/m. When does such an arrow exist? It means that n can be written as m times another integer — m divides n. So if you restrict this category to spaces with atoms of equal weight, what you get is just the divisibility structure of the integers. In the big category, on one hand, you saw I was adding numbers; on the other hand, here you see there is multiplication built in. So the whole of arithmetic sits inside this category, which is rather powerful — you don't lose what you have; it is always there. And the Grothendieck semigroup of this subcategory will be just the multiplicative semigroup of rational numbers greater than one: here you have integers, you normalize by common denominators, and what you get are the rationals. But if you do this in the general category, without putting in a topology, you get a huge, uncountable, horrible object: if you take different irrational numbers, linearly independent over the rationals, their classes in the Grothendieck semigroup will all be different, with no relations between them. You have to somehow glue things together, and you glue by topology. Now I want to bring in a geometric example which clarifies what happens, because so far this is abstract. But maybe first let me explain what the law of large numbers has to do with it — the theorem proved by Jacob Bernoulli in the early 1700s.
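The uniform-space subcategory can be written out in two lines — a sketch, with my own names: an arrow from the uniform n-point space onto the uniform m-point space exists exactly when m divides n, and that arrow carries entropy log n - log m, so exp(entropy) is the rational number n/m.

```python
import math

# Uniform spaces: all atoms have weight 1/n.  A reduction onto the uniform
# m-point space exists exactly when m divides n: each target atom of
# weight 1/m must absorb a whole number n/m of atoms of weight 1/n.
def has_reduction(n, m):
    return n % m == 0

# The arrow n -> m carries entropy log n - log m, so exp(entropy) = n/m:
# on this subcategory the Grothendieck semigroup is the multiplicative
# semigroup of rationals greater than 1.
def arrow_entropy(n, m):
    assert has_reduction(n, m)
    return math.log(n) - math.log(m)

assert has_reduction(12, 3) and not has_reduction(12, 5)
assert abs(math.exp(arrow_entropy(12, 3)) - 4.0) < 1e-12
```

This is the "multiplication table" inside the category: composition of arrows is multiplication of the ratios n/m.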
He actually worked at proving it for some twenty years before he proved it, and I don't know what his reasoning was; the statement had already been conjectured by Cardano. I don't know how Bernoulli proved it. The standard proof today uses, in effect, the Pythagorean theorem — of course it follows from the Pythagorean theorem, like many other things; the Pythagorean theorem is one of the greatest of all theorems. The law of large numbers is a more or less direct corollary of it, and you can see why it is the Pythagorean theorem: independent centered random variables are orthogonal vectors, so their variances add — that is Pythagoras in modern notation. Maybe I will prove it from the Pythagorean theorem later. So how do you get it, and what is the topology involved? If you live in this category of finite measure spaces, you observe the following operation: two spaces can be multiplied — you can take the product of two spaces. It is very simple: if one is built out of the p_i and the other out of the q_j, the product is built out of the numbers p_i·q_j. And because each total sum is one, the total of the products is again one — it is a perfect product. You can of course think of it as a matrix: the p_i on one side, the q_j on the other, and the products in the cells. Once you can do that, you can write P to the power n, meaning P multiplied by itself n times — which is, strictly, wrong notation, because the exponent should be a set and we put a number. Very often only the cardinality of the set matters, and then in truth you may write a number rather than the set. But when you start analyzing things more deeply, you see you cannot just write a number: you need the set, because transformations of the set induce operations on the power. I ran into this rather recently, solving a very specific question where you are completely lost if you put a number. But here I write the number.
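The product operation just described is one dictionary comprehension — a sketch, names mine: atoms of the product are pairs, weights multiply, and since each total is one, the product total is again one, so the result is again a probability space.

```python
# Product of two finite probability spaces {atom: weight}:
# atoms are pairs, weights multiply, totals multiply (1 * 1 = 1).
def product(P, Q):
    return {(a, b): p * q for a, p in P.items() for b, q in Q.items()}

coin = {"H": 0.5, "T": 0.5}
die = {1: 0.25, 2: 0.75}
PQ = product(coin, die)

assert len(PQ) == 4                          # the "matrix" of cells
assert abs(sum(PQ.values()) - 1.0) < 1e-12   # total mass is still one
```

Iterating this gives the power P^n of the text, with atoms the n-tuples — and the notational point stands: the comprehension ranges over the atom sets themselves, never over an arbitrary enumeration.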
So P^n means you multiply P by itself many times. You see what is wrong with that: I am using notation on the blackboard — I write a sequence on the blackboard, and if I had no order, I could not write it as a sequence. That is what is wrong. We want a mathematical description of mathematics free of the blackboard, and it is impossible — no, I am not joking about it; it is literally impossible. We make conventions all the time. You may say: we can verify mathematics by computers. But that depends on the rigid structure of computers, and sometimes they fail; it is a kind of great miracle that mathematics still stands. Excuse me — we don't need induction here. I don't know what induction is; for me, all this logic, induction included, is just empty words. I'm sorry, but for me "induction" is an empty word, because it again depends on how you write things on the blackboard. It is a language, and the language depends on how you write it down; I don't know what it is beyond that. We will come back to languages. I am saying that all traditional descriptions of mathematics are, in my view, greatly faulty, and that is the reason there is no model of mathematics other than the computer, and no model of understanding languages: we have an absolutely wrong perception, traditionally built into the development of numerical mathematics, and logic has distorted our perception of ourselves, of mathematics, and of languages — fully distorted. It is the same as speaking about the sun going around the earth and building a whole language to describe it. It is just wrong: the sun does not go around us. You can describe it that way, you can do lots of things with it, but it is the wrong language, the wrong description. It is easy to say it is wrong; it is hard to say what is right. Philosophically, there is no reason to believe who turns around whom — we around the sun, or the sun around us — unless we see other planets.
When we have many planets, we say it is more likely that we rotate around the sun. But unfortunately, for mathematics we don't have such extra planets to orient us. The incorrectness — or rather the inadequacy — of the standard description of mathematics is apparent all the same. That is why I say we have no induction: again, I don't know what induction is, and induction in mathematical logic is a greatly questionable thing — like all of mathematical logic, which is an interesting case. You might think mathematical logic is a very careful discipline where you never make mistakes, and that in logical texts great mathematicians never made mistakes. However, look at one of the founders of mathematical logic, Frege: he wrote his main book, and when it was sent to Russell, Russell found the contradiction in it — it became a symbol of how the whole edifice can collapse into sheer nonsense. Then Russell wrote his own text, and Gödel, reading Russell, said there was a mistake in every line — mistake after mistake after mistake. And I think what Gödel said was in turn wrong, wrong in a different way. On the rigor side, mathematical logic is absolutely pure fantasy. Of course, as mathematical theories — model theory, set theory — these are fine; they are mathematical theories and we accept them. But when logic says "I am the foundation, I am correct", it is nonsense: its own history shows it was always going wrong, because the claim is not reasonable — it is an illusion. In mathematics something else is at work, which of course naively uses logic, but you should not take it as the great foundation, any more than set theory. As naive set theory it is a beautiful language; when set theory goes to its next levels, within itself, it is a good science, but as a language it loses ground and is taken over by categories, which for some reason work better. But anyway: we have this product.
So now take the power P^n — my notation may be a little confusing — where the original space consisted of atoms p_i with i running from 1 to, say, k. Then P^n becomes a big space with k^n elements — a huge space — and the probability of each individual event becomes extremely small. On the other hand, this is exactly what you have to deal with in physics, where you observe event after event: you have this blinking light, and it blinks hundreds of times, so the probability of any particular configuration of blinks becomes something like 10^(-100) — incredibly small numbers, so you cannot make computations explicitly. For that reason you need a formula, and for that reason, without hesitation, Boltzmann wrote down his formula: physicists immediately invent formulas which are good for accounting, for producing definite results. Here in mathematics we don't have to do it prematurely. So what about this space P^n? The point is that, as I mentioned before, there are the very special spaces where all weights are equal, which are represented just by numbers and whose category corresponds to the multiplication table — just the multiplication table. And by the law of large numbers, the space P^n, in some correct sense, converges to such a homogeneous space: it is asymptotically homogeneous. When n is large, all the weights of the typical atoms become approximately equal — and this is exactly the law of large numbers. If you decipher this, you get the right topology, in which the definition I gave becomes correct. Approximately equal in what sense? You have just to remember what the law of large numbers tells you; in a second I will explain this. And the moment you have it, the definition acquires the following power.
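The asymptotic homogeneity is easy to see numerically — this is the asymptotic equipartition property: a typical atom of P^n has weight about exp(-n H(P)), so P^n looks like a nearly uniform space with roughly exp(n H(P)) atoms. A sampling check (names and the experiment are mine):

```python
import math
import random

random.seed(1)
p = [0.5, 0.3, 0.2]
H = -sum(x * math.log(x) for x in p)       # entropy of the base space P

# Draw one atom of P^n (an n-tuple of atoms of P) and look at its weight.
n = 100_000
atoms = random.choices(range(len(p)), weights=p, k=n)
log_weight = sum(math.log(p[i]) for i in atoms)

# Law of large numbers: -log(weight)/n converges to H, i.e. the typical
# weight is exp(-n H), the same for almost every atom you draw.
assert abs(-log_weight / n - H) < 0.02
```

The tolerance is generous: the standard deviation of the per-symbol surprisal here is well under 0.5, so the sample mean over 100,000 draws is within 0.02 of H with overwhelming probability.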
It tells you not only the abstract statement about the isomorphism, but also that if you consider, inside this big category, the very small subcategory whose objects are spaces with atoms of equal weight and whose morphisms are just products of numbers, then this small category is dense inside the big one. Therefore any property which is continuous with respect to the topology I want to describe — and most properties are obviously continuous — and which holds for these simple spaces, where morphisms become plain maps between sets and all the weights disappear, also holds for the general category. So many theorems become immediately apparent because of this density property. And it is slightly more than that: the Bernoulli theorem, if you look at its logic, tells you not only that the spaces are approximated, but that if you put in a morphism, the morphism too becomes approximated by simple multiplication of numbers. This I want to explain in an example: how you can derive from it rather non-trivial properties of geometric objects. For me this was somewhat surprising — I only then realized the power of entropy. Before, it was just a formula; I remember just learning the formula and swallowing it. From the genre of formulas — just a formula, completely meaningless, you know. And then, interestingly enough — and this is a problem of modern science — at some moment I needed a particular inequality about entropy, one that trivially follows from the standard ones, and I couldn't find it in the literature. I took all the textbooks on entropy, and none of them did it, because all of them only repeat what was written in one of the first books or articles — by Rokhlin, I think — copying and pasting, copying and pasting, copying and pasting. None of the authors ever tried to think about what entropy is: they believed the definition was right and just made copies. And this happens, by the way, very often in science.
You make copies — this is why I say history is useful: you can realize that what we have may be just a mistaken perception being passed along. Now, what is the example I want to consider? It is close to my heart; in geometry it is called the isoperimetric inequality. I am saying that the isoperimetric inequality follows from the functoriality of entropy — essentially just from the definition, from the law of large numbers. So what is the isoperimetric inequality? Let me state it in 3-space. It says: if you have a domain Ω, then the area of its boundary is at least a constant times the volume of Ω raised to the right exponent — in dimension 3, area(∂Ω) ≥ c·vol(Ω)^(2/3), with a universal constant depending on the dimension. For the moment I am not much concerned with the constant — though, as I realized, there is a problem with the sharp constant, and it is sometimes very important to have the right one. I am just saying: if you apply the logic of P^n being asymptotically homogeneous — the statement I made — then it implies this isoperimetric inequality. That is kind of amazing, because it looks like a geometric inequality; you would think you have to do some geometry. But amazingly, you need very little geometry — the law of large numbers takes care of the geometry, which I think is quite amazing. And it is a genuinely non-trivial inequality: already in dimension 3, all proofs require some idea. The trouble is that in dimension 2 the non-sharp form is easy: you have a closed curve, and by integrating you see the enclosed area is bounded by the square of the length — the area is less than the length squared, because you integrate twice. Of course the sharp inequality requires more effort.
But in high dimension, what might happen is that the domain has long narrow fingers which carry very little boundary area inside — who knows, maybe they could carry lots of volume. This does not happen, but it has to be proved; so there is something to prove. What I am discussing is the non-sharp inequality, and this is what is relevant for most of analysis: the so-called Sobolev inequalities are all trivial corollaries of this inequality. By the way, there is one inequality which does not follow immediately from them, called the log-Sobolev inequality; interestingly enough, what we shall prove with entropy will give the log-Sobolev inequality, which is even stronger — up to a constant, and the constant is another issue. By the way, nowadays there is no good proof — good meaning by formulas — of the sharp isoperimetric inequality in general. Amazingly enough, such a proof exists in dimension 2, it exists in dimension 4, and that's it: there is a formula from which the inequality with the sharp constant is apparent in dimensions 2 and 4 — not even in dimension 3, not to speak of other dimensions. And it is hard — perhaps impossible — even to formulate this correctly in high dimensions, because what do you mean by "formulas"? The formula is very simple in the cases of 2 and 4. For the non-sharp inequality, the proof I will give is not by formulas: it will follow from this functoriality. So let us prove it.
First, the point is that we reformulate it in very general terms, so that it becomes amenable to what we do. It will be the following thing. I have a measure space X, and I take X × X × X. In the example in question X is the real line; however, in the course of the proof I will have to change it, so it might be just a measure space. Later on I will explain why I don't like "measure space". The trouble with measure spaces is that there is no such thing: "measure space" is a contradiction in terms. A space with a measure is not a set; abstract measure spaces are objects of a category which is not the category of sets. And the expositions you find in textbooks on measure theory are, strictly speaking, not rigorous: they say "we identify things up to measure zero" and believe it works. To say it rigorously you would need the whole body of set theory, including sets bigger than the continuum, which you never actually use, and such expositions are usually full of mistakes. The same with the real numbers: nobody ever wrote down fully rigorous foundations of the real numbers; all known expositions have faults. We still believe it is okay, but when you speak about rigor you have to be careful.

Anyway, given that, you take a subset Ω in there and you project it. Put indices one, two, three; the factors do not have to be the same space, and that actually makes it no harder. You have three projections of the set: Ω₁₂ in X₁ × X₂, and similarly Ω₁₃ in X₁ × X₃ and Ω₂₃ in X₂ × X₃. And this is really the right setting for entropy. One way to think about entropy in a physical context, very close to what we consider, is as follows: we have a big space, a phase space, which is a product of spaces Xᵢ, maybe infinitely many; then we have some subset Ω, and then we project it. Observe that if I try to write this set-theoretically, as a subset of Xⁿ with all its indices, I can hardly even write it down; people sometimes do write it down in combinatorial expositions, and you can never read it afterwards. It is very essential that your space is a product indexed not by one, two, three but by an arbitrary set; otherwise you drown in double indices and get a horrible mess. Set-theoretic notation is still much more primitive than categorical notation here.

So you have these projections, and the theorem is about how the measures of these projections relate to the measure of the set; in a second we will see that it is really entropy which matters. The theorem says: if I take the measures of these sets, written with brackets so as not to be confusing, then |Ω₁₂| · |Ω₁₃| · |Ω₂₃| ≥ |Ω|². First I shall prove this and explain it, and then it has to become entropy; we change the inequality accordingly, and that will be the end of today's lecture. This is called the Loomis–Whitney theorem. On one hand it does not give you the sharp constant, but it is in a way stronger than the isoperimetric inequality. Coming back to the Euclidean space: you have Ω in R³ and its three coordinate projections to the planes R²₁₂, R²₁₃ and R²₂₃, giving three plane domains, and the theorem says the volume squared is bounded by the product of these three areas. This is stronger, because by the geometric–arithmetic mean inequality this product is smaller, with the proper normalization, than the corresponding power of the sum; therefore you bound the volume by the sum of the projection areas. And when you take a domain and project it, of course the projection is smaller than the boundary: area goes down under projection. So, up to a factor of three and with the right normalization, you see that the volume is bounded by the area of the boundary. It's again typical that you
don't have to write the formula: you see in principle the reason for the inequality, and then automatically you can write the formula down. Because formulas are shorter, people write formulas, and then of course they have to decipher them back. So this is stronger, modulo the geometric–arithmetic mean inequality; for certain configurations it is much better than the isoperimetric inequality. And all the geometry you use is about projections: area goes down under projection, so the area of the boundary is greater than the area of the projection, which is kind of obvious geometry. Still, this is the only geometric ingredient in the argument; the rest is the abstract theorem, and that is the Loomis–Whitney theorem. It is of course true in any dimension, not just three; only in dimension one is it meaningless. Interesting, right?

So how do you prove it? Let me just say two words, and then we come back to it next time. First you realize that you can take X to be a finite set, the subset a finite subset, and the statement becomes one about cardinalities; so it is really a combinatorial theorem. But that by itself doesn't help. I said measure, but I could equally say finite sets, with measure meaning cardinality. How do you prove it? You have these projections, and you naturally use what is called the Fubini theorem, or whatever: you evaluate the measure of Ω by integrating the heights of its fibers. You intersect with vertical lines over the projected domain, and you integrate these heights. So you have some equality; but the trouble is that the height is variable, so implicitly there is a function involved, and you have to control how it varies. But imagine the height were constant. Then it is very nice: the total volume equals the area of the projection times this height, and if you write this for all three projections, you immediately get your inequality; taking logarithms, it just becomes A + B = C or something. I will reproduce it next time, but if you do it yourself you will see immediately: if you assume that all the heights over the three projections, wherever they are non-zero, are equal (some may be zero, which is why you get an inequality rather than an equality; the zeros simply don't appear in the picture), then writing out these identities you immediately arrive at the inequality.

The trouble is, of course, that the heights are far from being equal: you take a domain, you project it, and all these fibers are different. But then I say: aha, now we apply the law of large numbers. We apply it not to your set Ω but to its Cartesian power Ωⁿ, where n is an infinitely large number. Here again it is very convenient to speak the language of non-standard analysis: you take a very, very big number, and then everything which is small compared to it disappears. Non-standard analysis is of course just a language, it doesn't have much depth, but it is very convenient. And what I claim is that when n becomes very large, everything becomes nearly constant; in particular these fiber heights over the projections become constant in the limit. So the picture reduces essentially to the one I described, where the statement is obvious. Again, I will explain this in detail next time; but it is a good exercise to think it through for yourself before listening to the detailed explanation: first, why the statement is obvious when all the heights are equal; and second, that if they are equal only up to epsilon, there is an epsilon error, and as n goes to infinity the error goes to zero, and you have the result. And one more conceptual point: you don't have to stick to one space; you can allow anything here. It is only the combinatorics of this product, how it is
organized, that matters; it is not the intrinsic geometry of the Euclidean space. However, interestingly, the corresponding sharp inequality in the Euclidean space is unknown, so I shall discuss that a little. There should be a similar inequality which is fully symmetric; but here the extremal configuration is a cube, or a rectangular solid, rather than a ball, and how to make an argument in which the ball comes out extremal is not known in full. There are partial results, and the proofs are rather sophisticated. Though there are very closely related results: a certain reformulation of this, also quite powerful, is true for the Euclidean space. OK, that's it for today. Next time I shall repeat more or less what I started about this category, and explain in detail the formulas which I have suppressed so far.
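The Loomis–Whitney inequality from this lecture, |Ω₁₂| · |Ω₁₃| · |Ω₂₃| ≥ |Ω|², can be sanity-checked in the finite-set setting mentioned above, where measure means cardinality. Here is a minimal sketch; the function and variable names are mine, not from the lecture:

```python
import itertools
import random

def loomis_whitney_holds(omega):
    """Check |Omega|^2 <= |Omega_12| * |Omega_13| * |Omega_23| for a finite
    subset omega of a product X1 x X2 x X3, with measure = cardinality."""
    p12 = {(x, y) for (x, y, z) in omega}  # projection to X1 x X2
    p13 = {(x, z) for (x, y, z) in omega}  # projection to X1 x X3
    p23 = {(y, z) for (x, y, z) in omega}  # projection to X2 x X3
    return len(omega) ** 2 <= len(p12) * len(p13) * len(p23)

# Random subsets of a 6 x 6 x 6 grid: the inequality always holds.
random.seed(0)
grid = list(itertools.product(range(6), repeat=3))
for _ in range(1000):
    omega = set(random.sample(grid, random.randint(1, len(grid))))
    assert loomis_whitney_holds(omega)

# Equality for a box A x B x C: both sides equal (|A| |B| |C|)^2.
box = set(itertools.product(range(2), range(3), range(4)))
assert len(box) ** 2 == (2 * 3) * (2 * 4) * (3 * 4)
```

The box case, where the fiber heights are constant, also illustrates the remark that the extremal configuration here is a rectangular solid rather than a ball.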