Hello. So, this talk is about a quasilinear parabolic equation with a random force. I regard this quasilinear parabolic equation as an ordinary differential equation in a functional space; everybody with minimal experience in PDEs understands what I am saying. So: a quasilinear parabolic equation in a functional space. The linear part of the equation is, say, the Laplacian minus one; the nonlinearity is something which may destroy part of the linear structure, it does not matter now. And now the force. The force is random; it depends on the random parameter omega. And I simply decompose it: I take the basis phi_j, the basis of eigenfunctions of the operator L, and I decompose the force eta in this basis. Eta equals the sum of b_j eta_j(omega, t) phi_j, the sum from 1 to M, where M is finite or infinite. And the force is regular: the quantity sum of b_j squared is finite, and the derivatives are bounded, so the kicks are bounded. One has to work, one has to prove estimates on the trajectories, in order to show that this equation is mixing. But why do we want mixing? For many important equations of mathematical physics, uniqueness of the statistical limit cannot be established by soft arguments. If the equation mixes, then whatever initial data you start from, in distribution the solution always converges to the same limit; without mixing, different initial data could, in principle, produce different limits. That was the motivation, or at least one of the motivations, I do not remember.
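A compact restatement of the kick-force decomposition just described (the exact indexing of the coefficients eta_{jk} is my guess at the board notation; the symbols b_j, phi_j, M are as introduced above):

```latex
% Kick force: the equation is forced by random kicks, each decomposed
% in the eigenbasis (\varphi_j) of the operator L:
\eta(\omega, t) \;=\; \sum_{j=1}^{M} b_j\, \eta_j(\omega, t)\, \varphi_j,
\qquad 1 \le M \le \infty,
% with the regularity assumption that the coefficients are square-summable:
\sum_{j=1}^{M} b_j^2 \;<\; \infty .
```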
I consider the force I introduced above, with the right-hand side restricted to the segment [r-1, r], where, of course, r is a natural number. Naturally, as usual, these are independent, identically distributed random processes on the real line, bounded, of course, because of this condition, right? But note: this is the process restricted to the first segment of length 1, the second segment of length 1, the third segment of length 1, and I also assume that these restrictions are independent and identically distributed, right? For every omega, each of them belongs to the space E. What is the space E, where the force sits? The space E is the L2 space, defined on the segment [0, 1], valued in the space H_M. And the space H_M: ah, H is my space, H is my Hilbert space, and the space H_M is the subspace of H generated, spanned, by the first M basis vectors, right? Because if M is finite, then for any fixed t, eta(t) sits in the space H_M, not in all of H. So this is where the force lives. Now, this assumption, this restriction, allows me to move forward in time in steps of length 1: first segment, second segment, third segment, fourth segment. What does it mean? It means that I consider the map S, which maps the phase space times the space where the force sits back to the phase space, right? And this map is the following. I take initial data, I take a force eta_1 (this is the force on the first segment), and I send them to the solution at time 1, right? So even if u_0 was a constant, u(1), of course, is a random variable, a random variable in the space H. And I simply have to iterate this, because, of course, u(2) is nothing but the map S applied to u(1) and applied to the kick number 2. So to study the long-time behaviour of solutions of my equation, I essentially have to study iterations of the map S.
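The one-step map and its iteration, as just described, can be written out as follows (the notation E = L^2(0,1; H_M) is as on the board):

```latex
% One-step map: solving the equation over one unit time interval defines
S : H \times E \to H, \qquad S(u_0, \eta_1) = u(1),
% where E = L^2(0,1;\, H_M) is the space in which each kick lives, and
u(k) \;=\; S\bigl(u(k-1),\, \eta_k\bigr), \qquad k \ge 1,
% so the long-time behaviour reduces to iterating S with i.i.d. kicks.
```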
A very important observation; it is almost a tautology. First, notation. I will denote by u_k(v) the trajectory of the following system, which I call system (S): I simply start to iterate the map S, so u_{r+1} equals S applied to u_r and to the kick number r+1, for every r bigger than or equal to zero, with u_0 = v. When I iterate the map S, I recover the family of solutions of my equation at integer times. At integer times. So u_k(v) is the solution at time k issued from the initial point v. Now the observation, the almost-tautology, is the following. The law of the solution at time k+1 is determined by the law of the solution at time k: I take the solution, I kick it, and the law of what I obtain depends only on the law of what I had and on the law of the kick. In other words, the evolution of the measure depends not on the specific choice of eta_1, eta_2, eta_3, but only on the law, on the distribution, of every kick, and on the fact that they are independent. Very much in line with what Andrei Agrachev was talking about. Now I impose a number of properties on the equation and, in essence, on the force.
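The "almost tautology" can be stated in one line; the symbol P* for the transition operator on measures is my notation, not the speaker's:

```latex
% Since the kicks are i.i.d., the laws of the trajectory u_k(v) of
% system (S) evolve by a Markov transition:
\mathcal{D}\bigl(u_{k+1}(v)\bigr) \;=\; P^{*}\, \mathcal{D}\bigl(u_k(v)\bigr),
% where P^* depends only on the law \ell of a single kick, and not on
% the particular realizations \eta_1, \eta_2, \dots
```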
These properties exist in weaker and stronger versions, but let me state them. Besides the phase space H there is a second space, a stronger space V, which is continuously and compactly embedded into the phase space H. For the example, the phase space H is H^1 and the space V is H^2; H^2 gives the slightly more delicate result. Property B1 is smoothness: the map S is smooth, and in the example it is in fact analytic. This is B1. Property B2 (and this is always true for quasilinear parabolic equations) is also not at all restrictive. It is stability of zero, or simply dissipativity. Property B2 consists of two points. Point (a): the norm of S(u, eta) is bounded by gamma times the norm of u plus a constant times the norm of eta, where gamma smaller than 1 is a constant. Point (b): with zero force, the norm of S(u, 0) is bounded by gamma times the norm of u. One of them implies the other, but I prefer to have both of them on the blackboard. This is B2. Finally, property B3 is the trick, so to say, which makes the whole construction work. Properties B3 and B3' show the close relations between this problem and optimal control. Property number 3 exists in two forms, the weak form B3' and the strong form B3. In the strong form: the differential of the map eta to S(u, eta), that is, D_eta S(u, eta), has dense image. This is approximate controllability for the linearized equation. It is not a very restrictive assumption; this is the linearization machinery that Agrachev was talking about. But, alas, this density typically degenerates as M goes to infinity: as M grows, the property deteriorates.
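The standing assumptions, written out in one place (the precise form of B2, with a single constant C in front of the kick norm, is my reading of the board; the spaces H and V are as above):

```latex
% (B1) smoothness:  S : H \times E \to V is smooth (analytic in the example);
% (B2) dissipativity, with a constant \gamma < 1:
\|S(u,\eta)\|_H \;\le\; \gamma\, \|u\|_H + C\, \|\eta\|_E,
\qquad \|S(u,0)\|_H \;\le\; \gamma\, \|u\|_H;
% (B3) controllability of the linearization (strong form):
\operatorname{Im}\, D_\eta S(u,\eta) \ \text{is dense in } H .
```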
In the strong form this property may fail; what remains true is the weaker form, B3': for almost every u in H there exists a measurable subset of the space E, of full measure, of measure one, on which the density holds, and this works for almost every u and for almost every eta with respect to the law of the kick. And this fact, just this set of measure one, turns out to be enough. Now let us define R-star: R-star equals C times B divided by 1 minus gamma. Now, it is a childish exercise that the ball of radius R-star is invariant for the system (S): if in the system I take v from this ball, and eta bounded as above, then S(v, eta) again lies in the ball of radius R-star. It is very easy to see. But on this ball, by property B1, if v belongs to the ball of radius R-star and eta is bounded, then S(v, eta) lies in a compact subset of V. It means that in fact our trajectory stays not only in the ball B_H(R-star) in the space H, but also in a ball B_V(K) of some radius K in the space V. This set is not closed, so let us consider its closure. Our trajectory stays in this closure; let us call this closure X. And X is compact in H. So in fact our system lives in this X. This looks like a lot, but the equation is parabolic; this is parabolic, not dispersive, dynamics. So from now on we consider only the restriction of our system to this compact set X. It is very easy to verify, with standard estimates, that the trajectories indeed stay in X; it is very easy, I will not talk about it. And one more assumption on the kicks: for every epsilon bigger than zero, the probability that the norm of a kick is smaller than epsilon is positive, a positive constant, because zero belongs to the support of the law of the kicks; we will use this at the very end.
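The invariant radius and the compact set, in formulas (here B is the bound on the kick norms and C, gamma come from B2; the arithmetic of the invariance check is spelled out):

```latex
% The invariant radius:
R_* \;=\; \frac{C\,B}{1-\gamma}, \qquad
\|v\|_H \le R_* \ \Rightarrow\
\|S(v,\eta)\|_H \;\le\; \gamma R_* + C B \;=\; R_*,
% and by parabolic smoothing (B1) the trajectory in fact stays in
X \;=\; \overline{B_V(K)} \ \subset\ H, \qquad X \ \text{compact in } H .
```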
So, with this positive constant at hand: now the main theorem. If we take two initial data u_0 and u_0' in our compact set X, the theorem says the following. I launch the solution with initial data u_0, and I launch the solution with initial data u_0'. These give me two measures, two laws, on the space X, and I can measure the distance between them; the distance which works here is the dual-Lipschitz distance, which was defined last time. And the dual-Lipschitz distance between the two laws decays exponentially quickly: it is bounded by C kappa^k times the distance between the initial points u_0 and u_0', for some kappa smaller than one. The famous corollary of this main theorem, which I will not prove, but let me say that it is not a very difficult exercise, and it is written in our paper and in other places, is the following. There exists a stationary measure mu-star, and it is unique; it is a measure supported in my compact set X; and moreover the system is exponentially mixing: for any initial measure, the law of the solution converges to mu-star exponentially fast. Traditionally, the most efficient way to prove this exponential mixing is by first proving my theorem. This is how people usually prove this. It shows that, in terms of distributions, the solution of our equation forgets the past exponentially fast. This is postulated in quite a lot of physical theories; as I said, for example, in theories of turbulence: even in the most rigorous texts, on page five of the introduction, they will mention this and then proceed. Right. So this is the theorem. Now, the most interesting part is to explain how the theorem is proved. The proof is based on the beautiful idea of Doeblin. This is called the method of two equations. You see, this is a dynamical system, a system in the space X. Let us double our space.
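The theorem and its corollary, as stated, in formulas (the star on the norm marks the dual-Lipschitz distance on measures over X; kappa < 1):

```latex
% Main theorem:
\bigl\| \mathcal{D}\bigl(u_k(u_0)\bigr) - \mathcal{D}\bigl(u_k(u_0')\bigr) \bigr\|_{L}^{*}
\;\le\; C\, \kappa^{k}\, \|u_0 - u_0'\|_{H},
\qquad u_0,\, u_0' \in X,\ \ \kappa < 1.
% Corollary (exponential mixing): there is a unique stationary measure
% \mu_* supported in X, and \|\mathcal{D}(u_k(u_0)) - \mu_*\|_{L}^{*} \le C' \kappa^k.
```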
Let us consider a system in the space X times X. Let us consider in this system a trajectory (u_k, v_k). This trajectory is a solution of the following system of equations, equation number three. The pair (u_k, v_k) is obtained from (u_{k-1}, v_{k-1}) in the following way. I take the map S, the same map S, and apply it to the initial data: u_k is S of u_{k-1} and eta_k (I will explain in a second who eta_k is), and v_k is S of v_{k-1} and eta-prime_k. Who are eta_k and eta-prime_k? The pair (eta_k, eta-prime_k) is such that the law of eta_k equals l and the law of eta-prime_k also equals l. So these two random variables form some coupling for the pair of measures (l, l). And this coupling depends only on u and v, only on u_{k-1} and v_{k-1}, okay. So this eta_k is a function of omega depending only on (u_{k-1}, v_{k-1}), right, and eta-prime_k also depends only on omega and on (u_{k-1}, v_{k-1}), right. We consider two trajectories with two initial data, but we have coupled them. This is the method of coupling. This was an absolutely brilliant idea of Doeblin. Then the lemma, which was explained last time (and which, by the way, is very easy to prove): yes, the laws are the same. The law of u_k equals the law of u_k(u_0), and the law of v_k equals the law of u_k(u_0'), you see. That is to say, to prove this convergence, I can replace here u_k(u_0) by u_k and u_k(u_0') by v_k. Now what am I doing? I will try to choose the coupling in such a way that the convergence becomes obvious. You see, when Doeblin invented this, it was a real miracle, because he considered an easy equation, and indeed he immediately produced a coupling for which the convergence became obvious.
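The doubled system, equation (3), written out (the dependence of the coupling on the current pair is the key point):

```latex
% The coupled system in X \times X:
u_k = S\bigl(u_{k-1},\, \eta_k\bigr), \qquad
v_k = S\bigl(v_{k-1},\, \eta'_k\bigr),
% where (\eta_k, \eta'_k) is ANY coupling of the pair (\ell, \ell):
\mathcal{D}(\eta_k) = \mathcal{D}(\eta'_k) = \ell, \qquad
(\eta_k, \eta'_k) = (\eta_k, \eta'_k)(\omega;\, u_{k-1}, v_{k-1}),
% and then (the "almost tautology"):
\mathcal{D}(u_k) = \mathcal{D}\bigl(u_k(u_0)\bigr), \qquad
\mathcal{D}(v_k) = \mathcal{D}\bigl(u_k(u_0')\bigr).
```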
You see, what is in fact not completely obvious is exactly this fact. But it is true, and not hard to prove; although, well, to guess it you should really understand well what is happening here, right. Okay. So now: how do we construct this coupling, so that for this pair of trajectories the convergence is obvious? Well, okay, let us do this. We do this in two steps. First we will construct almost what we need, and then we will improve what we have constructed. So let us denote u = u_k, v = v_k, and I will now explain the transition from k to k+1. Let us denote by d the distance between u and v. The first case which we will consider is that this distance is small: d is smaller than delta, where delta is a small parameter which I will specify below. So, what happens if the starting points of the two trajectories are very close? The following happens. It means that the pair (u, v) belongs to the delta-vicinity of the diagonal: to the set of pairs (u, v) from X times X such that the distance between u and v is bounded by delta. Right. So now, first, I will construct this coupling (eta_k, eta-prime_k) in the following special way. I will take eta_k equal to eta(omega), where eta(omega) is a random variable, independent of absolutely everything, which has the law l, right? And for eta-prime_k I will take a small correction of this random variable: I will take eta plus a function, which depends on u and v, of course, applied to eta. You see what happens? You see?
I take here essentially the same S as before, but: for the first equation I take a stupid control, just any, out of the blue, while for the second equation I construct the control. Now I will explain how I am doing this. First, let us denote this guy zeta. Then watch. The next pair, u_{k+1} and v_{k+1}. v_{k+1} is S of (v, eta plus zeta), because the second kick is eta plus zeta, right? And u_{k+1} is S of (u, eta). I have to consider their distance; I have to estimate their distance. But the map S is smooth. Let us use the Taylor formula up to order two. This is precisely the KAM logic, the Newton and Kantorovich logic; these are the people we are following. Okay, let us write the Taylor formula up to order two. The difference is: the differential in u of S, evaluated at (u, eta), applied to the increment v minus u; plus the differential in eta of S, evaluated at (u, eta), applied to zeta; because the increment in the first argument, from here to here, is v minus u, and the increment in the second argument, from here to here, is zeta. Plus what is left. But what is left is a quadratic correction: plus the norm of v minus u squared, which is nothing but d squared, right, plus the norm of zeta in the space E, squared. And this estimate is sharp, right? Right. And now look. (Sorry, yes, the first differential is in u; thank you, thank you.) Okay.
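The Taylor expansion just written on the board, collected in one formula:

```latex
% Second-order Taylor expansion of S at (u, \eta):
v_{k+1} - u_{k+1}
 \;=\; S(v,\, \eta + \zeta) - S(u,\, \eta)
 \;=\; D_u S(u,\eta)\,(v-u) \;+\; D_\eta S(u,\eta)\,\zeta
   \;+\; O\!\bigl(\|v-u\|_H^2 + \|\zeta\|_E^2\bigr),
% and since \|v-u\|_H = d, the remainder is O(d^2 + \|\zeta\|_E^2).
```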
Now look: my dream is the following. Let us call this dream double-star. My dream is that the distance between v_{k+1} and u_{k+1} is smaller than half of d. Half of d, right? This is my dream. It looks ridiculous, but no, look. When we look here, we see that this is not at all impossible, because this quadratic guy is very, very small compared to d, right? So if I manage to make this first term zero, then I will be close to the realization of my dream. Okay. (Excuse me. Yes, yes, you are right: k+1, k+1, right, and here also; thank you, thank you.) So then, to achieve my dream, let us consider the following homological equation. "Homological equation" is slang from KAM theory; it is precisely the equation which I have to solve to make this term zero. Who is unknown here? The unknown is zeta. What is the equation on zeta such that this term will be zero? Well, of course, it is this equation: D_eta S, evaluated at (u, eta), applied to zeta, equals minus D_u S, evaluated at (u, eta), applied to v minus u. I will call the right-hand side f. So this is the homological equation. I have to solve it, or at least to solve it approximately. What do we know about it? We know two things. First, by the assumption B1, f belongs to the smoother space V, and the norm of f in this smoother space V is bounded by d, up to a constant, right? This is the smoothness. Second, by the regularity property B3, and for now I assume it in the stronger form,
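The homological equation, as stated:

```latex
% Choose the correction \zeta to kill the first-order term of the expansion:
D_\eta S(u,\eta)\, \zeta \;=\; -\, D_u S(u,\eta)\,(v-u) \;=:\; f,
% where, by assumption B1,
f \in V \quad \text{and} \quad \|f\|_V \;\le\; C\, d .
```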
I will explain later what happens if we only have the weaker form. So, by the regularity property B3, the operator which I have here, D_eta S, has dense image. Now, a fact from the huge field of mathematics which is called ill-posed problems: such an equation can be solved approximately. In our case this is just linear algebra, and linear algebra tells us the following: for any epsilon this equation has an epsilon-solution. What does it mean? For any epsilon bigger than zero there exists N_epsilon, which is a natural number, and there exists an almost-inverse operator R_epsilon, which, of course, depends on (u, eta), and which maps H to E. Almost inverse in the following sense. First, it is bounded: the norm of this operator is bounded by some constant which depends only on epsilon; the norm exactly as an operator from H to E. Secondly, this operator is finite-dimensional: the image of the operator R_epsilon belongs to E_{N_epsilon}, which is a Galerkin subspace. (Careful with the spaces: D_eta S is a mapping from E to H, so the inverse, and the almost inverse, is a mapping from H to E.) In the space E I take any basis, any at all, and then I take the Galerkin subspace of dimension N_epsilon; the range of the operator belongs to this Galerkin subspace. And the third relation is the fact that this is really an epsilon-inverse.
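The three properties of the almost-inverse, written out:

```latex
% Approximate (\varepsilon-)inverse of the dense-image operator
% D_\eta S(u,\eta) : E \to H. For every \varepsilon > 0 there exist
% N_\varepsilon \in \mathbb{N} and R_\varepsilon = R_\varepsilon(u,\eta) : H \to E with
\|R_\varepsilon\|_{H \to E} \;\le\; C_\varepsilon, \qquad
\operatorname{Im} R_\varepsilon \;\subset\; E_{N_\varepsilon}
\ \ \text{(a Galerkin subspace of } E\text{)},
% and the almost-inverse property:
\bigl\| D_\eta S(u,\eta)\, R_\varepsilon f \;-\; f \bigr\|_{H}
\;\le\; \varepsilon\, \|f\|_{V} .
```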
That is to say: the norm, of D_eta S composed with R_epsilon, applied to f, minus f, is bounded by epsilon, but now watch: times the norm of f in the space V. Without this V the statement is wrong; this is the price we pay for the infinite dimension of our problem. With V it is correct. Right? So now look: if we combine these two facts, and add the disparity which comes from the fact that the solution is not exact, we see that I should choose zeta equal to minus R_epsilon applied to D_u S(u, eta) applied to v minus u. I will denote this as an operator Phi, which depends on (u, v) and which is applied to eta. Right? If I do this, then, an easy, slightly annoying observation: adjusting epsilon, I see that the right-hand side is smaller than half of d. So v_{k+1} minus u_{k+1} is smaller than half of d. So we have achieved our goal. We have achieved our dream, but we have broken something. We have broken the following: of course, the law of the random variable eta is l, but the law of the random variable eta plus Phi(eta) is not l. This has to be repaired. So now our problem is that the law of the random variable which we actually used, call this law l-prime, is not l. But what do we see? The correction Phi(eta) is small, of order d, and the map from eta to eta plus Phi(eta) is a nice finite-dimensional Lipschitz perturbation of the identity. Due to this, we have: the distance between the measures l-prime and l in the total variation distance is bounded by C_epsilon d. It strongly depends on the precision which we need; but epsilon is fixed.
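The chosen correction and the price paid for it, collected (the name Phi for the operator is as on the board, up to my reading of the garbled letter):

```latex
% The constructed correction:
\zeta \;=\; -\, R_\varepsilon\, D_u S(u,\eta)\,(v-u) \;=:\; \Phi_{u,v}(\eta),
\qquad \eta' \;=\; \eta + \Phi_{u,v}(\eta).
% Adjusting \varepsilon gives \|v_{k+1} - u_{k+1}\|_H \le d/2 for all \omega,
% but the law of \eta' is perturbed:
\mathcal{D}(\eta') \;=:\; \ell' \;\ne\; \ell, \qquad
\|\ell' - \ell\|_{var} \;\le\; C_\varepsilon\, d .
```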
So now this is simply C times d, okay? Okay, so this is what we have achieved; this is what I explained last time. And now come two new steps. Both are really beautiful. Step number one is called the Dobrushin lemma. The approach of Doeblin was not used in Moscow, and in Russia in general, until the sixties. And in the sixties it was Dobrushin who started to use an equivalent approach. I do not know whether he had read Doeblin or not, but he used a different language and a different mentality. The key observation of his approach is the lemma which, as very often happens, was not first proved by him; but he used it systematically, so many people now call it the Dobrushin lemma. Okay. So, to summarize, we have two properties. Property (i): the norm of S(u, eta) minus S(v, eta-prime) is bounded by half of d, for every omega. Property (ii): the total variation distance between l and l-prime is bounded by C_1 times d, the distance between u and v. And then the Dobrushin lemma: due to the second relation, there exists a coupling. There exist random variables eta-tilde and eta-tilde-prime which form a coupling for the pair of measures (l, l-prime). That is: (a) the law of the random variable eta-tilde equals l, and the law of the random variable eta-tilde-prime equals l-prime, right? And (b): the probability that eta-tilde does not equal eta-tilde-prime is exactly, not bounded by, exactly equals, the total variation distance between l and l-prime, which is small, bounded by C_1 d. So you see philosophically what happens: it is not a big deal that the law of the random variable eta-prime is not just l.
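The Dobrushin lemma, in formulas:

```latex
% Dobrushin's lemma (maximal coupling): since \|\ell - \ell'\|_{var} \le C_1 d,
% there exist random variables \tilde\eta, \tilde\eta' with
\mathcal{D}(\tilde\eta) = \ell, \qquad \mathcal{D}(\tilde\eta') = \ell',
% coinciding outside a set of exactly the total-variation size:
\mathbb{P}\bigl(\tilde\eta \ne \tilde\eta'\bigr)
 \;=\; \|\ell - \ell'\|_{var} \;\le\; C_1\, d .
```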
Since the variation distance is of order d, what remains is this: I have to cleverly change the random variable eta-prime on a set of omega of small measure, and then I will recover a random variable of the right form. Since we are here, a very useful definition. So this pair (eta-tilde, eta-tilde-prime) is a coupling for the measures (l, l-prime). Due to property (b), a pair precisely with this relation is called the maximal coupling. The maximal coupling: the best in the world. Look, if you start to think about it, it is really very, very useful. We represent two measures by two random variables. Then what turned out? If these two measures are close in the total variation distance, then I can find these two variables such that they coincide with very high probability. With very high probability, right? And this is Doeblin's mentality. Okay. Now watch. Now the last big step, the last big idea which works here, and it came from the theory of optimal mass transport. What do we have? We have the random variable eta; its law is, as it should be, l. We have the random variable eta-prime; its law is l-prime, not l. We have the random variable eta-tilde-prime; its law is again l-prime. We have the random variable eta-tilde; its law is l, right? Now look. What is the logic, philosophically, just on the level of intuition? If I can glue this random variable with this one, then eta-tilde will be precisely the second component of the coupling which I need. What does it mean exactly? Exactly, it means the gluing lemma. And a very good reference is the book by Villani; the book is Topics in Optimal Transportation.
Topics in Optimal Transportation. So this is his book, even though he may very well become the next mayor of Paris; I know that he is thinking about a second edition of this book. Okay, so, the gluing lemma. In view of star (who is star? let us look here; ah, this is star, okay), in view of star there exists a triple of random variables zeta_1, zeta_2, zeta_3. Of course, since all the previous guys depend on (u, v), this triple also depends on (u, v). Such that: the law of the pair (zeta_1, zeta_2) equals the law of the pair (eta, eta-prime), and the law of the pair (zeta_2, zeta_3) equals the law of the pair (eta-tilde-prime, eta-tilde). You see what happens: I have indeed glued these two random variables into the single random variable zeta_2. The proof in the book by Villani is extremely short, something like half a page. Half a page, right? But he gives good historical comments, and it is quite a dramatic story how these results were proved; it was not at all obvious. Okay, but now let me show what I want to do. I will take eta_{k+1} equal to zeta_1, and eta-prime_{k+1} equal to zeta_3; not zeta_2, zeta_3. Look at the consequence: of course, the law of zeta_1 equals the law of eta, which equals l; and also the law of zeta_3 equals the law of eta-tilde, which also equals l. So these two random variables are what we need: a coupling for the pair of measures (l, l). So let us take for the kick eta_{k+1} the variable zeta_1, and for the kick eta-prime_{k+1} the variable zeta_3. Then now look.
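The gluing lemma and the resulting choice of kicks, in formulas:

```latex
% Gluing lemma (Villani, "Topics in Optimal Transportation"): since \eta'
% and \tilde\eta' share the law \ell', the two pairs can be glued along it:
% there exist \zeta_1, \zeta_2, \zeta_3 = \zeta_i(u,v) with
\mathcal{D}(\zeta_1, \zeta_2) = \mathcal{D}(\eta, \eta'), \qquad
\mathcal{D}(\zeta_2, \zeta_3) = \mathcal{D}(\tilde\eta', \tilde\eta),
% and the new kicks are taken to be
\eta_{k+1} := \zeta_1, \qquad \eta'_{k+1} := \zeta_3,
% both of law \ell, hence a coupling for the pair (\ell, \ell).
```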
Okay, so I can do it: this is a coupling for my measures, and since all my construction depends only on u and v, this is a coupling which also depends only on u and v, right? So let us check that this coupling possesses the good properties. By (i): just let us look here. Let us rewrite this relation once again: S(u, eta) minus S(v, eta-prime) is bounded by half of d, and this is true for all omega, right? Now let us write below it: S(u, zeta_1) minus S(v, zeta_2). Look, the law of this pair is the same as the law of that pair. Therefore the collection of all pairs (eta, eta-prime) which I have here can simply be reparametrized as the collection of all pairs (zeta_1, zeta_2). So this is the same set of differences. Therefore it is smaller than half of d, right? This is the first fact; this is double-star. And then, similarly, by (b) in the Dobrushin lemma, I have triple-star: the probability that zeta_2 does not equal zeta_3 equals the probability that eta-tilde is different from eta-tilde-prime, and this is bounded by C_1 d. Look, this is perfect. This is what I have achieved. You see that the distance between the trajectories at time k+1 is twice smaller than the old distance, right? And this happens always, apart from a set of small probability. This is another disparity; I have two disparities. The first disparity sits here, in the right-hand side.
And the second disparity sits in the fact that for the collection of omega of small probability, everything goes wrong, right? Now we are almost done. Let us consider the event (and here I closely follow my paper with Julian): the event Q, which is the collection of random parameters such that zeta_2 equals zeta_3. This is a big event: the probability of Q is bigger than 1 minus C_1 d; for small d this is something close to 1. Now, finally, I will define the coupling. It is not quite like before; it is almost like before. Watch. The pair (eta_k, eta-prime_k), right, a random variable depending on omega, equals: if omega belongs to this big set Q, I take for them, of course, (zeta_1, zeta_3), evaluated at (u, v), right? But if omega does not belong to the set Q, I simply take eta-prime_k equal to eta_k equal to zeta_1. So with high probability the distance will decrease twice, and with low probability the two right-hand sides will be the same. But what have I achieved? Well, look, this is the summary of my achievements; they are really big. If d is small, then with high probability, for omega in Q, I have (this is (5)) that the distance between u_{k+1} and v_{k+1} is half of what it used to be. But if omega does not belong to this good set, then watch: the distance between u_{k+1} and v_{k+1} is simply the distance between S(u, eta_k) and S(v, eta_k), because the right-hand side is the same. You see this? But X is compact and S is smooth, so S is Lipschitz; so this is bounded by C times d. Perfect. You see: with high probability I am twice closer; with very low probability I am a constant factor bigger. Everything is under control.
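One step of the coupling in the small-distance regime, summarized (this collects the event Q and the two estimates just stated):

```latex
% On the event
Q \;=\; \{\zeta_2 = \zeta_3\}, \qquad \mathbb{P}(Q) \;\ge\; 1 - C_1 d,
% the distance is halved, and off Q it grows at most by a constant factor:
\|u_{k+1} - v_{k+1}\|_H \;\le\; \tfrac{d}{2} \quad \text{on } Q, \qquad
\|u_{k+1} - v_{k+1}\|_H \;=\; \|S(u,\eta_k) - S(v,\eta_k)\|_H \;\le\; C\, d
\quad \text{off } Q .
```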
Everything is under control, right? So now comes the last part. (At what time should I finish? I forgot. Excuse me... okay, good, plenty of time, so no rush; I will explain everything I have to explain.) So look. We have seen that if we start to construct the new pair (u_k, v_k) and the distance between the initial points is small, then we do what we just did, and we get the result we want. It remains to decide what to do when the distance is not small. Okay, so this is what we discuss now. If the distance is bigger than δ, then I simply take the independent coupling: η_k is just η_k(ω), and η′_k is just η′_k(ω) — two independent random variables, each with law ℓ. This is very natural. Now I will show that in this regime no strategy is needed: we simply wait, and the dynamics will do the work for us. Remember that the equation behaves like a free, dissipative equation: leave the kicks aside, and the solutions decay. This means that for every ε > 0, with positive probability, the norm of the kick η is smaller than ε. So, with positive probability, I can make the force as small as I wish. Finally, take a equal to (1 + γ)/2; it is still smaller than 1, right?
Now assume — simply assume, why not? — that the norm of u is bigger than half of δ (δ is the same δ as before), right? Then by star — who is star now? Ah, star is the property of S(u, ·); okay, so this is star — there exists ε bigger than zero such that the norm of S(u, η) is smaller than a times the norm of u, provided the norm of η is smaller than ε. You see this? With zero force I am shrinking γ times; if I only want to shrink a times, I can allow a small force. So if the force is sufficiently small, then I shrink a times, right? This is double star. But zero belongs to the support of the measure ℓ. Therefore the probability of the event that the norm of η is smaller than ε is some p_ε, which is something positive — something which really happens. Okay, now denote by r_k the maximum of the norms of u_k and v_k, right? And I am interested in how r_{k+1} depends on r_k. Now watch, one more obvious thing: since d is bigger than δ, r_k is bigger than half of δ — because if both u and v were smaller than δ/2, then the distance between u and v would be smaller than δ, yes? And by property B2, r_{k+1} is bounded by γ times r_k plus β, and this is true for every ω. Nice? So, what have I achieved?
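The far-regime facts just collected can be summarized in formulas (reconstructed from the talk; a = (1+γ)/2 and r_k = max(‖u_k‖, ‖v_k‖) as above):

```latex
\|u\| \ge \tfrac{\delta}{2},\ \ \|\eta\| \le \varepsilon
  \;\Longrightarrow\; \|S(u,\eta)\| \le a\,\|u\|,
  \qquad a = \tfrac{1+\gamma}{2} < 1,
\\[4pt]
\mathbf{P}\{\|\eta\| \le \varepsilon\} = p_\varepsilon > 0
  \quad (\text{since } 0 \in \operatorname{supp}\ell),
\\[4pt]
r_{k+1} \le \gamma\, r_k + \beta
  \quad \text{for every } \omega \quad (\text{property B2}).
```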
So, doing nothing, I have achieved the following — this is double sharp. If d is bigger than δ, then r_k is bigger than half of δ, and the probability that r_{k+1} is bounded by a times r_k is bigger than p_δ, which is something positive, yes? And besides, always — always — we have this relation, which I call, for example, triple star: r_{k+1} is bounded by γ r_k + β. Now I have completely described my machine to you. I know how I will proceed à la Doeblin if the two starting points are very close, and what to do if they are not very close. Let me draw a systematic picture of the machine we have here — of what we have achieved. So this is the picture: transitions in the coupled Markov chain, the dynamics of the two points. Look, watch. If at some time k the distance d was smaller than δ, then with probability bigger than 1 − C1 d the new distance is smaller than half of d. Very good, excellent, right? On the complementary event, with probability smaller than C1 d, the new distance is bounded by C times d. You see, I have, so to say, a choice between very good and not too bad: very likely, something very good happens; rather unlikely, something happens which is not too bad. Now, in terms of r. If r is very small — smaller than half of δ — then in fact d is smaller than δ, right? So this case need not be considered.
So r cannot be too small, because then I am in the first case. But if r is bigger than half of δ, then with probability p_δ, which is bigger than 0, the new r is shrunk: it is smaller than a times r. And with probability at most 1 − p_δ — again, the same story — it grows, but not dramatically: the new r is bounded by γ r + β. You see, looking at this, it is very plausible that this machine really gives me the convergence which is specified in the theorem. And this is really true. Let me state it as a theorem. Theorem: due to this table of transitions, the distance between the law of u_k and the law of u′_k is bounded by C e^{−ck}, where the constant C depends on the norms of the initial data u_0 and u′_0 — that is, the laws converge to each other exponentially fast in k. How to prove this? Actually, you see, when you sit and start to think hard, in one way or another you prove it. The proof may be long, or it may be made short; for making it short, one has to use the method of the Kantorovich functional. This can actually be found in my book. In the paper where we proved exponential mixing — my paper with Armen, and then with our colleague Andrei Piatnitski — I thought about how this had been obtained, and in my paper of 2002 I understood that this is the method of Kantorovich functionals: it is just an analysis of my paper with Shirikyan. I believed I had invented something new, but of course there was nothing new: later I saw that Dobrushin, in one of his last papers, had done essentially the same thing.
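The table of transitions can be checked numerically on a toy caricature. The sketch below replaces the PDE dynamics by the scalar rules just listed; all constants (δ, C1, C, γ, β, p_δ and a = (1+γ)/2) are illustrative stand-ins chosen by me, not values from the talk:

```python
import random

def simulate(d0, r0, steps, rng):
    """Iterate the toy 'table of transitions' for the coupled pair.

    d = distance between the two trajectories, r = max of their norms.
    Close regime (d < delta): the coupling halves d with probability
    >= 1 - C1*d, and multiplies it by C otherwise.
    Far regime (d >= delta): independent kicks; with probability p_delta a
    small kick shrinks r by a = (1+gamma)/2, and always r <= gamma*r + beta.
    """
    delta, C1, C = 0.5, 0.5, 2.0           # illustrative constants only
    gamma, beta, p_delta = 0.5, 0.01, 0.5
    a = (1 + gamma) / 2
    d, r = d0, r0
    for _ in range(steps):
        if d < delta:
            if rng.random() < 1 - C1 * d:
                d *= 0.5                   # very likely: twice closer
            else:
                d *= C                     # unlikely: a constant times bigger
        else:
            if rng.random() < p_delta:
                r *= a                     # small kick: both norms shrink
            else:
                r = gamma * r + beta       # a-priori bound, valid for every omega
            d = min(d, 2 * r)              # distance <= |u| + |v| <= 2 r
    return d
```

Running many independent copies, the average final distance collapses geometrically, which is exactly the exponential convergence the theorem asserts.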
The point is that this machine works perfectly well for stochastic PDEs. As for who first invented the Kantorovich functional — I will not insist; I do not remember exactly. You remember, I told you that the ideas carry over. Earlier I explained the case where I dealt with the controlled system; well, that was easy, and now it is easy to understand: the scheme I follow is always the same, but I have to use more sophisticated linear algebra, because now my homological equation becomes more complicated. So now let us return to the homological equation. The homological equation was the following: d_η S(u, η), applied to the unknown vector ζ, equals F, where F is, so to say, the discrepancy which came from the construction. Let us denote this operator, just abbreviated, as A_{u,η}. This is a mapping from the space E to the space H. Before, in the easy setting B3′, we knew that the image of the linear operator A is dense, and then we had this beautiful linear algebra lemma, which allowed us to effectively construct an approximate inverse with any accuracy we wish. But now we only know that A_{u,η} has dense image if η belongs to a good subset K(u), where ℓ(K(u)) = 1, right? Then we have a modified and actually significantly more involved lemma from linear algebra.
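Written out, the homological equation and the operator just introduced read (notation reconstructed from the talk):

```latex
A_{u,\eta}\,\zeta \;:=\; d_\eta S(u,\eta)\,[\zeta] \;=\; F,
\qquad A_{u,\eta} \colon E \to H,
```

where ζ ∈ E is the unknown, F is the discrepancy produced by the coupling construction, and the image of A_{u,η} is dense only for η in the full-measure set K(u).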
So the new linear algebra lemma is the following. Lemma: there exists — of course it is not at all unique; the claim is existence only — an approximate right inverse R^ε_{u,η}, which is an operator from the space H to the Galerkin subspace E_{N_ε} of the space E, such that, firstly, it is of course bounded: the operator norm of R^ε is bounded by some constant C_ε; and secondly, it has finite-dimensional range: the image of the operator R^ε lies in the space E_{N_ε}. No — just wait, excuse me, I am a bit lost in my own notation. Let me restate. For any ε and ε1, both positive, there exists a measurable subset K(u)_{ε1}, which is a subset of K(u), such that its measure is big — ℓ(K(u)_{ε1}) is bigger than 1 − ε1 — and everything holds on this set. And for η from this set K(u)_{ε1}, we have precisely what we had before, almost: there exists an operator R^{ε,ε1}, which depends on u and η, with finite-dimensional range E_{N_ε}, such that, first, the operator norm of R^{ε,ε1} is bounded by a constant C which depends on ε and ε1; it has finite-dimensional range; and we have here the same property, saying that this is indeed an ε-inverse: the norm of A_{u,η} composed with this ε-inverse, applied to F, minus F, is bounded by precisely ε.
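A finite-dimensional caricature of this lemma: take for A a diagonal operator with singular values 1/j — injective with dense image, but with no bounded inverse — and build an approximate right inverse with finite-dimensional range by inverting A only on the first N modes. This is a toy sketch of the mechanism I chose for illustration, not the operator from the talk:

```python
import numpy as np

def apply_A(x):
    """Toy operator A = diag(1/j): injective, dense image, no bounded inverse."""
    return x / np.arange(1, len(x) + 1)

def approx_right_inverse(f, eps):
    """Return (u, N): u is supported on the first N modes and ||A u - f|| <= eps.

    Mirrors the lemma: the map f -> u is bounded on these modes (by N) and has
    finite-dimensional range, at the price of an eps error in the equation.
    """
    f = np.asarray(f, dtype=float)
    n = len(f)
    # tail[j] = norm of (f_j, f_{j+1}, ...): the part of f the truncation gives up
    tail = np.sqrt(np.cumsum((f ** 2)[::-1])[::-1])
    N = n
    for j in range(n):
        if tail[j] <= eps:                 # smallest N whose discarded tail < eps
            N = j
            break
    idx = np.arange(1, n + 1)
    u = np.where(idx <= N, f * idx, 0.0)   # invert A exactly on modes 1..N
    return u, N
```

Applying A to u reproduces f on the first N modes and zero afterwards, so the residual is exactly the discarded tail, which is below the prescribed eps by construction.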
This is somewhat tricky, and proving this lemma cost us quite a lot of effort. But — excuse me — this is this V. Look, I have a mapping S: H × E → H, but in fact it maps into a subset of V, compactly embedded in H. So this norm is in E, and that one is in V. Here F is — ah, yes, yes, thank you, you're right — so we lose smoothness here. Okay. After this, the scheme which I explained to you — it was essentially invented for this situation — the same scheme works. What is really unpleasant is to prove this lemma, and one of the difficulties is, by the way, the same as with the gluing lemma. The additional difficulty, which cost us quite a lot of trouble — maybe because of a lack of the corresponding culture — is that everything depends on u and η, and this dependence is rather tricky. It is by no means continuous. Measurable would be enough for us, but the proof of measurability is really complicated, so we needed some statements which did not exist in the literature. By the way, some facts from the book of Villani naturally should be stated for measurable maps; in the book they were stated for continuous ones. I sent him a letter asking: and what about the measurable case? He said yes, it's tough — by then he was already deep in politics. But then Volodya Bogachev helped us. This is our colleague from Moscow, who is a high-quality professional in measure theory.
He proved for us versions of the corresponding underlying fundamental results from probability theory. Strangely enough, some very natural statements of probability theory had not been proven in the measurable context which we needed. Okay, so essentially I have told you how this works, and this is really the scheme of these two papers. It is much easier to read my paper with Huylin, because it came second and there we use a stronger condition; but the real result is here. And of course, once again, the optimal control part is here: that for almost every η the image is dense — this is optimal control, and quite a non-trivial development of optimal control for partial differential equations, which is due to Vahagn and Armen. They developed this for some previous purposes, and here it applies. You see, in this work we have practically completely isolated the optimal control part: in our theory one simply has to check condition B3′, and condition B3′ has to be checked using the method of optimal control, which is essentially based on what Andrei Agrachev and Andrei Sarychev have done, and which was developed by Armen and Vahagn. Okay, so thank you for your attention, that's all. We have 10 more minutes.