So, I think I will resume after a short break of five to ten minutes. I apologize if what I said was not easy to follow; in any case, you can recover it from the references in the papers. So, I spoke about nonlinear partial differential equations of the form (1). Millions, literally millions of realistic physical partial differential equations can be written in this form: time derivative, plus linear part, plus nonlinearity, equals random force. Here the solution is u(t) and the force is η = η^ω(t), and there is an initial condition. So I reminded you that the equation can be written in this form: a nonlinear partial differential equation for a function u(t, x). The linear operator L has a system of eigenfunctions φ_j: L φ_j = λ_j φ_j, where λ_j ≥ 1 and λ_j → ∞. This is always the case. Of course, this is precisely the setting which Professor Agrachev discussed: the φ_j are fixed and known, and what stands in front of them is what we control — for him it is a control, for me it is a random force. So what is my notation? The force is a sum of terms b_j η_j(t) φ_j, and each η_j(t) I can regard as a control; then this is what Professor Agrachev spoke about. Because, you see, you should clearly understand that φ_j can be regarded as a constant vector field on the space H. So this is precisely what Professor Agrachev called affine control. Then I told you that the first summation goes from 1 to m.
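For reference, here is the equation and the structure of the force as just described, written out (the label (1) follows the lecture's numbering):

```latex
% Equation (1): time derivative + linear part + nonlinearity = random force
\partial_t u + L u + F(u) = \eta^\omega(t),
\qquad u(0) = u_0,

% eigenbasis of the linear operator:
L\varphi_j = \lambda_j \varphi_j,
\qquad \lambda_j \ge 1,
\qquad \lambda_j \to \infty,

% the force, decomposed over the eigenfunctions (affine in the controls \eta_j):
\eta^\omega(t) = \sum_{j=1}^{m} b_j\, \eta_j^\omega(t)\, \varphi_j .
```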
m may be either finite or infinite. Then I introduced H_m, the linear envelope of the basis elements involved here, the φ_j for j from 1 to m. So the noise sits in the space H_m, but I study the equation in the whole space H; this is how they interact — this is the equation. I make one assumption which is innocent: no physicist will ever object, at least at this level. In twenty or thirty years, when all this is heavily developed, people may relax it, but for the moment it is absolutely fine. The assumption is that all the η_j are independent and identically distributed. So the different random processes which stand here for different j are independent and identically distributed. It means the following: I take one fixed process, η_1^ω(t) — a random process — and I make independent copies of it; these are the processes η_2, η_3, η_4, and so on. Therefore, to describe the properties of all these processes, I only need to describe the properties of the process η_1, because all the other processes are the same. An important quantity for me is B, the summation of the b_j squared, which is a finite quantity. What exactly this means we will see a bit later. Then, apart from this rather innocent restriction, I impose another restriction, which is much less innocent — but not physically absurd either, so physicists do not really object, though it would be better to remove it. It is the following. I define the segments J_l, the segment from l − 1 to l.
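In symbols, the objects just introduced:

```latex
% span of the excited modes, and its interaction with the full phase space:
H_m = \operatorname{span}\{\varphi_1,\dots,\varphi_m\} \subset H,
\qquad \eta^\omega(t) \in H_m,\quad u(t) \in H,

% the key summability quantity:
B = \sum_{j=1}^{m} b_j^2 < \infty,

% unit time segments:
J_l = [\,l-1,\ l\,], \qquad l = 1, 2, \dots
```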
I consider the restrictions of the process η to these segments J_l, where l = 1, 2, and so on. After a time shift I can regard these restrictions as processes on the segment [0, 1], that is, as processes defined for t from 0 to 1. It would be very good to replace this assumption with something more physical, but for now I assume that the restrictions of the process to these time intervals — exactly the intervals which appeared in Professor Agrachev's lecture — are independent and identically distributed. Now let me recall some notions from measure theory. A Polish space M is a complete separable metric space, equipped with its Borel sigma-algebra. Whether the space M is finite-dimensional or infinite-dimensional, it is a Polish space, and from the point of view of measure theory all such spaces are the same: all spaces of measures on them are the same as the space of measures on an interval. This is a deep, highly nontrivial result. Actually, I will not use it, but it is just good to know. Recall also the stochastic setting. Somewhere here in the corner we have a probability space on which my random variables depend. A probability space is a triple (Ω, F, P): Ω is the space, F is the sigma-algebra, and P is the probability. A random variable is any measurable map from the space Ω to the corresponding Polish space M, equipped with its Borel sigma-algebra. Measurable means that the preimage of every measurable set belongs to the sigma-algebra F. Then I told you that the notion of the law of a random variable ξ is extremely important.
The law of a random variable ξ is a measure on the space M, defined by the following relation: it is the measure which, evaluated at any set Q, equals the probability that the random variable ξ belongs to Q. This gives a correspondence sending every random variable to a measure. This correspondence is surjective — a very well-known fact: if you have any probability measure on the space M, you can find a random variable whose law is this measure. And, due to what I said above, when you do this you can always take for Ω the segment [0, 1]. Actually, for all of probability theory the probability space Ω = [0, 1] is sufficient; no other spaces are needed. So now, why are distributions of random variables so important? Because of the following formula. Let us take a bounded measurable function f defined on the Polish space M, and the distribution of some random variable, which is a measure. I have a function and a measure, so I can integrate the function against this measure. How to do this? This is a trivial but extremely important relation. I will give it the name (A) and refer to it from time to time. The relation is obvious, but in some contexts it will not be so easy to see that this is what I am talking about. So here it is: if I take any measurable function f on the space M and integrate it against the measure which is the law of a random variable ξ, I simply have to take the composition f(ξ) and take the expected value of this object.
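A minimal numerical sketch of relation (A), under an illustrative concrete choice not from the lecture: ξ uniform on [0, 1] (so its law is Lebesgue measure on [0, 1]) and f(x) = x². The Monte Carlo average of f(ξ) should match the integral of f against the law, here ∫₀¹ x² dx = 1/3.

```python
import random

random.seed(0)

# Relation (A):  ∫_M f dD(ξ) = E[f(ξ)].
# Toy instance: ξ ~ Uniform(0, 1), f(x) = x², so the left-hand side is 1/3.
f = lambda x: x**2

n = 200_000
samples = [random.random() for _ in range(n)]          # draws of ξ(ω)
expectation = sum(f(x) for x in samples) / n           # E[f(ξ)], Monte Carlo
integral_against_law = 1.0 / 3.0                       # ∫₀¹ x² dx, exact

print(abs(expectation - integral_against_law) < 0.01)
```

With 200 000 samples the standard error is below 10⁻³, so the two sides agree to well within the tolerance.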
This is what physicists are concerned with: as I said, as soon as physicists consider any model depending on a random parameter — any equation, any function depending on a random parameter — they immediately have the tendency, instead of considering quantities like energy or momentum, to study their expected values. For us this is the rule. So what they look at is obtained as an integral against this measure. And just a tautological remark: this is nothing but the integral over the space Ω — the integral of what? Of f(ξ(ω)) against P(dω). So this is why the distributions of random variables are so important. Now another construction, very closely related, which I will use constantly. You see, what I have been telling you so far, which I call probability theory, is not really probability — it is just measure theory. You can find it in many courses of functional analysis, in the section on spaces of measures. This is not probability theory yet. A good definition is that probability theory starts when we start to consider independent random variables; until then it is functional analysis. One more construction which will be important for me is the following. Let M and N be two Polish spaces, and let f be a measurable mapping from the Polish space M to the space N; it means that the preimage of every Borel set in N is a Borel set in M. Let us take a measure μ in the domain of definition, that is, a measure on the space M. Then, given the map f and the measure μ, I can construct the image — the push-forward — of the measure μ under the map f. This is a measure on the space N, defined by the following relation: it is the measure which, evaluated at any set Q, equals the measure μ of the preimage.
Of the preimage: (f_*μ)(Q) = μ(f⁻¹(Q)). If we compare this definition with the previous one, we immediately see that if I have a random variable ξ, then the law of ξ is nothing but the push-forward of the measure P under the mapping ξ. So, for a measurable mapping: sets move from right to left — we consider preimages — while measures move from left to right. The preimage of a Borel set is a Borel set, and for any measure on the left space I can construct its image under the map f. It is very dangerous to violate this rule, because even if f is a smooth map from the real line to the real line, and you take a perfectly good Borel set on the left, it is not true in general that its image is a Borel set on the right-hand side. So: measurable sets move from right to left under any map f, and measures move from left to right; if you violate this rule, you must be very, very careful. This is essentially a deep piece of mathematics, which was intensively developed in Russia and in France about a hundred years ago. And now just notation. Suppose I have a measure μ on some Polish space M, and a bounded continuous function f on the space M — C_b(M) is the notation for the space of bounded continuous functions. Then I can integrate the function f against the measure μ, and I will denote this integral ⟨f, μ⟩; sometimes I will write it like this. And this is just the integral ∫ f(u) μ(du). So this is just notation.
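A small sketch of the push-forward with a finitely supported (discrete) measure, where the defining relation (f_*μ)(Q) = μ(f⁻¹(Q)) can be checked exactly; the map f and the weights here are of course just illustrative.

```python
from collections import defaultdict

# A discrete measure mu on M = {0, 1, 2, 3}, given as point -> weight.
mu = {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.4}

def f(x):
    # Illustrative (non-injective) measurable map f: M -> N = {0, 1}.
    return x % 2

def pushforward(mu, f):
    """Image measure f_* mu:  (f_* mu)(q) = mu(f^{-1}(q))."""
    nu = defaultdict(float)
    for point, weight in mu.items():
        nu[f(point)] += weight          # mass at `point` lands on f(point)
    return dict(nu)

nu = pushforward(mu, f)
# Check the defining relation on the singletons:
# f^{-1}({0}) = {0, 2}, so (f_* mu)({0}) = mu({0}) + mu({2}).
assert abs(nu[0] - (mu[0] + mu[2])) < 1e-12
assert abs(nu[1] - (mu[1] + mu[3])) < 1e-12
```

Note the direction: the code never tries to map sets forward; it only accumulates mass, which is exactly the "measures move from left to right" rule.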
Now let us go back to the equation, and let us slowly move toward the study of the long-time behavior of its solutions. Keep in mind the objects which I told you about. Look what I am going to do — exactly as in the lecture of Professor Agrachev, I am going to cut the time axis into segments of length 1, and I will construct my solution step by step: first on this segment, second on the next segment, third on the next one, and so on, and I will try to understand what happens. And let me recall: when I restrict the random process in the right-hand side of my equation to, for example, segment number two, I still make a shift of the variable, so I regard it as a function of t with t from 0 to 1. A word on notation: η_j, where j is a lower index, is component number j of the process η; while η^1, with an upper index — maybe it is a bit misleading, but hopefully not — is the process η restricted to the segment J_1. And now watch: for every ω, η^1(ω) is a function of t with 0 ≤ t ≤ 1. Therefore it is very natural to define the space E. This will be permanent notation; I will use it always. E is the L² space on the segment [0, 1] with values in the space H_m, which is either finite- or infinite-dimensional. And then watch: for every ω, η^1(ω) is an element of this space — it is a curve on [0, 1], and it will be bounded, so it certainly lies in L².
It belongs to the space E for every ω. Therefore η^1 is a random variable which sends Ω to the space E. To say that it is a random process with time varying from 0 to 1 is the same as to say that it is a mapping from Ω to E. Okay, so here is another extremely important notation. Since η^1 is a random variable, I can consider its law, and I will denote it ℓ. This is the object which I will work with always. So ℓ is a measure on the space E, and — let me keep it here — E is the L² space on [0, 1] with values in the space H_m. These are the objects. Now, a good and interesting question. Now I will start to put restrictions on this measure, on the distribution. I have already imposed some restrictions on my process; they were the following. (P1): the different components are independent and identically distributed. (P2): if I take the process η restricted to J_1, and the same process restricted to J_2, these restrictions are also independent and identically distributed. (P3) is some non-degeneracy assumption, which will appear later. And now one more restriction. Let me put it down — actually, I think I mentioned it already.
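Collected in one place, the permanent notation of this step:

```latex
% the space of one-segment forces:
E = L^2\bigl([0,1];\,H_m\bigr),

% the segment restrictions of the noise, regarded (after a time shift)
% as random variables with values in E:
\eta^l = \eta\big|_{J_l} \in E \quad \text{for every } \omega,
\qquad l = 1, 2, \dots,

% and the law of one segment, the central object:
\ell = \mathcal{D}(\eta^1), \qquad \text{a measure on } E .
```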
This will be restriction (P0). As the title of my lecture suggests, I will consider equations where the noise is a bounded random process. That is, if I take the noise η^ω(t), then the norm of the force which stands in the right-hand side of my equation is bounded by some uniform constant C, and this happens for all ω and for all t. Now watch. I denote by D(η^1) the distribution of the process η restricted to the segment [0, 1]. This is a measure on the L² space over [0, 1]. Due to my assumption number zero, this measure is supported by some set of bounded functions. I need more — I need condition (P4). If I denote by K the support of this measure — the support being the minimal closed set such that ℓ of this closed set equals 1 — then my restriction is that K is a compact subset of the space E. And one more notation: η^1 is the process η restricted to the segment J_1, and naturally η^r is the process η restricted to the segment J_r. Not so many notations; I think we can get used to them. Now a very important point. Look: I naturally restricted the process η to the segment [0, 1], because on the segment [1, 2] it will be the same, and on [2, 3] also the same. I restricted it to this segment, and I consider it as a random variable in the space L².
But I told you a number of times that in fact this is a bounded random variable: for any ω it is an L² curve such that for every ω and every t the norm is bounded by some constant. So in fact I am talking about a subspace: the bounded curves live in the space L^∞. And yet I insist on considering the law of this force in the space L², although it sits in the much smaller space L^∞. Question: why do I not want to define ℓ as the law of the random force in the space L^∞? Any guesses? Why do I avoid the space L^∞, despite knowing that my force is bounded? Because it is not Polish: the space L^∞ is not separable. If I try to use the properties of spaces of measures on L^∞, I will find myself in very, very serious trouble. Therefore, what we are forced to do is consider the law of this process as a measure on the space L², while always remembering that in fact it is supported by a set of bounded functions. This is the best solution. In probability theory it is very dangerous to leave the comfortable realm of Polish spaces; as soon as we start to talk about spaces which are not Polish, we are in danger. [Question from the audience.] No, no — this is just an abstract equation; for me, if you keep in mind complex Ginzburg-Landau, this is H_m. The force sits in the space H_m, which is a subspace of the space H — yes, if m is infinite it equals H. [Another question.] Right, right — this is not the end of the story; you are absolutely right. I impose restrictions layer after layer; this will appear very soon, you will see. Now, these were the definitions, and now we start to construct our objects. Look.
As I said, I will examine solutions of my equation first on the segment [0, 1], next on [1, 2], next on [2, 3], and so on. The basic tool for this is very natural: the shift mapping, the map S. The mapping S is the following map. It takes our phase space H, where the initial data sits, and the space E = L², where the right-hand side restricted to [0, 1] sits, and it sends this pair back into the space H — precisely, by the way, as in the lecture of Professor Agrachev. So it takes the initial condition u_0 and my force restricted to the segment [0, 1]. Oh, maybe to stress it, let me say it like this: the right-hand side restricted to [0, 1] is deterministic — ω is fixed, I just consider one equation with one choice of the right-hand side. This pair is sent to the value of my solution at t = 1. So this is the map S, where of course u(t), for 0 ≤ t ≤ 1, solves equation (1). Okay: given initial data u_0, the value of my solution at time 1 equals the map S applied to u_0 and the restriction of my force to the first segment. Then u(2) equals the map S applied to u(1) and the right-hand side which is the restriction of my process to the second segment, etc. More generally, we recursively have: when we have found our solution at time r, then to find the solution at time r + 1 we apply the map S to the value of the solution at time r and to the right-hand side restricted to the segment from r to r + 1, where r is a non-negative integer. Therefore, to study the values of my solution at integer moments of time, I should simply iterate the map S. So to study the long-time behavior of solutions of equation (1) is absolutely the same as to study multiple iterations of the map S.
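A toy numerical sketch of this iteration, with hypothetical stand-ins for the abstract objects: H is replaced by the scalar line, the equation by the dissipative ODE u′ + u + u³ = η(t), the map S by an explicit Euler solve over one unit segment, and the i.i.d. segment forces by bounded random step functions.

```python
import random

random.seed(1)
DT = 1e-3  # Euler step for the time-1 solve

def S(u0, eta):
    """Time-1 map of the toy equation u' + u + u**3 = eta(t) on [0, 1]."""
    u, t = u0, 0.0
    while t < 1.0:
        u += DT * (-u - u**3 + eta(t))
        t += DT
    return u

def sample_eta():
    """An i.i.d. bounded 'segment force': a random step function on [0, 1]."""
    levels = [random.uniform(-1.0, 1.0) for _ in range(10)]
    return lambda t: levels[min(int(t * 10), 9)]

# u(r+1) = S(u(r), eta restricted to [r, r+1]): iterate over 50 segments.
u = 5.0                       # initial data
trajectory = [u]
for _ in range(50):
    u = S(u, sample_eta())
    trajectory.append(u)

# Dissipation beats the bounded force: the trajectory enters and stays
# in a bounded set near the origin.
assert all(abs(v) <= 5.0 for v in trajectory)
assert max(abs(v) for v in trajectory[10:]) < 1.5
```

The point of the sketch is only the recursion structure: long-time behavior is read off from repeated application of the one-segment map with fresh i.i.d. forces.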
Of course, when I do this, I compute solutions of my equation only at integer moments of time; then I have to understand what happens in between. But this is easy — you can find the discussion in the papers which I quoted; it is not a big deal, so I will not dwell on it. Okay, so this is the map S. What are our restrictions on it? One restriction is just general, and I will not belabor it because it is absolutely innocent: Assumption 0, the map S is C²-smooth. Why is this assumption innocent? Because if this is a nonlinear partial differential equation, and you know that the equation is well-posed — so that for any initial data u_0 from H and any right-hand side η from the space E there is exactly one solution — then this map is always smooth. This is just a fact of life; these properties really go together. When you start to learn nonlinear partial differential equations, you will see that this always holds; there is a standard technique for proving it, again and again, and it can be found in the two or three papers which I referred to. Now I have to impose serious restrictions on the map S, and to state them I have to define and discuss one more object. Look: since the map S is C²-smooth, I can consider the differential of S, which is, at any point, a linear map from H × E to H. But I can also consider the partial differential in η of the map S, evaluated at some point (u′, η′), which is a map from E to H — where of course u′ belongs to H and η′ belongs to E. This mapping will be important for me. As I learned many years ago from Professor Agrachev, this mapping is also very important in the theory of control and optimal control. Let us talk about this map and examine how it acts.
Let us take any ψ, an element of the space E — just an L² curve. And let us ask ourselves how to construct D_η S(u′, η′) applied to this ψ. We know that this is an element of the space H, but how to construct it? Let us denote it v. How to find v? For this end, let us repeat the construction of the differential of the mapping S in the second variable η. To do it, consider η_ε, which equals η′ — this η′ — plus ε times ψ; ψ is this guy. This is a curve in the space E. By the definition of the partial differential, this v, the value of the differential, is defined like this: it is the derivative in ε, at ε = 0, of S evaluated at (u′, η_ε). This is simply the definition of the differential in the second variable. How to construct this object, how to put a hand on it, how to understand what kind of animal it is? To consider S(u′, η_ε), I have to consider equation (*), which is the following: ∂_t u_ε + L u_ε + F(u_ε)(t) = η_ε(t), with the same initial data u_ε(0) = u′. Then if u_ε(t) is the solution of this equation, then v is simply the derivative in ε, at ε = 0, of u_ε(1). This is v. Well, let us consider the whole curve, not only its value at t = 1: let us consider the curve ũ(t) = (d/dε)|_{ε=0} u_ε(t), and let us find the equation for this curve. This is easy.
Let us simply differentiate the relation (*) in ε and evaluate the derivative at ε = 0. Look. When I differentiate the first term in ε, I get ∂_t ũ, because the derivative in ε of u_ε at ε = 0 equals ũ. Then plus L ũ. The nonlinear term is a bit more complicated: I have to differentiate F(u_ε(t)) in ε at ε = 0, but by basic analysis this is the differential of F evaluated at u(t) — because when ε = 0, u_ε equals u — applied to ũ. And all this equals η_ε differentiated in ε, which is just ψ. The initial condition does not depend on ε, so ũ(0) = 0. So we see what this map is. Let me write it right here: D_η S, evaluated at some pair (u′, η′), sends an element ψ of the space E — an L² curve — to ũ(1), where ũ is the function which solves equation (**). Okay. So now we are very close to a full panorama of what happens here. This is my equation; this is the time-one map along trajectories of this equation — of course, the time-one map in this setting depends on the right-hand side restricted to the corresponding segment. I introduced out of the blue the linearization of this time-one map in the second variable, but we will see very soon why it is needed: it is very important. Please note that this is a rather complicated map. It is a linear map, but this linear map sends one Hilbert space to another, and these two Hilbert spaces are very different: E is a Hilbert space roughly formed by functions of t — if m is finite, for example if m = 1, it is simply an L² space of functions of t — while H is a space of functions of x.
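The computation just described, written out: differentiating (*) in ε at ε = 0 gives the linearized equation (**).

```latex
% Perturbed equation (*):
\partial_t u_\varepsilon + L u_\varepsilon + F(u_\varepsilon)
   = \eta' + \varepsilon\,\psi,
\qquad u_\varepsilon(0) = u'.

% Set \tilde u(t) = \tfrac{d}{d\varepsilon}\big|_{\varepsilon=0}\,u_\varepsilon(t).
% Differentiating (*) in \varepsilon at \varepsilon = 0 yields (**):
\partial_t \tilde u + L \tilde u + DF\bigl(u(t)\bigr)\,\tilde u = \psi,
\qquad \tilde u(0) = 0,

% and therefore
D_\eta S(u',\eta')\,\psi \;=\; \tilde u(1).
```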
So this is a clever mapping — a mapping which transforms functions of t into functions of the space variable. It cannot be easy, and it is not easy. Now let us impose restrictions on my equation, that is, on the map S. These restrictions come in two groups, a stronger one and a weaker one, and the more interesting result concerns the weaker group of restrictions. I will discuss the proof for the case of the stronger group, but then I will show that it is rather easy to explain what has to be added to the argument to handle the more complicated problem under the weaker group of restrictions. So, the first condition is regularity; there are two versions, (B1) and (B1′). The name of this condition is regularity, and it exists in both a strong and a weak form. There is a Banach or Hilbert space — it could be Banach, but let it be Hilbert — a Hilbert space V, which enters my assumptions and is compactly embedded in H. What am I saying? You remember that in the example of complex Ginzburg-Landau my H was a Sobolev space H^m; when we consider this theorem for complex Ginzburg-Landau, V is H^{m+1}. This is typical. The weak form of the condition is that S is C²-smooth as a map into V. I already said that S is C²-smooth, but here I require more: smoothness into the smaller space V, which is compactly embedded. If we are talking about nonlinear parabolic equations, this holds. And the strong form is that S : H × E → V is analytic — not just C²-smooth, analytic. Of course, this analyticity assumption is stronger.
In fact, for all practical purposes there is no difference: for a realistic model in realistic function spaces, obtained by a realistic derivation, if the map is C²-smooth then it is analytic; practically there is no gap. Condition (B2) is the following; it does not split into a strong form and a weak form. Condition (B2) is dissipativity. Look: I denoted by K the support of the measure ℓ; it is a compact set in E, and the first part of the restriction is that zero belongs to this compact set. What does this mean? It means that I can switch off the random force: with positive probability, the force on a given segment is as small as I wish. This is a very important condition — technically it is very important. Apart from this, I assume that there exist γ smaller than one and a constant β such that the norm of S(u, η) is bounded by γ times the norm of u, plus β: ‖S(u, η)‖ ≤ γ‖u‖ + β. This is a contraction property, which comes from dissipation of one kind or another. It looks very restrictive, but it always holds for well-posed nonlinear parabolic equations. It holds for the two-dimensional Navier-Stokes equations; and if you could prove that the three-dimensional Navier-Stokes equations are well-posed, I guarantee it would hold there too. Note, for example, that if β were zero, then all my solutions would just exponentially quickly converge to zero. So this is (B2). The whole point is the condition (B3). (B1) is innocent — just innocent, believe me; it is always true if you are talking about nonlinear parabolic equations. Someone asked me what happens if the equation is not a nonlinear parabolic equation.
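A one-line consequence of (B2) worth recording: iterating the bound along integer times gives a uniformly bounded absorbing ball, and for β = 0 exponential convergence to zero.

```latex
% Iterating  \|S(u,\eta)\| \le \gamma\|u\| + \beta  with  0 < \gamma < 1:
\|u_r\| \;\le\; \gamma^{r}\,\|u_0\|
        + \beta\,(1 + \gamma + \dots + \gamma^{r-1})
\;\le\; \gamma^{r}\,\|u_0\| + \frac{\beta}{1-\gamma},

% so every trajectory enters any ball of radius > \beta/(1-\gamma)
% exponentially fast; if \beta = 0, then \|u_r\| \to 0 exponentially.
```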
For such equations this condition may be violated; but it is always true if you are talking about nonlinear parabolic equations. (B3) is the whole point. It is called non-degeneracy, and here there is non-degeneracy in a strong form and non-degeneracy in a weak form. The strong form: for any u, and for any η from the support K of the measure ℓ, the linearized map D_η S(u, η) has dense image. This is exactly optimal control: this is precisely the linearized controllability which control theory studies. You see, here there is a subtle point, because of the space V. It would appear that the image of the smoothing map S lands in the smaller space V; but here we need density in the bigger, weaker space H. Why? Because if I replace H here by V, the statement is usually wrong. Density in H is what we can really prove for realistic equations. With this strong condition I can prove mixing relatively easily. The problem with this condition is that it usually holds if m equals infinity, but not if m is finite. Still, this condition is nice — yes, absolutely: this is linearized controllability, which is the tool to study approximate controllability; the image D_η S(u, η)(E) is dense. The second, weak condition is more sophisticated, but it holds very often: for a huge class of forces like this with finite m. For example, for two-dimensional Navier-Stokes we can take m = 3; for complex Ginzburg-Landau we can take m = 5, and the condition which I will write down here is fulfilled. So look: for any initial data u from H there exists a subset K(u) of good forces, of full measure, such that for any force η from this K(u), the image of the linearized mapping D_η S(u, η) — its total image — is dense in H.
This condition looks not that natural, but this is what we have in real life. And this is precisely what, quite a number of years ago, maybe ten, Professor Agrachev explained to us how to do. First it took me some effort just to convince him that this is a good question; then it took him some effort to find how to prove it, but he did it. Okay, let me make it more compact, because I need space: for any u, for any eta in K(u), the range of D_eta S(u, eta) is dense. Now comes the last assumption. This is regularity, regularity of the components of the force. It is the same as before: it does not exist in weak and strong forms, so I write it without separation. The name for this condition is decomposability. Decomposability, terrible word. Decomposability of eta_1, and non-degeneracy. Okay, so look. Let us consider the process eta_1, this one, and let us consider the restriction of this process to the segment J_1, because, remember, the restriction to the segment J_2 is the same process, but an independent copy. Let us give this guy the name eta_1^1, which is natural, right? The low index is the Fourier component, the upper index is which segment in time. So eta_1^1 is a random process, just a random process which takes values in the real line. For every omega it belongs to the space L^2(0, 1). Then the assumption is: there exists a basis in the space L^2(0, 1), call it e_1, e_2, etc., such that the process eta_1^1 is simply a random series with respect to this basis. So look what this means.
It means that eta_1^1(omega, t) is a summation from 1 to infinity, the total decomposition in the basis e_j, and here comes b_j tilde, which measures how big the coefficient number j of the decomposition is, and here comes xi_j(omega). Here I assume that all b_j tilde are non-zero, so we have real non-degeneracy, and I assume that the xi_j's, xi_1, xi_2, etc., are again independent, identically distributed random variables from Omega to the real line, such that the law of every random variable xi_j has a density rho with respect to the Lebesgue measure, and this density is supported by the segment [-1, 1]. It means that these random variables xi_j are bigger than minus 1 and smaller than 1; they are bounded by 1 in modulus. Rho is a Lipschitz function, and rho of 0 is positive. So these are our restrictions. Now, with these restrictions, of course, you see, we have to check that this set of restrictions is consistent. Which restrictions on equations? As I said, quite a lot of nonlinear partial differential equations satisfy restrictions B1, B2, B3. What about restriction B4? Restriction B4 is always fulfilled if you take some non-trivial basis e_j in the space of functions, namely if you take the Haar basis. Excuse me? No, no, it's absolutely different: e_j is a function of t, of time. So this is a bit tricky.
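As a concrete illustration of such a random series, here is a minimal sampler. Everything specific in it is an assumption for the sketch: a cosine basis of L^2(0, 1) (the lecture allows any basis, for instance the Haar basis), hypothetical coefficients b_j tilde = j^{-2} (any non-zero summable choice works), and xi_j with the triangular density rho(r) = max(0, 1 - |r|), which is Lipschitz, positive at 0 and supported in [-1, 1], so it satisfies the stated restrictions.

```python
import math
import random

def xi(rng):
    # Triangular law on [-1, 1]: density rho(r) = 1 - |r| is Lipschitz,
    # supported in [-1, 1], with rho(0) = 1 > 0 -- one admissible choice.
    return rng.uniform(-0.5, 0.5) + rng.uniform(-0.5, 0.5)

def e(j, t):
    # A concrete orthonormal basis of L^2(0, 1), assumed for this sketch:
    # {1, sqrt(2) cos(pi k t), k >= 1}.
    if j == 1:
        return 1.0
    return math.sqrt(2.0) * math.cos(math.pi * (j - 1) * t)

def sample_eta11(rng, n_terms=50):
    # One draw of eta_1^1(t) = sum_j btilde_j * xi_j * e_j(t),
    # with hypothetical non-zero coefficients btilde_j = j^{-2}.
    coeffs = [(j ** -2) * xi(rng) for j in range(1, n_terms + 1)]
    return lambda t: sum(c * e(j, t) for j, c in enumerate(coeffs, start=1))

rng = random.Random(42)
eta = sample_eta11(rng)
values = [eta(t / 100.0) for t in range(101)]
print(min(values), max(values))
```

Since |xi_j| <= 1 and |e_j(t)| <= sqrt(2), every sample path is bounded by sqrt(2) * sum j^{-2} < 2.33, which matches the boundedness of the force used throughout the lecture.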
This is a bit tricky, so watch the following. Let us consider the space of functions u(t, x); this is what we really keep in mind. Let us consider L^2 on (0, 1) times T^d, the space of my solutions when time is in (0, 1) and x is in the torus. What is a basis in this space? A basis in this space is formed precisely by all products e_j(t) phi_l(x). These products, for 1 <= j, l < infinity, give me a basis in the space of functions of (t, x). I need both: I need two bases to construct a basis in the space of functions of two variables. Right? So these are the conditions, and they will always be imposed. Now, in words: if these conditions hold, then equation (1) is mixing. But first I want to develop the language of Markov chains, just to define this properly. Maybe let me state the theorem right now, just to show you that this is something really important. I will state it in a not completely rigorous way, and then I will develop the rigorous language and state the theorem properly. So, theorem. In fact, yes, it is an absolutely rigorous statement; it is correct. If either the strong conditions B1-B4 hold, or the weak conditions B1'-B4' hold, then there exists one and only one measure, a unique measure mu star in the space of measures on H, such that for any initial data u_0 in the space H the following holds. Let us consider the solution u(t) with this initial data.
I take my solution u(t). It is a random variable, right? Since it is a random variable, it has a law, and the law of a random variable is a measure in the space H. I have this very special measure mu star in the space H, and the law of u(t) converges to this measure weakly, weakly in the space of measures. Precisely it means the following; it is the same as to say: if I take any f which is a bounded continuous functional on H, then the expected value of f(u^omega(t)) converges to the integral of the function f against the measure mu star. And I stress that this holds for every initial data u_0. Another extremely important point here is that the convergence is exponentially quick. It is exponentially fast, right? This is what physicists know very well. Just one example. In the physical community they believe, essentially everybody agrees, that the right language to describe water turbulence is the following. We take the 3D Navier-Stokes equation, and they believe that it is well posed, right? We force this equation by a random force; they would be happy with this random force. And then they believe that this system is mixing, that this property holds. And if you open the pile of books on the theory of turbulence, a pile like this one, maybe ten times like this, all these books are about this measure. All these books are only about what they call the limiting statistical equilibrium. But this is really great, right? You see, our system forgets its initial data: if you talk about statistical properties of solutions, then from this point of view our system forgets the initial data. So this is mixing, right?
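The statement that the system "forgets its initial data" can already be seen in a finite-state toy model, a crude stand-in for the infinite-dimensional chain with a hypothetical transition matrix: the laws started from two different initial data merge, and the total variation distance between them decays exponentially fast, just as in the theorem.

```python
# Finite-state illustration of mixing (toy stand-in, not the PDE):
# a chain with strictly positive transition kernel has a unique stationary
# law, and the law of u_k converges to it exponentially for EVERY initial law.

P = [[0.5, 0.3, 0.2],
     [0.2, 0.6, 0.2],
     [0.3, 0.3, 0.4]]  # hypothetical transition probabilities

def step(law):
    # push the current law one step forward: (law P)_j = sum_i law_i P[i][j]
    return [sum(law[i] * P[i][j] for i in range(3)) for j in range(3)]

def tv(p, q):
    # total variation distance between two laws
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

law_a = [1.0, 0.0, 0.0]   # deterministic initial data: state 0
law_b = [0.0, 0.0, 1.0]   # a different initial data: state 2
dists = []
for _ in range(40):
    law_a, law_b = step(law_a), step(law_b)
    dists.append(tv(law_a, law_b))

# the two laws merge: the system "forgets" its initial data
print(dists[-1])
```

Here the Dobrushin contraction coefficient of P is at most 0.3, so the distance is bounded by 0.3^k: exponential forgetting of the initial condition.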
And you see, this is crucial. I am really sure that these conditions B1-B4 are very close to being necessary as well. So you cannot seriously violate one of these conditions: I think that if you violate them more or less seriously, then the result may become wrong. And I repeat that the condition B3 is essentially a condition in terms of optimal control; this is why the school is joining these two lecture courses. There are three ingredients in the proof of this theorem. Optimal control: how this condition plays. The technique of Markov processes, which I will explain right now. And the ingredient number three, the method of quadratic convergence, which is also well known and often referred to as KAM theory. And this I will explain to you clearly. Because KAM theory, if you take it, for example, in the version of Kolmogorov, is extremely hard to explain in a lecture; but here everything happens in such a form that it is very possible. So my lectures will be about this. Now I have to explain what Markov processes are. I will develop the language of Markov processes, and I will explain the Doeblin approach to proving mixing. I do not know if you really have the sources, but in his lectures Andrei Agrachev explained how to prove mixing in finite dimension invoking tools of optimal control. My proof is different. I will tell you about a different proof of this result, which originates in the approach of Doeblin. If you do not know this name, I suggest you look in Wikipedia, because he was a person of remarkable and tragic life and an absolutely brilliant mathematician. His advisor was a really remarkable French expert in probability, Paul Lévy; I think he was the only student of Paul Lévy.
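The coupling idea behind the Doeblin approach, in its crudest scalar form: run two copies of the chain driven by the same realization of the noise. Under a contraction as in condition B2 (the map below is a hypothetical stand-in, not the PDE flow map), the gap between the two trajectories shrinks deterministically, so their laws converge to each other.

```python
import random

GAMMA = 0.5  # contraction factor, gamma < 1, as in condition B2

def S(u, eta):
    # toy dissipative time-1 map (hypothetical stand-in for the PDE flow)
    return GAMMA * u + eta

rng = random.Random(7)
u, v = 50.0, -50.0   # two very different initial data
gaps = []
for k in range(1, 21):
    eta = rng.uniform(-1.0, 1.0)
    u, v = S(u, eta), S(v, eta)   # SAME noise for both copies: a coupling
    gaps.append(abs(u - v))
    # the gap contracts deterministically: |u_k - v_k| = GAMMA**k * 100

print(gaps[-1])
```

The actual proof is much subtler, since the infinite-dimensional map is only contracting in a statistical sense; but this is the germ of the argument.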
Maybe Lévy had two or three students, but no more. Just look in Wikipedia, you will see why. Okay, so now I will start; it is not that big a deal, though maybe what is left of this lecture is not enough. I am starting to tell you about Markov processes, and first just Markov chains. Strangely enough... no, no, that is another story. This, as I told you, is what these tons of books on probability are about. This object exists, no doubt; they do not even discuss it. But then they start to study how energy is distributed, and they write one volume after another, 10, 50 volumes. So this gives you an object, and then you start to study the object, and then you start to write books; since no proofs are there, you can write 50, 100, 200 books. Okay, so now look: where is this map S? I have erased it, which was not a good idea. Anyhow, we have this system, which is made by iterations of the map S. The system is obtained by the following rule: to calculate u_k, I apply the map S to u_{k-1} and to eta_k, the force restricted to the segment number k; this is for k bigger or equal than 1. And u_0 is given. Now, who is this u_0 which is given? It is very important. Very important: this given initial data u_0 in H is either a constant, or a random variable in the space H independent from the forces, independent from eta_1, eta_2, etc. And this is how it must be; this is the law. If you try to consider initial data which are not independent from the forces, you arrive at absolutely different problems; this is jungle. But this setting is good, and we can slowly move forward in it. Right. Now look.
Now, the first trivial observation, which is very important for what follows. Let us consider the law of the random variable u_k. This is the law of S applied to u_{k-1}, which is a random object, and to the random force eta_k. Ah, but no, let me start with something different. Look at this relation. From this relation we see that u_k is a function. A function of what? It is a function of the initial data and of the forces eta_1 up to eta_k. Some complicated function; this is why it is a big job to study this. But it is a function of u_0, eta_1, ..., eta_k, and thus it is independent from eta_{k+1}, eta_{k+2}, etc. Right? You see, this is where Markovianity will come from very soon. Now look: it means that the law of u_k is the push-forward under the mapping S of the law of the pair (u_{k-1}(omega), eta_k(omega)). Right? From what is written here, the random variable u_{k-1} is independent from the random variable eta_k. And this is one of the things which everybody should remember: since the random variable u_{k-1} is independent from the random variable eta_k, the law of the pair is the direct product of the two measures. And to the measure which is the law of eta_k I gave a name: this is the measure ell. So this is what I said; in the theory of probability this is common wisdom.
Functional analysis ends, and the theory of probability starts, when we start to talk about independent random variables. You see, so this is the vector, the pair (u_{k-1}, eta_k), which belongs to the space H times E. Its law is a measure in the direct product of the spaces, and because of independence it is the product of the measure here times the measure here; this is exactly the construction of the direct product of measures. This is general and rather easy. Now let us go back: the law of u_k is the push-forward under the mapping S of the law of my solution at time k-1, times the measure ell. Look, this is an explicit formula. But it is not easy to profit from it, because the map is extremely complicated. Okay, so we see that we can iteratively find the distributions of the random variables u_1, u_2, u_3, etc., using this formula. Therefore the sequence of laws, the sequence of measures in the space H, the law of the solution at time 1, the law of the solution at time 2, and so on, depends only on the initial data u_0, on the measure ell, and on the mapping S. What is very important here? Something very important: if the system is defined like this, and if we assume that eta_1, eta_2, eta_3 are independent, identically distributed random variables in the space E, then the distributions of our solutions at integer values of time...
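The recursion just derived, law(u_k) = S_*(law(u_{k-1}) (x) ell), can be checked exactly in a finite miniature. Everything below (the three-point state space, the map S, and the noise law ell) is hypothetical: pushing the law forward step by step agrees with brute-force enumeration of all noise sequences.

```python
from itertools import product

H = [0, 1, 2]            # toy "phase space"
E = [0, 1]               # toy noise values
ell = {0: 0.3, 1: 0.7}   # law of each eta_k

def S(u, eta):
    # an arbitrary deterministic map standing in for the time-1 flow
    return (2 * u + eta) % 3

def push(law):
    # one step of the recursion: push law (x) ell forward under S
    out = {u: 0.0 for u in H}
    for u, pu in law.items():
        for eta, pe in ell.items():
            out[S(u, eta)] += pu * pe
    return out

u0 = 1
law = {u: (1.0 if u == u0 else 0.0) for u in H}
K = 5
for _ in range(K):
    law = push(law)

# brute force: enumerate all noise sequences (eta_1, ..., eta_K)
brute = {u: 0.0 for u in H}
for etas in product(E, repeat=K):
    u, w = u0, 1.0
    for eta in etas:
        w *= ell[eta]
        u = S(u, eta)
    brute[u] += w

print(all(abs(law[u] - brute[u]) < 1e-12 for u in H))
```

This also makes visible the point of the lecture: the recursion uses only the law ell, never the particular realizations of the forces.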
They do not depend on the specific choice of these random forces, but only on their law. This is very convenient, because at some stages of our construction it will be convenient for me to say: let us now consider slightly different forces eta_k, still independent, identically distributed, with law ell. So, you see, this formula which we have here: what is written here? The law of the solution at time k depends only on the law of the solution at time k-1 and on the law of the force which we used to stir our system, right? For every physicist it means that what we have here, u_k(omega) for k = 0, 1, 2, and so on, is a Markov process with discrete time. The name for a Markov process with discrete time is a Markov chain. So, physically, this precisely means that u_k is a Markov chain. Unfortunately, if you take any textbook on Markov chains or Markov processes, you will find a much more complicated definition, because ours is a very special, easy case of Markov chains and Markov processes. So next time I will introduce what is needed from the theory of Markov chains and Markov processes just to follow what comes after. This is not because the books on Markov systems are bad; they are good, but they are written to treat the general case, and the general case is really much more complicated. Ours is a very easy, very nice and very clean case of Markov processes. We will discuss them next time, but now, just to finish our lecture, I will do something simple. I will do the following.
So, you see, as I explained, we are free to replace this sequence of random forces by any sequence of random forces we like. And there is one choice of random forces which I like best. Let me call it the canonical model. Now I will show you how to define this sequence of independent identically distributed forces which is the most convenient to work with. Easy. Look, we have our probability space (Omega, F, P). Let us take independent copies of this probability space: consider probability spaces Omega_1, Omega_2, Omega_3, which are simply independent copies, Omega_1 with sigma-algebra F_1 and probability P_1, Omega_2 with F_2 and P_2, and so on, but they are all the same. And let us assume that our force eta_k, which I have here, depends on the variable omega_k from the probability space Omega_k. Then clearly they are independent, simply because they are defined on different probability spaces. And of course let us choose each of them as a copy of the same random variable: eta_k, defined on Omega_k with values in E, is an independent copy of eta_1. This sounds trivial, but this model helps a lot to make everything cleaner, more elementary and easier. Just one remark: you may say that formally this canonical model does not fit what I told you, because I told you that we have a probability space Omega and a random variable defined on Omega.
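The canonical model has a familiar computational counterpart: give each segment its own independent source of randomness, so that eta_k is a function of the k-th coordinate omega_k alone. A sketch with per-segment RNG streams (the seeds and the shape of the force are arbitrary choices for this illustration):

```python
import random

def eta_k(k, n_points=4):
    # The force on segment J_k, a function of "omega_k" only: its own RNG
    # stream, so different k's are independent by construction, and
    # re-creating it from the same seed reproduces it exactly.
    rng_k = random.Random(1000 + k)   # omega_k, the k-th component
    return [rng_k.uniform(-1.0, 1.0) for _ in range(n_points)]

first = eta_k(1)
again = eta_k(1)
second = eta_k(2)
print(first == again, first == second)
```

Each eta_k is the "same random variable" (the same construction applied to its own coordinate), realized independently on its own component of the big product space.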
But now instead of this I suggest countably many probability spaces, and my random variables are defined in this way. The resolution is very easy. Let us consider the real probability space Omega, which is the direct product of all of these, Omega_1, Omega_2, and so on, with the sigma-algebra and the measure which are the corresponding direct products. Then simply the force number k depends only on the component omega_k of this big probability space. Right? This will be very convenient for writing explicitly the constructions which I want to write. But I think I will finish here, because the next topic is called preliminaries on Markov chains, and after these preliminaries I will state the theorem in the most suitable way and explain how to prove it. So this is the end of the story for today.