I'm just going to give you a very quick welcome today, because unfortunately I have a coinciding commitment and will have to leave you to your workshop; but tomorrow morning I will give a little presentation about the activities of ICTP. I'm sure that several of you will have been at ICTP before. For those of you who have, welcome back; for those of you for whom it's the first time, welcome to ICTP. It's a very great pleasure to have you. The Abdus Salam International Centre for Theoretical Physics is a unique institute, with a mission to help promote excellence in science in developing countries. Tomorrow morning I will give you a little presentation of the wide range of activities that we have. This school and workshop you are participating in is just one of several activities, and many of you, I think, may be interested in or eligible to participate in some of the others; I will give you that information tomorrow morning. For the moment, let me again extend my welcome to you. I hope you have a very fruitful and productive workshop, that you interact a great deal with the lecturers and speakers who will be here, and that you make the most of your week. Thank you very much.

[Do you need to project anything?] No, no — just me, just the blackboard. I'm a good speaker, yes.

So, I will talk about something rather new, even though the subject is very classical. Let us start with one of the two examples which I keep in mind. This example is called the complex Ginzburg-Landau equation. It is a classical nonlinear partial differential equation which people often consider, because it is both important and not too complicated. The unknown is a function u(t, x); this is a complex function, and t is time.
I keep in mind periodic boundary conditions, which I encode by saying that the space variable belongs to the d-dimensional torus T^d. And this is the equation (CGL) — I try to save space:

du/dt + (u − Δu) − iΔu + i|u|^{2r} u = η(t, x),

where the right-hand side η(t, x) is the force. The whole point is that I want to understand what happens to solutions of this equation when time is very big. Here u is the unknown, as before; we take an initial condition u(0, x) = u0(x), a given function, and we try to understand what happens to the solution of this equation when time goes to infinity. This question can be understood in different ways. Now we take advantage of the fact that we have a random parameter in our equation: the force depends on a random parameter ω, so the solution also depends on ω. We have both advantages and disadvantages from the fact that two parts of mathematics meet here: partial differential equations and stochasticity. The advantage of this stochastic, probabilistic setting is the following: we can understand the question of what happens to the solution as time goes to infinity in the sense in which physicists usually understand it. How do they do it?
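For the record, the initial-value problem just described can be collected in one display (my transcription of the spoken formulas, with r a natural number):

```latex
\partial_t u + (u - \Delta u) - i\,\Delta u + i\,|u|^{2r}u \;=\; \eta(t,x),
\qquad x \in \mathbb{T}^d, \qquad u(0,x) = u_0(x).
\tag{CGL}
```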
Look: let us regard my unknown solution as a curve u(t, ·) in some functional space H. Then, by the way, keeping this in mind, my equation — the complex Ginzburg-Landau equation — becomes something like an ordinary differential equation in the function space H. So I can rewrite the nonlinear partial differential equation as an ordinary differential equation in a certain space of functions:

du/dt + L u(t) + f(u(t)) = η(t),

with a linear part L and a nonlinear part f. For me the linear part is only the friction, so I keep in mind that L u = u − Δu and f(u) = −iΔu + i|u|^{2r} u, where r is a natural number. And then — what do physicists usually do? At each moment of time t, u(t, ·) is a function of x. Let us consider some functional of importance on the space of functions: something like the energy of our field u(t, x), something like its momentum, something like its spin. It means that I take a certain functional F which sends the space of functions to the reals; I consider what physicists call an observable. For example, I measure the energy of my solution at time t, or I measure its momentum. But now I remember that everything depends on the random parameter ω. Since I have a random parameter involved, I can take the mathematical expectation E F(u(t)). And then, what interests physicists the most?
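In display form, the abstract equation and its ingredients, as just described, are:

```latex
\frac{du}{dt} + L\,u + f(u) = \eta(t), \qquad
L\,u = u - \Delta u, \qquad
f(u) = -\,i\,\Delta u + i\,|u|^{2r}u .
\tag{1}
```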
What happens to this quantity when time goes to infinity? This is what they want to understand. And something very nice happens here, because very often the answer is given by the following beautiful object. Look: let P(H) be the space of measures on the space H. Then very often the answer is given in the following form: in the space of measures there exists a unique measure μ*, which is called a stationary measure for my equation, such that when time goes to infinity the expected value of the observable is simply given by an integral over the function space H:

E F(u(t)) → ∫_H F(v) μ*(dv) as t → ∞.

This is of extreme importance for physicists and for physics. Very often, when you start to read a physics book or physics papers where the corresponding physics is governed by some partial differential equation like this one — because this is a very, very physical equation — you will see that the only thing they care about is this stationary measure, because they care only about the limit.
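As a toy illustration of this convergence — in the finite-dimensional setting the lecture itself suggests keeping in mind, with a one-dimensional Ornstein-Uhlenbeck equation standing in for (1) (an assumption for illustration only, not the Ginzburg-Landau equation) — one can check numerically that E F(u(t)) approaches the integral of F against the stationary measure:

```python
import numpy as np

# Finite-dimensional analogue (illustrative assumption, not the CGL equation):
# the 1-D Ornstein-Uhlenbeck equation
#     du = -u dt + dW(t),
# whose unique stationary measure mu* is N(0, 1/2).  For the observable
# F(v) = v^2 we check that E F(u(t)) approaches the integral of F against mu*.
rng = np.random.default_rng(0)
n_paths, dt, n_steps = 20000, 0.01, 800    # integrate up to t = 8
u = np.full(n_paths, 3.0)                  # deterministic initial condition u0 = 3
for _ in range(n_steps):                   # Euler-Maruyama scheme
    u += -u * dt + np.sqrt(dt) * rng.standard_normal(n_paths)

empirical = np.mean(u**2)                  # Monte Carlo estimate of E F(u(t))
exact = 0.5                                # integral of v^2 against N(0, 1/2)
print(abs(empirical - exact))              # small: the law of u(t) has mixed
```

Here the stationary measure N(0, 1/2) is known explicitly, so the limit can be verified directly; for the PDEs of these lectures, proving that such a μ* exists and attracts is exactly the point.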
So the goal of my lectures is to show that a synthesis of rather basic ideas for partial differential equations, rather basic material from probability, and rather delicate recent advances in the theory of optimal control allows one to prove that this is really what happens for a very big class of equations which can be written in the form (1). And I will be able, more or less, to explain to you how to prove that this limit exists. I will never assume deep knowledge of partial differential equations; essentially, what I will assume is that you know basic spaces of functions — for example, Sobolev spaces — and basic properties of, for example, the heat equation, which is a linear parabolic equation. Likewise, I never assume deep knowledge of probability and stochastic processes. What I really assume is that you really understand the concept of independence of random variables. Of course I will repeat everything, but if you do not know what independence means, it will be really hard. So: random variables — and it is very important for me that every random variable has a distribution, which is a measure. Another warning, so to say: very soon I will rewrite this equation in the abstract form (1), and then it becomes an abstract evolution equation in some Hilbert space H. If you are not comfortable with partial differential equations, you can imagine that H is a finite-dimensional space, and then what I'm talking about is an ordinary differential equation with a linear part.
The equation in this space is perturbed by some random right-hand side. It is not that important for what I'm going to talk to you about that this is a partial differential equation: the whole mentality, the whole logic, applies also to the case when this is just an ordinary differential equation with a random right-hand side. Still the same problem exists: for this ordinary differential equation, what happens to the solution when time goes to infinity? You can ask the same question. So you see, everything I told you about remains meaningful if my functional space is just a finite-dimensional space and we are talking about an ordinary differential equation. From this point of view, the difference between a partial differential equation like this one and an ordinary differential equation of this form is not that big.

Now a bit more specifically — now just references. What I will talk to you about is rather recent because, as I told you, it is a rather fortunate mixing of some new results from optimal control and from other fields. And by the way, the relevant results from optimal control are what Professor Agrachev will talk about. So now the references for what I'm going to talk about. Because the duration of my lectures is of course not sufficient to give you details, my idea is to explain what happens in this problem; you can then find all the missing details in the publications for which I give you references now. The first is a paper with my friends and colleagues Vahagn Nersesyan and Armen Shirikyan.
[1] Kuksin, Nersesyan, Shirikyan, arXiv:1802.03250. It is still only on the arXiv; we hope that it will appear before the end of this year. If you type our names together with "arXiv", you will see it. The only warning is that later this month we will put a revised version on the arXiv — we still keep trying to improve the presentation. The proof here is not at all easy. The second paper is my joint paper with Huilin Zhang, who is here and who can also be addressed with questions if you have them: [2] Kuksin, Zhang, arXiv:1812.11706. Here we applied some ideas from [1] in a somewhat easier setting — I will explain everything. The presentation of this paper is easier, so I think it is really readable. The third reference is my review paper, with the ideas only: [3] arXiv:1901.11225. This one is published, but the arXiv version is easier to find. And all the details concerning partial differential equations, and the relevant facts from stochastics, can be found in my book with Armen Shirikyan, [KS]: Mathematics of Two-Dimensional Turbulence, published by Cambridge University Press in 2012. All missing details — all lemmas and theorems which are used without proof — can be found there. And there is a shorter version of this book, a book by me alone, whose title begins Randomly Forced Nonlinear PDEs.
The full title is long, but this much is enough; it was published in 2006 by the European Mathematical Society. So all the ideas can be found in [3], but the proper realization of these ideas, with all missing details, is in [1] and [2]. The advantage of my lectures, I repeat, is that I will follow these texts rather closely, but I will explain all the missing details. So this is, so to say, the panorama, with references. Now comes the real introduction.

Let us take the complex Ginzburg-Landau equation. This is one of the most famous equations of modern physics: if you ask physicists for a short list of the five most important nonlinear partial differential equations, this one will be there. The unknown here is a complex function, so this is a nonlinear partial differential equation for a complex function. It is constructed in this way. Look: if I erase the friction term u − Δu (and the force), what is left is the nonlinear Schrödinger equation. Many people here will know this, because it is the famous Hamiltonian partial differential equation. What I added to this equation is friction — the linear part of the heat equation, with extra damping — because this stabilizes it. So this is a mélange, a mixing: the nonlinear Schrödinger equation stabilized by additional friction. And I added a right-hand side which is a random force, because in real physical systems we very often have a random force. Now, notation. The structure of my blackboards will be the following.
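Schematically, the construction just described reads (my display of the spoken decomposition):

```latex
\underbrace{\partial_t u - i\,\Delta u + i\,|u|^{2r}u = 0}_{\text{nonlinear Schr\"odinger (Hamiltonian)}}
\;\leadsto\;
\partial_t u + \underbrace{(u - \Delta u)}_{\text{friction}}
- i\,\Delta u + i\,|u|^{2r}u
= \underbrace{\eta(t,x)}_{\text{random force}} .
```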
I will keep on the left side the equations which should stay on the blackboard forever; all the rest of the text will be to the right of this line. Okay. So now some basic definitions. If we are talking about partial differential equations, we have to develop at least some terminology. First, I need the L² scalar product. Everybody knows what the L² scalar product means, but here there is a tiny specific: since u(x) is a complex function, this should be taken into account. So the scalar product of two complex functions u, v is

⟨u, v⟩ = Re ∫_{T^d} ū v dx.

So, for example, the L² norm squared is ‖u‖² = ⟨u, u⟩ = ∫ |u|² dx — a good, consistent definition. What I will also need is the Sobolev space H^m. I have to assume that you know what it means; of course, formally I am giving the definition now, and I will care only about natural m. The Sobolev space H^m is the space of functions u(x) such that the Sobolev norm of order m is finite. And what is the Sobolev norm of order m? It is defined by the relation

‖u‖_m² = ‖u‖² + Σ_{|α| = m} ‖∂_x^α u‖²,

where ‖·‖ is the L² norm (and ‖u‖² is, of course, the scalar product of u with itself, simply the integral of |u|²). This is the classical Sobolev space — no tricks.
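On the torus this norm has an exact expression through Fourier coefficients; for d = 1, ‖u‖_m² = 2π Σ_k (1 + k^{2m}) |û_k|². A small sketch, assuming a 1-D torus and a numpy FFT discretization (the function name and grid are my choices, not the lecture's):

```python
import numpy as np

def sobolev_norm_sq(u_vals, m, n):
    # H^m norm squared on the 1-D torus [0, 2*pi), via Fourier coefficients:
    # ||u||_m^2 = ||u||_{L2}^2 + ||d^m u/dx^m||_{L2}^2
    #           = 2*pi * sum_k (1 + k^(2m)) |u_hat_k|^2
    u_hat = np.fft.fft(u_vals) / n          # Fourier coefficients u_hat_k
    k = np.fft.fftfreq(n, d=1.0 / n)        # integer wavenumbers 0,1,...,-1
    return 2 * np.pi * np.sum((1 + k**(2 * m)) * np.abs(u_hat)**2)

n = 64
x = 2 * np.pi * np.arange(n) / n
u = np.exp(3j * x)                          # single Fourier mode k = 3
print(sobolev_norm_sq(u, 2, n))             # 2*pi*(1 + 3**4) = 2*pi*82
```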
I do not need the fancy developments in partial differential equations which were made in the last ten years. Everything here is very basic. Now, look: why did I take u − Δu here and not just −Δu? Because of my boundary conditions. If I consider the operator −Δ, it of course maps the space H^m to H^{m−2}; but because of the boundary conditions it has a kernel. Therefore the better operator is the identity minus the Laplacian, 1 − Δ. This operator has no kernel, and it defines an isomorphism between the spaces H^m and H^{m−2}. This is well known: if you decompose the function u(x) into a Fourier series, you will immediately recover this definition in terms of Fourier coefficients, and you will see that this fact is simply obvious — almost a tautology.

Now, how do we start to examine this equation? What we usually do in the world of partial differential equations is the following: I take the equation, and I pair its left-hand side and right-hand side with the function u in the L² scalar product, and I look at the result. When I do this, I have to use three or four elementary relations. First,

⟨u − Δu, u⟩ = ‖u‖²_{H¹},

which is easy to check and absolutely well known. Second,

⟨iΔu, u⟩ = 0.

Well, let us, for example, check this one. Why is it true? Because look at what is written here: this is Re( i ∫ Δu · ū dx ).
Because the scalar product with u means that I multiply by ū and integrate. But then I integrate by parts inside, and this is −Re( i ∫ ∇u · ∇ū dx ). The integral ∫ ∇u · ∇ū dx = ∫ |∇u|² dx is a real quantity, so i times it is purely imaginary, and its real part is zero. This is why the relation is true. Very similarly, if I take the scalar product of the nonlinearity with u, the result is again zero,

⟨i|u|^{2r} u, u⟩ = 0,

for just the same reason. And finally, if I take the scalar product of ∂_t u with u, this is nothing but

⟨∂_t u, u⟩ = ½ d/dt ‖u(t)‖²_{L²}

— an easy, well-known calculation. So I have explained to you everything we have; I have analyzed all the terms we should have here. (Now I erase the references; I will always refer to them for missing details and for more comments.) Therefore, when I pair the equation with u and take into account these relations, I arrive at the following identity. Look:

½ d/dt ‖u(t)‖²_{L²} + ‖u(t)‖²_{H¹} = ⟨η(t), u(t)⟩.

This is called the balance of energy, because if I integrate this relation from 0 to t, look what I get:

½ ‖u(t)‖²_{L²} + ∫₀ᵗ ‖u(s)‖²_{H¹} ds = ½ ‖u₀‖²_{L²} + ∫₀ᵗ ⟨η(s), u(s)⟩ ds.
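These pairing identities can be sanity-checked numerically. A sketch for d = 1 and r = 1, computing derivatives by Fourier multipliers on the torus (the discretization and the test field are my illustrative choices, not part of the lecture):

```python
import numpy as np

# Check the three pairing identities on the 1-D torus [0, 2*pi).
n = 128
x = 2 * np.pi * np.arange(n) / n
dx = 2 * np.pi / n
k = np.fft.fftfreq(n, d=1.0 / n)              # integer wavenumbers
u = np.exp(2j * x) + 0.5 * np.exp(-5j * x)    # smooth complex test field

def deriv(v, p):                              # p-th derivative via Fourier
    return np.fft.ifft((1j * k)**p * np.fft.fft(v))

def ip(a, b):                                 # <a,b> = Re int conj(a) b dx
    return float(np.real(np.sum(np.conj(a) * b)) * dx)

lap_u = deriv(u, 2)
lhs = ip(u - lap_u, u)                                  # <u - Lap u, u>
h1_sq = ip(u, u) + ip(deriv(u, 1), deriv(u, 1))         # ||u||_{H^1}^2
print(abs(lhs - h1_sq))                       # ~ 0: first identity
print(abs(ip(1j * lap_u, u)))                 # ~ 0: <i Lap u, u> = 0
print(abs(ip(1j * np.abs(u)**2 * u, u)))      # ~ 0: <i|u|^2 u, u> = 0 (r = 1)
```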
And very often, in many physical settings, ½‖u(t)‖²_{L²} is the energy of the field u(t, x). It is a strictly positive quantity, obviously. Therefore we have a rather strong a priori bound for the solution of this equation. This is a good hint that the equation we are talking about is a good equation — and it is a good equation indeed. But to make this equation good, I should start it with sufficiently smooth initial data: L², and even H¹, is insufficient. How do we understand which smoothness of the initial data is sufficient to work comfortably with a partial differential equation like this? This is given by the following well-known fact, again concerning Sobolev spaces. Take any two non-zero integers m₁ and m₂, and consider the mapping

F: H^m → H^m, u(x) ↦ u^{m₁} ū^{m₂}

(this m has nothing to do with m₁ and m₂). Then there is a fact which partially explains why Sobolev spaces are so important for partial differential equations and for the rest of analysis. The fact is the following: if m > d/2 — m bigger than half the dimension — this mapping is continuous.
References can be found in many places, for example in our books mentioned above. And even more: this mapping is also analytic, and in particular smooth. So I have to use the notion of a smooth mapping from one Hilbert space to another; I understand smoothness in the sense of Fréchet. I do not want to give the definition and discuss it — I hope that you know it. If you do not know it, I can say only the following: very soon I will rewrite the equation in the abstract form (1), and you can always think that all the Hilbert spaces involved in my discussion are finite-dimensional; then this is the usual smoothness and the usual analyticity. These topics — smoothness and analyticity of maps between Banach and Hilbert spaces — are classical and can be found in many books, but I think maybe the best reference, if you can read French or Russian, is the book of Henri Cartan called Calcul différentiel (1967, in French; a Russian translation exists, and I believe an English one as well). In any case, I repeat that this is a fundamental notion, so if you know it from any other book, that is fine.

Now just one essential remark and comment. First, a notation which we will always use: for a Banach space V, I will denote by B_V(R) the closed ball of radius R in this space, centered at the origin — that is, the collection of all points v in the space V such that the norm of v is bounded by R. I will use this notation always, but it is a rather common notation.
Then one additional agreement, which will help me a lot — it will simply be easier to talk under this agreement. Look: let us consider a map F from one Banach space V₁ to another Banach space V₂, and let us first assume that this map is continuous. If V₁ is finite-dimensional, then the restriction of F to any ball B_{V₁}(R) is bounded; in infinite dimensions this is not true. So my additional agreement is that all continuous mappings from one Banach space to another which I will talk about are bounded on bounded balls. One can easily check that the mappings which arrive in my constructions are indeed like this, and all the transformations which I make with mappings also preserve this property. The second part of the agreement: if F: V₁ → V₂ is C^k-smooth in the sense of Fréchet, then I can consider the differential d^l F at any point u — you know that this is an l-linear mapping V₁ × ⋯ × V₁ (l times) → V₂. My agreement is the following: for any derivative like this, if u belongs to the ball B_{V₁}(R) and l is any number from 0 to k, then the norm of the differential d^l F(u) is bounded by a constant which depends only on R. Again, if the space is finite-dimensional this is obviously true — this is classical; if the space is infinite-dimensional, in general it is not true. But all the maps I will talk about possess this additional property, so it is easier for me to postulate it from the very beginning
rather than to check it every time. Now look what happens. As I explained, take the dimension d — where am I sitting? it was somewhere here — so x belongs to the d-dimensional torus: d-dimensional periodic boundary conditions. What are my letters? Okay: if m > d/2, then this nonlinearity defines a continuous and analytic mapping from the space H^m to itself. It means that it is good to study this equation in the space H^m, and this is what I'm going to do. Indeed, the equation is well posed in the space H^m if the degree of the nonlinearity is not too big compared to d, and this is given by the following theorem (you can find it, for example, in my paper with Huilin, and the proof in many different places).

Theorem. Let m > d/2. If the dimension d equals 1 or 2, then r may be any natural number; if d = 3, then I have to assume that r equals 1 — so in the three-dimensional case the nonlinearity must be the cubic one, i|u|²u, which is maybe one of the most famous nonlinearities in mathematical physics. Under these restrictions, the equation is well posed. This means the following. Let m > d/2, let the initial data be taken from the space H^m, and let η — in this theorem I assume that η is just a deterministic function — belong to L²(0, T; H^{m−1}). What does that mean? It
means that if I write this function as a curve η(t) in the space H^{m−1}, then this is a curve with finite L² norm:

∫₀ᵀ ‖η(t)‖²_{m−1} dt < ∞.

So: if the initial condition belongs to the space H^m, and the right-hand side is an L² curve in the space H^{m−1}, then there exists a unique solution u with the following properties. First, u is an L² function valued in the space one unit smoother, H^{m+1}; second, u is a continuous curve in the space H^m; and third, du/dt is an L² curve valued in the space H^{m−1}. So at least under some restrictions on the degree of the nonlinearity in terms of the dimension of the space variable, this equation is well posed.

Now, with this theorem at hand, I will consider equation (1) as an equation in the space H^m, which I will denote for short as H; and because of this notation, from now on I will denote the norm ‖·‖_m simply by ‖·‖. Then, as I said, with all these preliminaries I rewrite the complex Ginzburg-Landau equation in the form (1), where Lu is the friction which I added to my equation, f(u) is the rest, and what I have in the right-hand side is a force which in addition depends on the random parameter ω. A lot of other important nonlinear partial differential equations — I cannot say all of them, but quite a lot — can be written in
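In one line, the well-posedness statement just made reads (under the dimension restrictions above: d ≤ 2 with r arbitrary, or d = 3 with r = 1):

```latex
u_0 \in H^m,\quad \eta \in L^2(0,T;H^{m-1}),\quad m > \tfrac{d}{2}
\;\Longrightarrow\;
\exists!\; u \in L^2(0,T;H^{m+1}) \cap C([0,T];H^m),
\quad \dot u \in L^2(0,T;H^{m-1}).
```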
the form (1). So I will talk about equations in the form (1). What I will assume about equation (1) is weaker than what I know from this theorem, so the complex Ginzburg-Landau equation satisfies all the restrictions which are imposed on equation (1). So which restrictions do we have to impose on equation (1)? They are rather innocent — nothing will be really technical in what I am telling you; maybe the only difficulty is that the objects come from really different regions of mathematics.

Assumption 1. The linear operator L: H → H is a self-adjoint linear operator — unbounded, of course. Since L is self-adjoint, I can consider in the space H a Hilbert basis formed by eigenfunctions of this operator: consider the eigenbasis formed by functions φⱼ such that L φⱼ = λⱼ φⱼ. I assume the basis is normalized, ‖φⱼ‖ = 1 in the space H^m, that each λⱼ ≥ 1, and that λⱼ → ∞. This is my condition on the operator L. Of course this condition is definitely fulfilled for the operator 1 − Δ, because then the eigenbasis is just the complex exponentials. We can write them explicitly, but I prefer to have them written like this, parameterized by one single parameter j rather than by a vector index.

Okay, so this is what I'm going to talk about: abstract
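For the concrete operator L = 1 − Δ on the torus, the eigenfunctions are the complex exponentials e^{ik·x} with eigenvalues 1 + |k|², and reindexing the vector wavenumbers k by a single increasing parameter j makes the conditions λⱼ ≥ 1 and λⱼ → ∞ visible. A small sketch (the cutoff K and the dimension are my illustrative choices):

```python
from itertools import product

# Eigenvalues of L = 1 - Laplacian on the d-dimensional torus: the complex
# exponentials e^{i k.x} satisfy L e^{i k.x} = (1 + |k|^2) e^{i k.x}.
# Reindex the vector wavenumbers k by a single parameter j, in increasing
# order, as in the lecture.
d, K = 2, 4                                    # dimension and wavenumber cutoff
lambdas = sorted(1 + sum(ki**2 for ki in k)
                 for k in product(range(-K, K + 1), repeat=d))
print(lambdas[0])                              # smallest eigenvalue: 1 (k = 0)
print(all(lam >= 1 for lam in lambdas))        # every lambda_j >= 1
print(lambdas[-1])                             # eigenvalues grow without bound
```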
So what I am going to talk about now are abstract equations of the form (1), such that the operator L is a self-adjoint operator with the property above, and such that the following additional restrictions are fulfilled. The second restriction: for any initial condition from the space H, and for any η which is an L^2 curve valued in the space H, there exists a unique solution, and this solution is a continuous curve in the space H. You see, the theorem allows me to take a right-hand side which is an L^2 curve valued even in a space of lower smoothness; I take the right-hand side smoother. So I discard part of the information about my solution which I know, and I only remember this part; it is enough for my purposes. Now, my force depends on the random parameter. A force which depends on a random parameter is simply called a random process: we say η = η^ω(t) is a curve in the space H which in addition depends on a random parameter ω. What does it mean that ω is a random parameter? It means that I have the usual probability space (Ω, F, P), and my force depends on this ω. Of course I have to assume something about the force. To state the restrictions which I have to assume, let us decompose the force in the basis. I will call this formula (f); maybe I will refer to this letter somewhere later. Ah yes, a word about my notation: a star means a temporary name, so if some relation or formula is called (*), I may use it only during a few lines. The name (1) is permanent; I will use equation (1) throughout. The name (f) is also a temporary name, but a long-lived one. So look, I decompose η^ω(t), which is a curve in the space H, in the basis: η^ω(t) = sum over j of b_j η_j^ω(t) φ_j. I decompose a random curve in the basis, so of course the coefficient number j of the decomposition is something random, and I write this random coefficient as the product of a scalar number b_j with a random process η_j. Why so? Because b_j tells me how big the j-th coefficient is: all the random processes will be of order one, but the b_j should converge to zero rather quickly. So, first, the b_j are all non-zero. And second — this is assumption (b) — the b_j are real numbers and the sum of the b_j^2 is finite; I will denote this sum B. So I have decomposed the force in the basis, and these are the restrictions which I impose. Now, who are these guys η_j? Here I have random processes, and for my purposes they will be innocent: I never assume complicated properties or structures of the random processes which I use. A random process is simply a curve which depends on a random parameter. So I have random processes η_1^ω, η_2^ω, η_3^ω; there are infinitely many of them, and I assume that they are independent, identically distributed random processes. What does it mean, once again?
It means that I have one random process η^ω(t), with values in the real numbers, and I take infinitely many independent copies of it: these are the processes η_1^ω, η_2^ω, η_3^ω, et cetera. So this is my random force; this is what I am going to talk about. Of course it is not enough; I will impose more restrictions. But, very briefly, the summary is the following. I am talking about an evolution equation in a certain Hilbert space H. This evolution equation has a linear part Lu and a nonlinear part F(u) — this nonlinear part may in its turn contain linear components, why not? — and on the right-hand side a force which depends on the random parameter. I assume that the operator has some nice spectral properties: it has an eigenbasis in the space H, formed by eigenfunctions of norm one, and the eigenvalues go to infinity. My equation is well posed in the following sense: for any initial data from the space H and any right-hand side which is an L^2 curve in the space H, I have one solution of my equation, which is a continuous curve in the space H. Then I decompose the random force in the basis, and I assume that the coefficients of this decomposition have the following form: some number b_j, which is small when j is big, times a scalar random process, and these random processes are independent copies of some single process η_1. The last, but very important, thing: the summation goes from one to M, where M is either finite or infinite. And then we have two rather different questions. When M is infinite, I am talking about a non-degenerate force: I have noise in every Fourier coefficient. If M is finite, my right-hand side is very degenerate.
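The spectral assumption in this summary can be made concrete with a small numerical sketch. The operator and sizes here are my own illustration, not fixed by the lecture: for L = 1 - Laplacian on the d-dimensional torus, the eigenfunctions are the complex exponentials e^{ik.x}, k in Z^d, and the eigenvalues 1 + |k|^2, re-indexed by a single index j as in the lecture, satisfy λ_j >= 1 and λ_j -> infinity.

```python
import itertools

def eigenvalues(d, kmax):
    """Eigenvalues 1 + |k|^2 of L = 1 - Laplacian on the d-torus, for |k_i| <= kmax,
    re-indexed by one index j (sorted increasingly), as in the lecture."""
    lams = [1 + sum(ki * ki for ki in k)
            for k in itertools.product(range(-kmax, kmax + 1), repeat=d)]
    return sorted(lams)

lam = eigenvalues(d=2, kmax=8)
assert lam[0] == 1                   # lambda_1 = 1: the constant mode k = 0
assert all(l >= 1 for l in lam)      # lambda_j >= 1 for all j
assert lam[-1] == 1 + 2 * 8 * 8      # eigenvalues grow without bound as kmax grows
print(lam[:6])                       # -> [1, 2, 2, 2, 2, 3]
```

The sorting is exactly the re-indexing by a single index j rather than by the vector index k.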
The second question is significantly more complicated. And now watch what happens. I can consider this system (1) and for a moment forget about the random parameter ω: I can regard system (1) as a control system. If I do so, then η(t) is my control, and I have to choose this control in some special way to achieve one purpose or another. I have restrictions on this control, and I have to prove that under these restrictions I can achieve the goal which I wish. This control may be low-dimensional — M may be just three — or full-dimensional, where M equals the dimension of my space. Again, I repeat that you can regard this as a finite-dimensional system. This is the setting of the problem in terms of optimal control. But now what I do is consider the controls in the right-hand side not as something which I am free to build, but as a collection of curves with a measure on this collection of curves, and then I try to understand what happens with the solutions of this equation. It turned out — and this was understood during the last twenty years, to a big extent during discussions of Armen Shirikyan, myself, and Andrei Agrachev — that these two settings are very much related: to understand the stochastic system, we should better understand the control system. Some years ago Agrachev and his collaborators proved for us, using tools of optimal control, theorems which allow us to move forward in this class of problems. So what I am going to talk about is the development of this program. Now look, we have another half an hour. Let us make a break — and a break means questions, questions, questions.
Sure, this is a very good time for me to repeat something for you. Questions — is the setting clear? Look, I have a perfect nonlinear partial differential equation. It is perfect in the sense that for any right-hand side I have a unique solution. Now I take the right-hand side depending on the random parameter, and my task is to understand what happens with the solution of this equation when time goes to infinity. If you do not like partial differential equations, regard this as an ordinary differential equation. I have here a right-hand side which may be regarded either as a control — of finite dimension or of full dimension — or as something random. Then I try to understand what happens when time goes to infinity. If it is a control, it is my freedom to choose the control in the right-hand side in such a way that, as time goes to infinity, I achieve some goal. But if the right-hand side depends on the random parameter, then it is not my free will to choose it: the randomness — God, if you like — makes the choice of the right-hand side for me. And still we will see that for a big class of equations like this we can understand what happens, as time goes to infinity, with the averaged characteristics of the solutions. Okay, questions? Yes, please. Yes. Yes, yes, of course.
Yes, of course it does, because of the independence. Okay, look: if there is no right-hand side here, so that I have an equation whose coefficients do not depend on time, then the flow maps of this equation form a semigroup in this space H. Now I have added a random right-hand side, and the whole message will be that in this case we also have a semigroup — a Markov semigroup. This is what I will very carefully introduce and explain later on. The technique which I am going to use takes nothing more from partial differential equations; that part is over. What I will use are some basic techniques of Markov processes, and here I will explain everything. I do assume that you know the independence of random variables, and — this is very important — I will systematically play with the fact that the distribution of a random variable is a measure. So: questions. The more questions you ask, the more you slow me down, and the better you will understand what I am talking about. I have no program to simply read to you all these pages which I have written; it is much more important that you understand. Questions? Yes, please. Yes. Yes. This is correct. Yes, it is diagonal — the noise is diagonal. You see, physicists write papers like this; in fact all of this was started by physicists, and it is a reasonable model. I will comment on this. You say this is rather new — so, I will keep imposing restrictions, and at some stage I will tell you that one restriction is not very physical.
It would be better to remove it; it is simply that the background of myself and my collaborators is not stochastic, so we probably do not have enough stochastic skills. We have to use the theory of Markov processes, and I will explain everything to you. But the setting is physical, right? Physicists accept it easily; we often discuss it with physicists. Questions, ladies and gentlemen? Excuse me, say it again. Yes. Oh yes, it is a very well known equation. You see what is strange with partial differential equations: there is no single book. I cannot tell you: go look at this book and you will find everything about this equation. You should Google, you should use references; the field is very vague. Maybe there is a book about this equation — maybe — but in any case it is a classical partial differential equation. Classical. And look: for partial differential equations, if I put zero in the right-hand side, the equation is as complicated as the equation with a force in the right-hand side, like in this theorem. For the fact that a solution exists or does not exist, partial differential equations do not care whether the right-hand side is zero or not. But you see, the theory of partial differential equations — all these books which we have now, all these extremely complicated papers written in very good journals — is mostly, usually, just one particular kind of theorem: that for one or another partial differential equation we have a unique solution. The question of what happens with this solution when time goes to infinity, what its properties are —
— is addressed in maybe five percent of all publications. Some people in this room are making very serious progress in this direction. So all these books are about one theorem: there exists a unique solution. But physicists do not care about this. We taught them: I am old enough to remember that, for example, in the fifties, when we started to explain to physicists and people from mechanics that the question whether partial differential equations have solutions or not is a really serious issue, they were laughing: all equations from physics do have unique solutions. But now they understand very well that if you take the Navier-Stokes equation in dimension three, then whether this equation has a unique solution or not is not only our mathematical worry; it is a very physical worry. But still, roughly speaking, it is like that: mathematicians prove theorems that a nonlinear equation has a unique solution, while physicists are mostly concerned with what happens with the solutions when time goes to infinity, usually also with some parameter going to zero. Can I ask myself something, for the benefit of other people? Yes. You are probably just a bit late, so let me repeat: what is my goal? If I say that I want to understand what happens with u(t) when time goes to infinity, it is too much. In a deterministic setting, only God knows; it is extremely complicated. Only for very easy equations can we answer these questions, or maybe for very special subclasses of solutions. But physicists do the following: they consider some physically important functional, like the energy, and they consider f(u(t)). So what do they really care about?
What they care about are the asymptotic features of some relevant quantities, relevant characteristics of the solutions — like energy, like impulse, like something else. Again, the equation itself is too complicated; there is no chance to answer this question directly. But now we start to profit from the fact that my solution depends on the random parameter. Physicists — in fact they love this statement very much — say that we want to know what happens with the mean value of this object, E f(u(t)). They knew this starting from Einstein, I guess — though for us it is impossible to read his papers; after a physical translation we understand that some of these questions and ideas were already there. So they say the following: very often there exists a unique measure μ* on this functional space such that the limit of E f(u(t)) as time goes to infinity is given by the integral of the functional f with respect to μ*. And I will tell you a very general result which really guarantees and explains when this is true. This result is based on one condition which is not innocent — some property of the corresponding control system. But we now have such good experts, also in this audience, that for a big class of equations we can check this condition. Okay, so this is my goal. Some more questions? Okay, you see: the nonlinear Schrodinger equation in any dimension can be written like this, you understand? Without the nonlinearity, this is the linear part of the nonlinear Schrodinger equation, and this is its nonlinearity; it can be written in the form (1). But the well-posedness — the restriction that there is only one solution — can be proved only in dimension one or two.
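The statement about the unique measure μ* described above can be written in symbols (a standard formulation, my transcription of the blackboard; f is an observable such as the energy):

```latex
\exists!\ \mu_* \in \mathcal{P}(H)\ \text{such that}\quad
\lim_{t \to \infty} \mathbf{E}\, f\bigl(u^{\omega}(t)\bigr)
  \;=\; \int_{H} f(u)\, \mu_*(du)
\qquad \text{for suitable functionals } f \text{ and any initial data } u_0 .
```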
In dimension one this is the Burgers equation; in dimension two, the Navier-Stokes system. And yes, in dimension two everything I will tell you applies to two-dimensional hydrodynamics. If some fortunate expert in partial differential equations proves that in dimension three the Navier-Stokes equation also has a unique solution in this sense, then our results immediately become applicable in dimension three, because the condition from optimal control which has to be checked has already been checked by Professor Agrachev, his followers and his collaborators. So only one small thing remains: when somebody gets the one million dollars for the resolution of that problem, we immediately start to apply our theorems in dimension three. More questions? Okay. First, I am here all around, so if you do not understand something, ask me a question. Also — Huilin, please stand up — this is Huilin Zhang, the second author of this paper, so you can also catch him and ask him questions. And pay attention, because this is a new and completely open field. There are millions of open problems here. Some of them are extremely complicated — maybe not extremely, but I cannot do them — and there are plenty of problems. So pay attention. Okay, so now let us slowly move forward. We now know the problem, we know the object; now we will start to put more restrictions and to specify the random force. So look. As I said, there is a very big difference between M finite and M infinite.
Therefore it is clear that the following subspace H_M of the space H is important for me: this is the subspace where the noise lives, the linear span of the vectors φ_1 through φ_M. It means that my noise always sits in this space, which may be finite-dimensional. This space will be important for me. Now, more restrictions. Since we are not experts in probability theory, the restrictions which we impose — there are four of them — are really not sophisticated and can be understood by non-experts; they are rather human restrictions. So: restrictions on the process η_1. As I said, the Fourier coefficients η_1, η_2, et cetera are independent identical copies of the same process η_1. So what kind of process is η_1? The most important restriction, which is the key to everything, is that it is bounded: |η_1^ω(t)| <= 1, uniformly for all ω and all t. You see, on the one hand, of course, all forces in nature are bounded. Nobody ever saw an unbounded force. Everything is bounded: speed is bounded, force is bounded, mass is bounded; nothing is unbounded in the universe. On the other hand, many of you have heard, and probably some of you know, that people often take for the right-hand side of nonlinear equations like this one white noise.
White noise is the time derivative of the Wiener process. This is something which is, firstly, not easy to understand for people who are not trained in probability, and secondly, something extremely unbounded. So our setting is different from what people often consider. But our advantage is that, firstly, it is physical, and secondly, with this condition we can create a rather complete theory of what happens here. So all Fourier coefficients of the force are bounded, and this, together with condition (b) — which is still in force — implies that the norm of the process satisfies ||η^ω(t)||_H^2 <= sum of the b_j^2 = B, which is finite. So we are talking about the case where the random force which we apply to our system is bounded: every Fourier coefficient of this force is bounded — the j-th one by b_j, since |η_j| <= 1 — so b_j simply measures the maximal possible size of the j-th Fourier coefficient. And this implies that the force is bounded, for all ω and all t. Now look: for l = 1, 2, et cetera, denote by G_l the segment of time from l - 1 to l. So what happens? Here is the time axis t, here is the space H. I have chopped the time axis into segments of length one. What am I going to do? I am going to understand first what happens on this segment, next what happens on that segment, next what happens on the following one. So I will move forward from zero to infinity with step one.
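The bound just derived can be checked numerically on a finite truncation. The sizes here are my own illustration (M = 50 and b_j = 1/j are invented; the sampled numbers stand for the values η_j(t) at one fixed time, each bounded by one): since the φ_j are orthonormal, ||η(t)||^2 = Σ b_j^2 η_j(t)^2 <= Σ b_j^2 = B.

```python
import random

# Finite truncation of eta(t) = sum_{j<=M} b_j * eta_j(t) * phi_j with |eta_j| <= 1.
# Because the phi_j are orthonormal, ||eta(t)||^2 = sum_j (b_j eta_j(t))^2 <= B.
M = 50
b = [1.0 / j for j in range(1, M + 1)]              # b_j -> 0, sum b_j^2 < infinity
B = sum(bj * bj for bj in b)

rng = random.Random(0)
etas = [rng.uniform(-1.0, 1.0) for _ in range(M)]   # values eta_j(t), each in [-1, 1]

norm_sq = sum((bj * ej) ** 2 for bj, ej in zip(b, etas))
assert norm_sq <= B                                 # uniform bound, for every omega, t
print(round(norm_sq, 3), "<=", round(B, 3))
```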
For many techniques it is an advantage to have discrete time rather than continuous time. Then consider the following processes: restrict the process η_1 — what I have in front of Fourier coefficient number one — to the time segment G_r, where r = 1, 2, 3, et cetera. This is segment G_1, this is G_2, this is G_3. What does it mean? I have a process; I consider the time segment number r; and I consider the values of my process when time runs over this segment. Now look: shifting in time by r - 1, I regard the restriction of η_1 to the segment G_r as a function defined for t from zero to one, that is, as a function of t belonging to the segment G_1. And from now on I will denote the segment G_1 simply by G. So much for the notation. And now the second restriction, P2: the restrictions of η_1 to the segments G_1, G_2, G_3, and so on, shifted in this way to G, are independent and identically distributed.
So I have some randomness from zero to one, then the same randomness from one to two, the same from two to three, et cetera. For example, if the right-hand side is just white noise, so that every Fourier coefficient which I have is a white noise, then the boundedness condition is badly violated, but this condition P2 holds. So the processes which we keep in mind, and of which we have good examples, are something like bounded analogues of white noise. And now, finally, condition number three: the process obtained by restricting η_1 to any segment G_r is non-degenerate, in a sense which I will explain later. These are restrictions of the following nature: that something which I will need later holds; I will explain later what I need. But what is important here is that there are good classes of processes which satisfy all these restrictions: random Haar series, and the process of red noise. See our papers — see my paper with Shirikyan, and then see my paper with Huilin Zhang, who is here. So there are good examples.
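Conditions P1 and P2 can be illustrated by a toy process of my own (not one of the examples from the papers just cited): piecewise constant in time, with one independent uniform amplitude per unit segment G_r. It is bounded by one, its restrictions to G_1, G_2, ..., shifted back to G = [0, 1], are i.i.d. copies of one process, and it is not continuous at the integer times — matching the warning that comes next.

```python
import random

rng = random.Random(1)
xi = [rng.uniform(-1.0, 1.0) for _ in range(100)]   # one amplitude per unit segment

def eta1(t):
    """Toy bounded noise: constant on each segment G_r = [r-1, r), jumps at integers."""
    return xi[int(t)]

# P1: uniform boundedness |eta1(t)| <= 1
assert all(abs(eta1(0.01 * k)) <= 1.0 for k in range(10000))

def restricted_shifted(r, s):
    """Restriction of eta1 to G_r, shifted to s in [0, 1): independent copy for each r."""
    return eta1((r - 1) + s)

# Each shifted restriction is driven by its own independent amplitude xi[r-1]:
assert restricted_shifted(3, 0.2) == restricted_shifted(3, 0.9) == xi[2]
print(restricted_shifted(1, 0.5), restricted_shifted(2, 0.5))
```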
An interesting warning: for all the examples we have been using so far, the process η_1(t), as a function of t, is not continuous. Well, physicists do not object against discontinuity. Here is the biggest problem, which is open now — the biggest and a very serious problem: to prove the main result, which I will present to you soon, without assumption P2. This is a rather strict assumption, because it forces me to examine my solution from zero to one, from one to two, from two to three: I have to move with step one. Physically, the other restrictions are quite natural. Physically, for example, if the process is bounded and what they call mixing, then this would be perfect — but we simply cannot do it. So if someone removes from the theory which I will present to you the condition that we have to move into the future with step one, this will be a serious improvement of what is going on. Okay. Now ten minutes are left, and I will use these ten minutes for some repetitions from the theory of probability. I will recall to you the basic notations from probability theory which I will use. After this, by the way — what will happen in the next lecture? It will be essentially the basics of the theory of Markov processes. So for the next page or two my title is: some notations from probability. They are innocent, nothing special.
You can find all of it in any book you wish. Personally, I like the book by Albert Shiryaev, "Probability"; there are many editions. Definitely everything that I will ever use you can find in this book. First, one object which is not, so to say, a classical object of probability theory: I will consider random objects in some special spaces of functions. What is very important is that these spaces of functions are not necessarily linear spaces; they may be only parts of linear spaces. All the spaces of functions I consider will be Polish spaces. What does it mean? The name, of course, honors the tremendous contribution of the Polish mathematical school of the thirties to this field of mathematics, and especially of Stefan Banach. So: a space is called Polish if it is a complete separable metric space. For example, any separable Hilbert space H is a Polish space; in particular any space R^n is a Polish space. The segment [0, 1] is a Polish space: complete, separable, metric. On the contrary, the open interval (0, 1) with its usual metric is not, because it is not complete: the point 1 is a limit point which does not belong to it. And if I consider a Hilbert space which is not separable — for example, the L^2 space of almost periodic functions —
— it is also not Polish. It is extremely convenient to have Polish spaces, because from the point of view of the tools which I am going to use, all Polish spaces are good: as long as we stay in Polish spaces, we are safe. Right, so then the basic notation which I will use in these Polish spaces. Let M be a Polish space. By B(M) I will denote the Borel sigma-algebra. So if I say that a certain set is measurable, it means that it is a Borel set. If you do not feel comfortable with sigma-algebras and Borel sets, you can simply keep in mind that every set is Borel. It is not true, but definitely all the sets which will appear during our constructions are Borel. Also, P(M) will denote the set of all Borel probability measures on the space M. Now something extremely beautiful happens, for everybody with a taste for analysis. Let us take any Polish spaces. There are clearly trivial examples: when a Polish space has only finitely many points — trivial and not interesting; and a second trivial example, when a Polish space is countable, for example the set of all integers, or the set of all natural numbers — Polish, but countable, and not interesting. So if I take a Polish space which is not finite and which is not countable, then the space of probability measures —
Look, if I take two Polish spaces M1 and M2 which are not countable and not finite, then the spaces of probability measures on these spaces are in fact isomorphic. So the space of measures on a Polish space is a metric space which is as universal and as important as a separable Hilbert space, right? We know that, for example, the functional space L2 and the Sobolev space Hm are of course different, but both are separable Hilbert spaces; from some point of view, this is the same space, right? Quite similarly, as soon as we stay in Polish spaces, all the spaces of measures are the same. This is a remarkable property. It took some thirty to fifty years to prove this result; it was done during the thirties, forties, and fifties, with an essential Russian contribution, the Luzin school, and I think Rokhlin proved the final theorem. The proof is not at all easy, but it is a really remarkable result. Therefore, after this, you should not be surprised that the space of measures is extremely rich in properties. There are some properties of the space of measures which are much better than what we are used to in the usual functional spaces we work with. You see, paradoxically, from some point of view the space of measures on a Hilbert space is easier than the Hilbert space itself, because it possesses some extra, extremely beautiful properties, right? Therefore, in this stochastic setting of mine, I will talk about evolution of measures. And in some sense, evolution of measures is easier than evolution of the actual solution in a Hilbert space, right? So it gives another point of view. My time is almost up; maybe just one more notion. This is extremely important, so I will better repeat it. It is innocent, but it is also very deep.
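The isomorphism result alluded to here is, as far as I can tell, the Borel isomorphism theorem; the statement below is my hedged summary, not the lecturer's formulation:

```latex
% Borel isomorphism theorem (often attributed to Kuratowski):
% any two uncountable Polish spaces M_1, M_2 are isomorphic
% as measurable spaces,
\[
  (M_1, \mathcal{B}(M_1)) \;\cong\; (M_2, \mathcal{B}(M_2)),
\]
% and consequently the spaces of Borel probability measures
% \mathcal{P}(M_1) and \mathcal{P}(M_2) can be identified.
```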
First, you remember that of course I have this probability space (Omega, F, P), right? Let us take also any Polish space M, and consider any measurable map from Omega to M. Look, here I have the Borel sigma-algebra, here I have the sigma-algebra F, right? So consider any measurable map xi from Omega to M. Such maps xi are called random variables. So a random variable is nothing but a measurable mapping defined on my basic probability space, with values in some Polish space equipped with its Borel sigma-algebra. Right, and finally, what is extremely important: this is the link between random variables and measures. Random variables and measures are extremely closely connected. Now look: the law of a random variable xi is a measure; the notation is D(xi), which is a rather common notation. This D(xi) is a Borel measure on the space M, defined in the following way: the measure of any set Q equals the probability that xi belongs to Q, that is, D(xi)(Q) = P(xi in Q), where Q is any Borel set, right? This is extremely important. Therefore we can think about measures in terms of random variables, and we can think about random variables in terms of measures, right? This is extremely important. I should give you another one or two notations, but my time is up.
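To make the definition D(xi)(Q) = P(xi in Q) concrete, here is a minimal numerical sketch; it is my own illustration, not from the lecture, and all names in it are illustrative. We approximate the law of a uniform random variable on the Polish space [0, 1] by Monte Carlo sampling.

```python
import random

random.seed(0)

# Sketch: the law D(xi) of a random variable xi assigns to a Borel set Q
# the probability P(xi in Q).  Here xi is uniform on [0, 1], and we
# estimate D(xi)(Q) for Q = [0, 1/2] from independent samples of xi.

def empirical_law(samples, indicator):
    """Estimate D(xi)(Q) = P(xi in Q) as the fraction of samples in Q."""
    return sum(1 for x in samples if indicator(x)) / len(samples)

n = 100_000
samples = [random.random() for _ in range(n)]  # xi ~ Uniform([0, 1])

in_Q = lambda x: x <= 0.5  # indicator of the Borel set Q = [0, 1/2]
estimate = empirical_law(samples, in_Q)
print(estimate)  # close to the true value D(xi)(Q) = 0.5
```

Here the indicator function plays the role of the Borel set Q; by the law of large numbers the empirical frequency converges to D(xi)(Q), so the same object can be read either as a measure or through samples of the random variable.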
I will better repeat this and move forward with the notation in my second lecture, in the afternoon. And then, after some more notation, there will be preliminaries on Markov processes, also naive, without anything technical. The point is that an equation like this defines a Markov process, which is a stochastic object, and we can start to use stochastic tools to study this stochastic object, and this finally allows us to achieve the progress. Okay, that's all. So this is the end of the lecture.