Today, now that we have the fundamental tools and have formally defined the notion of a probability measure, I will explain how we apply this language, this formalism, and these techniques to the study of dynamical systems. Recall that a dynamical system is a map on a space; it will be useful for us to think of X as a compact metric space, so that X naturally carries the topology given by the metric and a Borel sigma-algebra, which is the basis on which we construct measures on this space. Then we have a map f: X → X, which will generally be at least continuous, though not always. One of the key tools we used to study the dynamics of such a map from a topological point of view is the omega limit set of a point: ω(x) = {y ∈ X : f^{n_i}(x) → y for some subsequence n_i → ∞}. So we have the space X, a point x, and its orbit, and ω(x) captures the asymptotic behaviour of the orbit. If x is a fixed or periodic point, the omega limit set is just the orbit itself; otherwise it records what the orbit accumulates on. The orbit may converge to a fixed point, but it may also converge to nothing at all and be dense in the space, in which case the omega limit set is the whole space. Now we want to address the question of the frequency distribution of the orbit in space. The omega limit set does not tell you where the orbit spends most of its time: the orbit could be dense, yet spend a long stretch in one region, visit another region only briefly, come back, and so on. There is a time distribution of the orbit, which is a more detailed description of the dynamics, and we will use the language of probability theory to describe it.
In some sense we are asking: if you look at the orbit at a random time, what is the probability that it is in this or that region of the space? Let me explain how to formalize this idea. First let us define a kind of probabilistic omega limit set, whose elements are measures. Recall that M denotes the space of all Borel probability measures on X. We define the probabilistic omega limit set of x as the set of μ ∈ M such that (1/n_j) Σ_{i=0}^{n_j−1} δ_{f^i(x)} → μ for some subsequence n_j → ∞. What is the object inside the sum? It is the Dirac delta measure at the point f^i(x), which is a probability measure; we sum these Dirac deltas and take the average. So you look at the Dirac delta at x, at f(x), at f²(x), f³(x), and so on. For example, for n = 3 we get (1/3)(δ_x + δ_{f(x)} + δ_{f²(x)}). What is the measure of a set with respect to this measure? If the set contains exactly one of the three orbit points, its measure is 1/3; if it contains two of them, 2/3; if it contains all three, 1. It is a probability measure because each Dirac delta gives measure 1 to the whole space, so the three together give measure 3, and dividing by 3 we get measure 1.
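This is not part of the lecture, but the averaged Dirac measure is easy to sketch numerically: the weight it assigns to a set is simply the fraction of the first n orbit points lying in that set. The names `orbit` and `empirical_weight`, and the example map and starting point, are my own illustrative choices.

```python
def orbit(f, x, n):
    """First n points x, f(x), ..., f^{n-1}(x) of the orbit."""
    pts = []
    for _ in range(n):
        pts.append(x)
        x = f(x)
    return pts

def empirical_weight(f, x, n, indicator):
    """Weight that (1/n) * sum of Dirac deltas at f^i(x) gives to the
    set {y : indicator(y)}: the fraction of orbit points in the set."""
    pts = orbit(f, x, n)
    return sum(1 for p in pts if indicator(p)) / n

# Toy example matching the lecture's n = 3 computation: each orbit
# point carries mass 1/3.
f = lambda x: (2 * x) % 1.0   # the doubling map, used later on
x0 = 0.3
print(empirical_weight(f, x0, 3, lambda y: y < 0.5))   # fraction of the
                                                       # 3 points in [0, 1/2)
```

Measuring the whole space (`indicator` always true) gives 1, confirming that the average is a probability measure.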
So this is a simple construction: a linear combination of Dirac measures, which I mentioned the other day; in this case the specific points you choose are the points of an orbit. That is how we start to introduce some dynamics into these measures. When you take n large, you are averaging many of these Dirac deltas, so the measure starts to concentrate on the regions containing many points of the orbit: if the orbit spends a lot of time in some region, this average measure gives a lot of weight to that region. I will give some examples shortly. As n tends to infinity, the question is whether this probability measure (always remember it is a probability measure: a sum of measures, but averaged) converges to a particular probability measure, and we will discuss what this convergence means from a dynamical point of view. The convergence in the definition is intended in the weak-star topology that I defined the other day. Let me remind you what this means: μ_{n_j} → μ in the weak-star topology if ∫φ dμ_{n_j} → ∫φ dμ for all continuous φ. To put it in a slightly less formal shape, notice what it means to integrate a function against our averaged measure: it is a discrete measure, a measure that lives on a finite set of points.
Remember, as in the example last time, that when you integrate a function against a measure concentrated on a single point, the integral is just the value of the function at that point. So here you can pull the sum out: ∫φ d((1/n_j) Σ_{i=0}^{n_j−1} δ_{f^i(x)}) = (1/n_j) Σ_{i=0}^{n_j−1} ∫φ dδ_{f^i(x)} = (1/n_j) Σ_{i=0}^{n_j−1} φ(f^i(x)). This will be very useful: the convergence in the definition says exactly that (1/n_j) Σ_{i=0}^{n_j−1} φ(f^i(x)) → ∫φ dμ for all continuous φ. This is how we will actually work with the notion, because it is something that can be checked in various situations, and it is exactly equivalent to saying that the averages of the Dirac deltas converge in the weak-star topology. So let us look at some examples of this probabilistic omega limit set. An obvious one: suppose p is an attracting fixed point, so we have our space X, a point p, and f^n(x) → p as n → ∞ for all x ∈ X, or even just for a single orbit converging to p. Then which measures are in the probabilistic omega limit set of x? The Dirac delta at p, right?
Indeed the probabilistic omega limit set is exactly {δ_p}, and you can easily check this. Take a continuous function φ, evaluate it along the points of the orbit, and average these values. The orbit is converging to p, so after a long time there will be many, many orbit points in a very small ε-neighbourhood of p, and only some small finite number of points outside it. Since φ is continuous, its values at points in this neighbourhood are very close to φ(p), so the average is very close to φ(p); and if the limit measure μ is the Dirac delta at p, then ∫φ dμ is exactly φ(p). Written out: (1/n) Σ_{i=0}^{n−1} φ(f^i(x)) → φ(p) = ∫φ dδ_p, because φ is continuous and f^n(x) → p. So this is a fairly simple situation. [A student asks whether the probabilistic omega limit set is always non-empty.] No, it is not necessarily non-empty; this will be one of the first questions we ask. As you remember, for the topological omega limit set one of the first propositions we proved was that it is non-empty in certain cases, when the space is compact and the map is continuous. Here too we will discuss the conditions that guarantee non-emptiness, and many other properties; in some sense this whole course will be devoted to studying, in the various situations, the properties of this set.
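The convergence of the Birkhoff averages to φ(p) for an attracting fixed point can be checked numerically; here is a minimal sketch (my own, not the lecturer's), using the contraction f(x) = x/2 with fixed point p = 0 and φ = cos as the continuous test function.

```python
import math

def birkhoff_average(f, phi, x, n):
    """(1/n) * sum_{i=0}^{n-1} phi(f^i(x))."""
    total = 0.0
    for _ in range(n):
        total += phi(x)
        x = f(x)
    return total / n

f = lambda x: x / 2.0   # contraction: attracting fixed point at p = 0
phi = math.cos          # any continuous test function

for n in (10, 100, 1000):
    print(n, birkhoff_average(f, phi, 1.0, n))
# The averages approach phi(0) = 1 as n grows, since all but finitely
# many orbit points lie in a tiny neighbourhood of 0.
```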
question also because what we will be very interested in is whether the set is non-empty and whether it contains a single measure or many measures. This is where the notion starts diverging from the topological one. I will now discuss an example in which you might have several measures, and also explain why having several measures is in some sense a problem, not a nice situation; in fact, what we will be looking for in the course are situations in which the probabilistic omega limit set consists of a single measure. Let me explain why with the following example, at the opposite extreme. Remember that from the topological point of view we had contractions at one extreme, with the behaviour above, and at the other extreme expansions: for example f: [0,1] → [0,1] given by x ↦ 2x mod 1, the expanding map that we studied in some detail. Its graph consists of two branches of slope 2, one over [0, 1/2) and one over [1/2, 1). We made a fairly detailed study of the topological structure of these maps, and we had a symbolic coding with two intervals I_0 and I_1: to a point x you associate an infinite sequence of 0s and 1s according to which of I_0, I_1 its iterates visit. Most importantly, we showed that essentially for every possible sequence of 0s and 1s there is a point that has it as its coding. In particular, there are dense orbits: there exists x such that ω(x) = [0,1]. Remember that? What was the symbolic structure of such points? [A student: they had "all combinatorics".] What do you mean by all combinatorics?
Right: if ā is the sequence associated to x, then ā contains all possible finite blocks of 0s and 1s; that is the feature that guarantees that the orbit of x is dense in [0,1]. Besides dense orbits there were periodic orbits and many other orbit structures. Now that we have this new way of looking at things, we want a probabilistic description of a dense orbit: where does it spend most of its time? So suppose ω(x) = [0,1]; what about the probabilistic omega limit set? [A student suggests: it contains δ_y for every y in [0,1].] Is that true? (On the point that f is not continuous: it might as well be, since you can define the map on the unit circle, where it becomes continuous; it is just a different way of looking at it, and in this particular case it does not play a big role.) That suggestion is a good starting point: what would it mean for every Dirac delta δ_y to be in the probabilistic omega limit set? We are trying to develop an intuition and a way of thinking about this limit. The suggestion is true in the topological sense: ω(x) = [0,1] means the orbit comes arbitrarily close to every point. But is that what the probabilistic omega limit set records? In other words, is that where the orbit is spending its time? Is it spending about the same amount of time everywhere, or is it concentrated, spending more time in some places and less in others? Is it distributed uniformly? Why would it be?
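The symbolic coding recalled above is easy to compute; this sketch (my own, with exact rational arithmetic so that 2x mod 1 does not collapse under floating point) records 0 when the current iterate lies in I_0 = [0, 1/2) and 1 when it lies in I_1 = [1/2, 1).

```python
from fractions import Fraction

def itinerary(x, n):
    """First n symbols of the coding of x under the doubling map."""
    symbols = []
    for _ in range(n):
        symbols.append(0 if x < Fraction(1, 2) else 1)
        x = (2 * x) % 1
    return symbols

# A periodic point: x = 1/3 has orbit 1/3 -> 2/3 -> 1/3 -> ...,
# so its coding alternates 0, 1, 0, 1, ...
print(itinerary(Fraction(1, 3), 6))
```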
Okay, so remember that μ is in the probabilistic omega limit set of x if (1/n_j) Σ_{i=0}^{n_j−1} φ(f^i(x)) → ∫φ dμ for all continuous φ: [0,1] → ℝ, for some sequence n_j → ∞. What is this telling us? The key object is ∫φ dμ; I emphasized last time that it is an interplay between φ and μ: a number that depends both on the function and on the measure. If you just took the usual Riemann integral, you would be looking at the area under the graph of φ; here you are also looking at the graph, but weighted by the measure. If the measure is a Dirac delta at a point, the integral is just the value of φ at that point; if the measure is better distributed, if it is Lebesgue measure, then it really is the same as the Riemann integral, the area under the graph. And you need the convergence for every continuous function; that is why this statement says something about convergence of measures. So what would it mean to put μ = δ_y? It would mean that (1/n_j) Σ_{i=0}^{n_j−1} φ(f^i(x)) converges, along some subsequence, to ∫φ dδ_y = φ(y): you fix y, look at the Dirac delta at y, and ask whether the average value of φ along the orbit of x converges to φ(y). That is the question.
Is it converging to φ(y)? The best way to see what happens is to give an example. Remember the observation that once an infinite sequence contains all possible finite blocks, it does not have to contain only those blocks: in between any blocks you can insert any digits you like, and the associated orbit will still be dense, because the sequence still contains all finite blocks. So let ā be the sequence associated to x, and now let me do something: let me construct a new sequence â by inserting many long finite blocks of zeros. My initial sequence is a_0, a_1, a_2, …, containing all finite blocks. After one block I insert a run of zeros; after another, a longer run; and as I keep going, between a_n and a_{n+1}, say, I insert a million zeros. I keep doing this every so often; after every finite block I can add a million zeros, ten million zeros, more and more, as many as I want. The sequence I construct corresponds to some point; I do not know which point, but it corresponds to some point, and I may as well assume my original point x was constructed like this. So where is this point spending a lot of time? Every time I iterate, I shift the sequence one place to the left. After a few iterates, the sequence of the current iterate will start with many zeros. What does that mean about where the point is? It is very close to zero, right?
Because, remembering the partition, if the sequence starts with a lot of zeros it means the point is very close to zero: it stays in I_0 for a long time. And it will indeed stay in I_0 for a long time: as you iterate, there are still many zeros at the front, the number of leading zeros slowly decreases, and then at some point the next symbol, a zero or a one or whatever, may let the orbit jump away. But a few more iterates and suddenly the current sequence starts with 20 zeros: the point is even closer to zero, and moreover it spends about 20 iterates very close to zero. Then it jumps away again, and after just one more finite block I can put a million zeros, so the orbit comes very close to zero and spends a million iterates near zero. I can organize this so that the proportion of time spent near zero gets larger and larger: for example, if the sequence has length 1000 so far, I follow it with a million zeros; after the next finite block, a hundred million zeros. So I can make sure that up to time n the proportion of zeros is much, much bigger than the proportion of time spent outside this region. Even though the orbit is dense, the fraction of time spent away from a very small neighbourhood of zero converges to zero: the longer you wait, the greater the proportion of time spent very near zero. Note that I am choosing the point very specifically; I am not saying this is the general behaviour. I am saying there exists a point whose orbit is dense but whose coding has so many long blocks of zeros that you see it in the time averages, and the time average here is exactly the average value of φ along the orbit.
And this orbit is spending, say, 99.99% of its time in a very, very small neighbourhood of zero, so the average value of φ will be very close to φ(0). The longer you wait, the closer it gets; the average actually converges to φ(0), which is ∫φ dδ_0, the integral of φ with respect to the Dirac delta at zero. So by adding enough zeros I can construct a point x for which ω(x) is the whole interval but the probabilistic omega limit set is just {δ_0}. Proving this is very easy and I will leave it as an exercise; what is difficult is coming to terms with the fact. The proof is easy because all you need to show is that the averages converge to φ(0), and φ(0) is exactly ∫φ dδ_0. How do you arrange for this convergence along every sequence (you do not even need a subsequence)? The average is the average value of φ along the orbit, so as long as the orbit spends a larger and larger proportion of time in an arbitrarily small neighbourhood of zero, the average converges to φ(0). To prove it formally, take ε, take an ε-neighbourhood of zero, and show that after a long enough time the average is within ε of φ(0); to show this, you just use the fact that you have introduced many more zeros than other symbols. For instance, among the first million symbols, in other words the first million iterations, 99% are zeros; among the first billion, you arrange that 99.9% are zeros.
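The construction can be sketched numerically. This is my own toy version, not from the lecture: the "scaffold" below only enumerates blocks up to length 2 (the real construction interleaves all finite blocks), and the iterates are read directly off the shifted symbol sequence, approximating f^i(x) by the point whose binary expansion is the shifted sequence.

```python
def padded_sequence(blocks, zero_lengths):
    """Interleave the given finite blocks with runs of zeros."""
    seq = []
    for block, z in zip(blocks, zero_lengths):
        seq.extend(block)
        seq.extend([0] * z)
    return seq

def value(seq, i, digits=30):
    """Approximate f^i(x): the point whose binary expansion is the
    sequence shifted i places, truncated to `digits` binary digits."""
    return sum(b / 2.0 ** (k + 1) for k, b in enumerate(seq[i:i + digits]))

blocks = [[0], [1], [0, 0], [0, 1], [1, 0], [1, 1]]  # toy scaffold only
zeros = [10, 100, 1000, 5000, 10000, 20000]          # ever-longer zero runs
seq = padded_sequence(blocks, zeros)

n = len(seq) - 60
avg = sum(value(seq, i) for i in range(n)) / n  # Birkhoff average of phi(y) = y
print(avg)  # tiny: the orbit spends almost all of its time near 0
```

With φ(y) = y we have ∫φ dδ_0 = 0, so a tiny average is exactly the predicted behaviour; longer zero runs push it smaller still.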
So you look at the first n symbols and you want a proportion 1 − ε of them to be zeros; then you know the orbit has spent a very large proportion of its time near zero. This depends on having constructed the sequence in a specific way, and I emphasize that: I have designed the sequence with this property. But I am free to design the sequence however I want, because we showed in the previous course that for any sequence you choose there exists a point with that coding, so I can work entirely on the symbolic level. Since this might need a couple of minutes to digest, let's take a short break now and then come back. Okay. So how is this connected to the notion of the probability of where the point is at a given moment? The convergence to δ_0, that is, the averages converging to φ(0) for all continuous φ, means, as I said before, that the orbit of x is on average spending almost all of its time very close to zero, a larger and larger proportion of it. If you look at the first million iterations of x and ask what fraction of them are spent near zero, you get a proportion close to one, and the larger n is, the greater the proportion of the first n iterates spent close to zero. In some sense, the probability of the orbit being in a small neighbourhood of zero converges to one as n → ∞, even though the orbit itself continues to be dense: infinitely often you will find times at which it is far from zero, but the longer you wait, the less likely it is that at a randomly picked time it will be far from zero.
The longer you wait, the more likely it is that at a randomly picked time this particular orbit is very close to zero. This is the split between the topological description of the orbit, which only sees that some subsequence converges to each point of the interval because the orbit is dense, and the probabilistic description, which looks at the distribution of the orbit with respect to time, not only with respect to space. Now I can have more fun and construct even more interesting examples. Suppose I take the same kind of point, but after every block of zeros I put a block of ones. How many ones? I can do whatever I want, so let me initially put exactly as many ones as zeros: if there are 10 zeros, I follow them with 10 ones; 10 more zeros, 10 more ones; and so on, a million zeros followed by a million ones. What happens now: where does the orbit spend a lot of time, around zero or around one? That's right: if I look at the first million iterates, say, 99% are either zeros or ones, with about half of them zeros and half ones. So what are the averages converging to? One half of (the Dirac measure at zero plus the Dirac measure at one). There are actually several possibilities, depending on exactly how I construct the sequence, and I will not go through them all, because this is really just a motivating example. But I can construct sequences, hence points x, realizing various possibilities. For example, I could have pΩ(x) = {(1/2)(δ_0 + δ_1)}.
Or I could have, for example, pΩ(x) = {δ_0, δ_1}, a set of two measures. Today I do not want to go into the details of the sequences, because it is something you can do yourself, and if I do it now you might get a bit lost. What I want to spend a few minutes on is the difference between these two situations, which is crucial. What does each say about the orbit of x, intuitively, in terms of where it spends its time? The first means that my averages converge to the single measure (1/2)(δ_0 + δ_1). Let me write down what it says: (1/n) Σ_{i=0}^{n−1} φ(f^i(x)) → ∫φ d[(1/2)(δ_0 + δ_1)] = (1/2)(φ(0) + φ(1)). Now what does the second mean? Here I have two measures in my probabilistic omega limit set, not one. What does the sequence of averages converge to in that case? Exactly: it is not convergent. You need to take one subsequence along which it converges to φ(0), and a different subsequence along which it converges to φ(1). So what is the difference between these two situations? [There is a brief exchange with the audience about the meaning of φ(0) and φ(1) here.] What I am saying is that φ(0) is simply an integral:
it is just an integral: φ(0) is by definition ∫φ dδ_0, and φ(1) = ∫φ dδ_1. So what does the convergence in the first situation mean? I take n very large, and the average value equals (1/2)(φ(0) + φ(1)). Why, and what does that mean in terms of the dynamics? The average measures the values of φ along the orbit of x. For the average to be the mean of the values φ(0) and φ(1), the orbit must be spending a certain amount of time very close to zero and a certain amount very close to one: about half the time near each, and for n very, very large it keeps spending about half its time near zero and half near one. It is just like tossing a coin. If I toss a coin once, maybe I get heads; twice, maybe heads again; but if I toss it a thousand times, I will probably get close to 50 percent heads and tails, and after a million tosses, probably even closer. Heads and tails are the zeros and ones: I get a sequence of zeros and ones in which the frequency of zeros is roughly one half, and the longer I wait, the closer it tends to half and half. The second situation is not so easy to interpret. The fact that there is a subsequence converging to ∫φ dδ_0 means that along some subsequence of times n_j, if you average from 0 up to n_j, you look at everything up to time n_j.
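Before going on, here is a quick numerical check of the half-and-half picture (my own sketch, with block sizes of my own choosing): pad the sequence with equal runs of zeros and ones, read the iterates off the shifted binary expansion, and average φ(y) = y; half the time the value is near 0 and half the time near 1, so the average tends to 1/2.

```python
def run_sequence(m, repeats):
    """`repeats` copies of (m zeros followed by m ones)."""
    return ([0] * m + [1] * m) * repeats

def shifted_value(seq, i, digits=30):
    """Point whose binary expansion is the sequence shifted i places."""
    return sum(b / 2.0 ** (k + 1) for k, b in enumerate(seq[i:i + digits]))

seq = run_sequence(1000, 5)
n = len(seq) - 60
avg = sum(shifted_value(seq, i) for i in range(n)) / n
print(avg)  # close to 1/2 = (phi(0) + phi(1)) / 2 for phi(y) = y
```

If instead each successive block is made vastly longer than everything before it, the partial averages no longer settle down: they swing toward 0 after a long zero run and toward 1 after a long one run, which is exactly the two-measure situation discussed next.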
If you look at that average over the whole orbit segment and it is very close to φ(0), it means your orbit has spent, say, 99 percent of its time up to n_j near zero. So if you choose the n_j correctly and look at where you have been up to time n_j, you have spent most of your time near zero, and there are infinitely many such n_j, with the proportion growing. But if you choose your times in a different way, picking a different sequence n_k and averaging from 0 up to n_k, then it looks like you spent all of your time near one. Let me try to give a more geometric example of this situation. [A student points out that, in the first situation, convergence of the whole sequence of averages needs an argument, since the definition only gives subsequences.] You are right; as it happens it is true, because the space of probability measures is compact in the weak-star topology, so every sequence of averages has a convergent subsequence, and if the probabilistic omega limit set contains only the one measure, all subsequential limits agree and the whole sequence must converge. But to be very precise, that step is needed; thank you. Okay, so let me give a more geometric example of this phenomenon, which I think is a nice example, in which a similar thing happens. Suppose we have a map F: ℝ² → ℝ² with two fixed points. It can also be a continuous-time dynamical system, a flow; we have not spent much time with flows, but the example works in either continuous or discrete time.
Suppose the two fixed points are hyperbolic; you can construct the map so that near each of them it is a hyperbolic linear map. At one point you have the stable eigenspace and the unstable eigenspace, and likewise at the other, and the trajectories leaving along the unstable directions lie on curves that converge to the other fixed point: the stable and unstable directions join up according to the picture. Then suppose we have a point x whose orbit lies on a spiral that winds around and gets closer and closer to this loop. What is the omega limit set of x? (In the continuous-time case it is in fact a little easier to imagine this as the phase portrait of a differential equation, with continuous-time curves, but the idea is exactly the same.) It is the whole thing: the union of the two fixed points together with the two curves joining them, ω(x) = {p_0} ∪ {p_1} ∪ (the heteroclinic orbits between p_0 and p_1). Now, where does this orbit spend most of its time? Near a fixed point it slows down, because it is a fixed point: when you come close to a fixed point, you do not move very fast. If you have a flow with a fixed point and an orbit that passes nearby, it takes a certain amount of time to pass; and if you now take a point that passes even closer to the fixed point, will it take more or less time?
It will take more time, right? Because this is a fixed point, and so when the orbit comes very close to the fixed point it goes very slowly, and so it takes more time. The closer you come to the fixed point, the more time it takes to move from here to here, okay? So we use this to analyze the dynamics here. You can take a little neighborhood of P0 here and a little neighborhood of P1 here, and then what is the time it takes to go from the edge of one neighborhood to the edge of the other? This will take a certain amount of time, but it will be a fixed amount of time; it does not depend on how close you pass to the fixed points, because this arc of the orbit takes some fixed, finite amount of time to move from here to here. On the other hand, once the orbit comes inside a neighborhood, it will stay inside for a certain amount of time, and in fact, by changing the eigenvalues here and here, you can control the amount of time that it spends inside. I will not do all the calculations, but you can control it in such a way as to do something very interesting. You can arrange that the time it spends here, because it goes very slowly, is of the order of magnitude of the whole time that it took the point to get from the start all the way to here. So if you stop at some time n₁ here, with starting point x₀, and you measure the amount of time spent inside this neighborhood compared to the full amount of time from time 0 to n₁, then, for example, half of that time is spent just in this last little bit. But then you can also arrange things so that the distance at which the orbit comes out is much smaller than the distance at which it came in.
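The slowing down near the fixed point can be quantified in a linear model. This is a hedged sketch under our own assumption, consistent with the hyperbolic linear maps in the lecture: inside the neighborhood the unstable coordinate grows like h·e^(λt), so an orbit entering at distance h from the stable manifold exits a box of size ε after time (1/λ)·log(ε/h). The dwell time therefore grows without bound as h → 0, while the transit between neighborhoods takes a fixed time.

```python
import math

# Linear model near a saddle: the unstable coordinate is u(t) = h * exp(lam * t),
# so the orbit leaves the box {|u| <= eps} when h * exp(lam * t) = eps.
def passage_time(h, lam=1.0, eps=0.1):
    """Time spent in the eps-neighborhood by an orbit entering at distance h
    from the stable manifold (assumes 0 < h < eps)."""
    return math.log(eps / h) / lam

# Each factor of 10 closer to the stable manifold adds the same fixed increment
# log(10)/lam, so the dwell time can be made as large as we like:
for h in [1e-2, 1e-4, 1e-8, 1e-16]:
    print(h, round(passage_time(h), 3))
```

This is why tuning the eigenvalues (which control λ and how much closer the orbit re-enters each time) lets you prescribe the dwell times almost at will.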
So now, when you enter the next neighborhood here, you are coming in much closer to this line, okay? So then you spend a lot of time in here, because you go very, very slowly. You can spend so much time here that the time you spend in this neighborhood is ten times as much as the time you spent in the previous one, and so five times as much as the whole time you had spent from the beginning up to here. So if you say, okay, up to this moment here, some time n₂, where have I spent most of my time? I have spent most of my time here, because out of 100 units of time I spent 90 units here and only 10 to get here. So I say, oh, I spend most of my time here, okay? But now I come out even closer than I was before. After some small finite number of time steps I reach here, but now I am going to pass very close to the fixed point. So close that the amount of time I spend here is ten times bigger than all the time I spent until I got here, okay? So at this moment I stop and I measure my averages, and I say, well, I have spent 90%, or even 99% if you construct it the right way, of my time in this small neighborhood. So it looks to me like the time averages are converging to δ_{p₀}, because I spent 99% of my time near p0, okay? But then I come out, okay? You start to see the pattern. Now I am very close here, so I go really, really slowly, so slowly that the time I spend now to go from here to here is 99 million times more than all the time that it took me to get here, okay? So if I take my averages here, I say, oh, I am spending 99.9% of my time near p1. So the proportion of time spent near p0 or near p1 is oscillating, okay? This is the crucial point. So depending on when I measure, it looks like I have spent most of my time near p0 or most of my time near p1.
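The bookkeeping in this construction can be made concrete. Here is a toy sketch, our own model rather than a computation from the actual flow: suppose each successive stay near a fixed point lasts ten times the total time elapsed so far, alternating between p0 and p1, with the fixed transit times ignored. Then whenever you stop just after leaving a neighborhood, over 90% of the elapsed time was spent at that neighborhood.

```python
# Toy model of the alternating dwell times: each stay is 10x the whole
# history, so the empirical fraction of time swings between p0 and p1.
time_near = {"p0": 0.0, "p1": 0.0}
total, point = 1.0, "p0"          # total = 1 accounts for the initial transit
history = []
for k in range(8):
    stay = 10 * total             # this stay dwarfs all previous time
    time_near[point] += stay
    total += stay
    history.append((point, time_near[point] / total))
    point = "p1" if point == "p0" else "p0"

for point, frac in history:
    print(f"stopping just after leaving {point}: fraction of time near {point} = {frac:.3f}")
```

Stopping along one subsequence of times the fraction near p0 exceeds 0.9; along the other subsequence the fraction near p1 does, so the time averages have (at least) two different limit measures.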
Which of the two cases does this situation correspond to, number one or number two? Number two. So in this situation the time averages are not converging, because it depends on where you stop. The subsequence n_j along which the averages converge to δ_{p₀} means that I always stop just after I have been inside this neighborhood: if I always wait until I come out of this neighborhood, then it will always look like most of the time, 99% and then 99.9%, was spent inside it. But if instead I take the subsequence of times at which I come out of the other neighborhood, then it looks like I am always spending most of my time there. And so both of these measures belong to the probabilistic omega limit set. Now, what would the other situation correspond to in this picture? Because here also I can change things a little, depending on the eigenvalues. This picture was constructed with the eigenvalues designed so that when you come out you are much closer, so that when you come here you spend much more time, an order of magnitude more, than you spent in all the time before. But is it possible to construct this kind of situation where you spend half the time here and half the time here? Well, you need to do the calculation, but it is possible, okay? And what does this mean exactly? It means that, roughly, there is not much difference: when you come out of here, you spend a certain amount of time, and then when you come back in here, the amount of time you spend is of the same order of magnitude as before. It might be a little bit more, and the next time a little bit more again, but the order of magnitude stays the same.
So in fact the amount of time you spend here is always about half of the total time so far, and then when you come back here, the time you spend is again about half of all the time that has gone before, and so on. In that case you get a situation where the averages are converging, and they converge to a linear combination of the two Dirac measures: the frequency, the average amount of time, converges to the fact that you are spending half your time here and half your time here. Whereas the other situation does not mean that you spend half the time here and half the time there; it says that depending on where you stop, it looks like you are spending all your time here, and at other times it looks like you are spending all your time there. So it is a little bit abstract, this explanation, but I am trying to emphasize why we are interested in the probabilistic omega limit set containing only one measure: because then the statistics of the orbit are well behaved, they are not crazy. In terms of tossing a coin, the bad situation means that if I toss the coin 100 times, maybe I see 90% heads, but if I toss it a thousand times, I see 90% tails, and if I toss it a million times, I see 90% heads again. So the proportion of heads or tails that I see would depend on the scale at which you look at your system. This is in some sense the bad situation; this is what happens when the probabilistic omega limit set contains several measures. So all of this really is to set out the main question. Question: when does the probabilistic omega limit set consist of a single measure? And we would like to study some properties of this measure. We can formulate the problem as follows. Let X be our space, let M again be the space of Borel probability measures, and for μ in M, let B(μ) be the set of points x in X such that (1/n) Σ_{i=0}^{n-1} δ_{f^i(x)} converges to μ. So we are going to study the problem in this way.
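For reference, the two definitions used so far can be written out in one place. This is our reconstruction in the notation of the course, with convergence taken in the weak* sense on the space M of Borel probability measures:

```latex
% Probabilistic omega limit set of a point x:
\[
  \omega_{\mathrm{prob}}(x)
  = \Bigl\{\, \mu \in \mathcal{M} \;:\;
      \frac{1}{n_j}\sum_{i=0}^{n_j-1}\delta_{f^i(x)}
      \xrightarrow{\ \mathrm{weak}^{*}\ } \mu
      \ \text{for some}\ n_j \to \infty \,\Bigr\}
\]
% Basin of a measure mu:
\[
  B(\mu)
  = \Bigl\{\, x \in X \;:\;
      \frac{1}{n}\sum_{i=0}^{n-1}\delta_{f^i(x)}
      \xrightarrow{\ \mathrm{weak}^{*}\ } \mu \,\Bigr\}
\]
```

In this notation, the probabilistic omega limit set being a single measure {μ} is exactly the statement that x belongs to B(μ).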
So the problem I have tried to motivate: given a system, you can have many different kinds of behavior for the statistics of the orbit, and we would like to know that in general the statistics are well behaved. The fact that the probabilistic omega limit set contains a single measure says that this measure describes, in a quantitative way, the dynamics of the orbit in terms of its distribution in time. And that is good: it is a more sophisticated description, if you want, than the topological limit set, since, as I have shown you, you can have an orbit that is topologically dense but spends most of its time near zero. So we are looking at the dynamics from that point of view. And from the example I gave you, I think I illustrated that if there are two subsequences converging to different measures, that is not a very pleasant situation, because it means that the statistics of the orbit depend on when you measure, on when you look at the orbit. So in terms of applications, of phenomena, this is perhaps a mathematically interesting situation, but it is one that we would like to say does not happen most of the time. To study this, it turns out to be more convenient to fix a measure and look at the set of points where these averages converge exactly to that measure. So rather than picking a point and seeing what happens for all the different points, we fix a measure and ask whether there are any points whose time averages converge to it. This set is called the basin of the measure μ; it is in some sense the basin of attraction of the measure μ. In the example I showed before, if you take the map 2x mod 1 and μ the Dirac delta at 0, then we constructed a point with exactly this property, so we showed that there exists at least one point in the basin. Okay?
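The point in the basin B(δ₀) for 2x mod 1 can be sketched concretely, using the same shift model as before. This is a hedged sketch: the identification of the doubling map with the shift on binary digits is standard, and the helper names are ours. Take a point whose digits are isolated 1s separated by 0-runs growing geometrically; then the fraction of time the orbit spends in [0, 1/2), and in fact within any fixed distance of 0, tends to 1.

```python
# A point whose time averages converge to the Dirac delta at 0 under
# f(x) = 2x mod 1: isolated 1s separated by geometrically growing 0-runs.
# (The orbit at time i is within 2^-k of 0 exactly when digits i..i+k-1
# are all 0, so for each fixed k the fraction of such times tends to 1.)

def basin_point_digits(n):
    digits, run = [], 1
    while len(digits) < n:
        digits.append(1)
        digits.extend([0] * run)
        run *= 4                  # 0-runs grow geometrically
    return digits[:n]

def freq_in_left_half(digits, n):
    """Fraction of times 0..n-1 with the orbit in [0, 1/2)."""
    return digits[:n].count(0) / n

d = basin_point_digits(100000)
for n in [10, 100, 1000, 10000, 100000]:
    print(n, freq_in_left_half(d, n))
```

Unlike the oscillating example, here the fraction converges along every sequence of times, not just a chosen subsequence, which is what membership in the basin requires.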
In general, of course, we do not know that there exists any point in this basin. But for all the points in this basin, their asymptotic statistical behavior is described by the measure in this sense. So the first question is: is the basin non-empty? I think maybe this is a good moment to stop. Next time we will start introducing the results and the notation needed to address this question in quite general settings.