Okay, good afternoon. If you remember, at the end of the last lecture we introduced a new concept called sensitive dependence on initial conditions. Let me remind you: this is a general notion for maps of metric spaces. F exhibits sensitive dependence on initial conditions if there exists epsilon > 0, let me make sure I write it correctly here, such that for all x in X and for all delta > 0, there exists y in X with d(x, y) < delta and some n > 0 such that d(F^n(x), F^n(y)) >= epsilon. So why is this an interesting notion? It has to do with predictability. Let me make a little parenthesis now, just a couple of minutes, a slightly philosophical parenthesis about dynamical systems and their history. Dynamical systems are in some sense motivated by applications, by the study of systems that evolve, that change in time. These can be mechanical systems, where you want to study something moving under forces; that is a dynamical system because it is something changing in time. Or you can be trying to model a population across generations, where some dynamics jumps from one population size to another. There are a million ways in which a dynamical system can be used as a model for some natural phenomenon. Now it turns out that for some natural phenomena this is much more successful than for others, if you think of the model as existing in order to predict what happens. All of you know that one of the real success stories of mathematics and physics is astronomy. The study of the motion of the planets is one of the first fields in human civilization that motivated a lot of mathematics and a lot of study, and amazingly accurate predictions about astronomical phenomena were being made hundreds, even thousands of years ago.
It is possible to predict exactly when there will be certain eclipses of the moon, or when certain planets will be in certain positions. And this is done by modeling the planetary system with equations that depend on your knowledge of the physics of the situation, of Newtonian physics. But there are other phenomena which are much more difficult to predict. The most typical one is the weather. You all know the weather is also a physical system; it obeys physical laws. How the weather develops depends on the temperature, the pressure, the humidity, the geographical landscape, and physicists know basically all of this. And to some extent we can predict the weather, right? Usually you can say that tomorrow will probably be like this or like that. But to say what next week will be like, or that next month on a certain day at a certain hour the weather will be in a certain state, is impossible. So what is the difference between the planets as a physical system and the weather as a physical system? Why is one so predictable and the other so unpredictable? For a long time people thought, well, the weather is just too complicated: too many factors, too many variables, too many degrees of freedom. This is in part true, but it is not the essential reason the weather is unpredictable. There are many simple systems which are quite unpredictable. For example, take tossing a coin, which is often thought of as the most typical random system. But of course there is absolutely nothing random about tossing a coin. You have an object with a certain mass and a certain shape, you apply a force to it, and it flips a certain number of times. The number of times it flips depends exactly on how and where you apply the force.
And the outcome is determined exactly: completely Newtonian classical mechanics determines the evolution of the coin. So this is not a random system. It is another system we can model; we understand the physics of it; we can define a dynamical system to model it. But the fact that we can model it does not mean we can predict the evolution of the system. And the weather is actually very similar to the coin. So what is the reason the coin is so unpredictable? Why can I not tell whether it is going to be heads or tails, and cheat and always win when I toss coins with my friends? I think the essential problem is that the mechanism by which we toss the coin does not allow us to control the force with sufficient precision. Before it falls, the coin flips maybe ten times. If you apply 10% more or less force, it will flip nine or eleven times instead of ten, and so it will come out tails instead of heads. If you were flipping something big that only flipped once or twice, then you would be able to control the force to a sufficient level of precision, because the difference in force between flipping once and flipping twice is large. But because it flips many times, just a small change in the force makes it flip one more time, and we really cannot control the number of times, okay? So how is this related to our definition? I am not going to try to give a precise model of coin flipping, but just as an idea: if you think of a dynamical system that models the physics of the system, then the evolution of the system depends on the initial condition.
And if you assume that everything else is fixed, the position of the coin, the size of the coin and so on, then the only initial condition is how much force you apply; that is the initial condition that determines how the system is going to evolve, right? So the fact that a very small difference in initial condition gives a completely different outcome is, intuitively, sensitive dependence on initial conditions, when you think of it in practical terms. Sensitive dependence on initial conditions means that even if you take two points that are arbitrarily close together, for all delta, at some point their orbits will be very different. I am using this epsilon, which can be a little misleading, because generally epsilon is thought of as a small number. But the point is that epsilon is fixed. So in relation to delta, we think of epsilon as a large difference. If two situations are epsilon apart, it could be heads versus tails, sunny weather versus a cold snowstorm; two completely different things. We are looking at two initial conditions which are very close to each other, and nevertheless, if we wait long enough, the outcomes are completely different, because they get separated. Relate this to modeling a physical system: the real initial condition is x, but the measurement you make of the initial condition contains a small error, so what you measure is really the initial condition y. Then you run the model, and your prediction based on the measurement y gives you one thing after a certain amount of time, but reality gives you something far from that. So you are trying to predict the weather.
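As a quick numerical illustration of this (a sketch, not part of the lecture: the slope-2 tent map and the particular values 0.1 and 1e-9 are just example choices), one can iterate two initial conditions that differ by a measurement-sized error and watch the orbits separate:

```python
def tent(x):
    # the slope-2 tent map f(x) = 2x for x <= 1/2, 2(1 - x) otherwise
    return 2 * x if x <= 0.5 else 2 * (1 - x)

x, y = 0.1, 0.1 + 1e-9          # two initial conditions, delta = 1e-9
max_sep = 0.0
for n in range(60):
    x, y = tent(x), tent(y)
    max_sep = max(max_sep, abs(x - y))

print(max_sep)   # macroscopic: the 1e-9 difference has been hugely amplified
```

Because the slope has absolute value 2, the difference between the orbits roughly doubles each step until it is of the order of the whole interval, which is exactly the epsilon-versus-delta picture described above.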
Our measurements are not perfectly accurate, and that problem with the measurement means that we cannot predict, because the weather in one week's time really depends on a level of precision of the measurement which we just cannot achieve, right? This is the essence of it. This is something that was actually known to mathematicians and physicists even two or three hundred years ago: it was understood that even though one initial condition of a differential equation has a uniquely determined evolution, nearby initial conditions might have very different evolutions. But how dramatic and how important this is did not really become understood until people started doing simulations with computers, essentially in the 1960s and 1970s. This notion of sensitive dependence on initial conditions was coined by Edward Lorenz in the 1960s; has anyone heard of the Lorenz attractor? Lorenz was actually a meteorologist. He was studying the weather and how to predict it, and there were lots of mathematical models for the weather. He said: the models of the weather are extremely complicated, so I am going to study a much, much simplified model, a system of differential equations in just three dimensions, okay? It is now called the Lorenz system, and its solutions trace out the Lorenz attractor. He said, well, if we cannot understand the complicated models, at least let us try to understand the simple models first. And he had a computer; computers in the 60s were not like the computers we have today, right? His computer was probably in some air-conditioned room as big as this one, and he did not have a monitor that would just plot the graph of the solutions; he had to put in cards and get out numbers.
And it was very difficult to study these equations. But he was trying, and he was looking for fixed points or periodic solutions, because, as you have seen, our first approach in dynamical systems is to look for fixed points, periodic points and simple behavior like that. And he could not find them. He would plug in an initial condition, integrate the equations, and try to find out what happened, and he could never find such simple behavior. Then one day something happened; he describes this in the paper he published in the 1960s. What he was getting was a list of numbers that he then had to plot on a graph to look for some periodic solution. One day he had this list of numbers, and when he restarted the program the next day, instead of starting from the last number and plugging it into the computer to continue, he took some number from partway up the list, plugged that in, and let the program run. After a very short time the numbers were very different from the ones he had obtained the previous day, right? So the previous day he had started with some initial condition x0 and computed up to some xn, a whole stretch of the orbit. The next day, instead of plugging xn into the computer and continuing, he plugged in some intermediate value xi, call it yi, and very soon the new orbit did not correspond: when he got to the point that should have matched xn he had something completely different. He was very worried and he did not understand what was going on; maybe the computer was making mistakes, which would invalidate all his studies. Can anyone explain? Does anyone have an idea what was going on? After all, he put in the same initial condition.
So even with sensitive dependence, it is always true that in a dynamical system one point has a well-defined trajectory, right? But the point is that he did not insert exactly the same initial condition, because it turns out that the computer memory was working with six decimal places, while the printout showed these numbers only to three decimal places. So when he looked at a number on the printout and plugged it back into the computer the next day, he entered a number with three decimal places, which was therefore different from the six-decimal number the computer had been using internally. The two numbers were indeed different, although he thought they were the same. He knew that the memory worked with six decimal places, but he did not realize that a difference in the fourth decimal place, which in practical terms is like measuring things in meters and worrying about fractions of a millimeter, could matter. It took him a while to realize that even that very small difference in the fourth decimal place could very quickly make the two trajectories completely different. So this is a little bit of the history of the modern theory of dynamical systems. This is, in some sense, the beginning of the theory of chaotic dynamical systems: dynamical systems which exhibit sensitive dependence on initial conditions. And what I want to show you is that, for example, the shift map and the maps we have just been studying exhibit sensitive dependence on initial conditions. So sensitive dependence on initial conditions is nothing mysterious; it occurs all the time. I just gave you this little parenthesis to make the notion a bit less technical and dry.
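Lorenz's accident can be re-enacted in a few lines (a sketch, not his original program: I use a plain Euler integration of the Lorenz equations with the classical parameters sigma = 10, rho = 28, beta = 8/3, and the step size and run lengths are my own choices). Restarting a copy of the trajectory from the state rounded to three decimal places plays the role of his printout:

```python
# Re-enacting Lorenz's rounding accident: restarting the integration from a
# state rounded to three decimals soon gives a completely different trajectory.

def lorenz_step(s, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

s = (1.0, 1.0, 1.0)
for _ in range(5000):               # settle onto the attractor first
    s = lorenz_step(s)

t = tuple(round(c, 3) for c in s)   # the "printout" state: three decimals

max_sep = 0.0
for _ in range(40000):              # continue both runs side by side
    s, t = lorenz_step(s), lorenz_step(t)
    max_sep = max(max_sep, max(abs(a - b) for a, b in zip(s, t)))

print(max_sep)   # of the order of the attractor's size, from a ~1e-3 rounding
```

The rounding error of at most half a unit in the third decimal place is amplified exponentially until the two runs are as far apart as two unrelated states of the system.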
Okay, so let us check that the shift map satisfies this. Lemma: the shift map sigma from Sigma_2^+ to Sigma_2^+ exhibits sensitive dependence on initial conditions with epsilon equal to one. The proof is almost trivial, right? But the result is quite remarkable: it is a very simple map, yet we are saying that arbitrarily close initial conditions eventually end up far apart, at distance of order one. Proof. Take any two elements, arbitrarily close; how do we know that they move apart to distance one? So let x be different from y in Sigma_2^+. We need to show that some pair of images separates: that the distance between sigma^n(x) and sigma^n(y) is greater than or equal to one for some n. In fact, notice that what we will prove is a slightly stronger version: any two distinct points, after some number of iterations, have images at distance greater than or equal to one, which implies the definition. So it is not just that for every delta there exists such a y; in our case, for any distinct x and y, however close, there exists n such that this distance is greater than or equal to one. Okay, and why is that? Yes, exactly. The fact that x and y are different means the following: if x = (x0, x1, x2, ...) and y = (y0, y1, y2, ...), then x different from y implies there exists some n >= 0 such that xn is different from yn. Because if there were no such n, they would be the same sequence, right?
And therefore the distance between sigma^n(x) and sigma^n(y) must be greater than or equal to one, because the first terms of the shifted sequences are now different, okay? A very simple proof; it is just intrinsic to the shift map that this holds. What about the tent map? So we have the tent map we have been studying, with I0 = [0, 1/3] and I1 = [2/3, 1], and our usual Cantor set Lambda. Lemma: F restricted to Lambda exhibits sensitive dependence on initial conditions with epsilon equal to one third. And why is that? This is a very simple proof; you do not even need all the construction we did before for the conjugacy; it is almost immediate from the map. Take any two distinct points x and y in Lambda, arbitrarily close. For some iterate, the distance will be greater than or equal to one third. Why is that? Well, if the two points are already on opposite sides of the gap, one in I0 and one in I1, this is obvious, because the gap between I0 and I1 has length one third, so their distance is at least one third. So all we need to show is that at some point they land on opposite sides: one in I0 and one in I1. That's right. Now you are basically using the conjugacy we have here: since the two points are different, they have different combinatorics, because we proved that, and so at some iterate their itineraries disagree. You can also argue very simply by the mean value theorem: look at the interval between x and y; the derivative has absolute value three, so the length of this interval keeps growing as long as it stays inside I0 or I1. Sooner or later it must grow bigger than one third, and then the points have to be on opposite sides. Okay.
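The shift-map argument can be checked mechanically (a sketch; I assume the usual metric d(x, y) = sum over k of |x_k - y_k| / 2^k from the earlier lectures, under which two sequences that disagree in the zeroth symbol are already at distance at least one):

```python
# If two sequences first differ at index n, then after n shifts their
# distance is at least 1, since the shifted sequences differ at index 0.

def d(x, y):
    # the assumed metric on Sigma_2^+, computed on finite truncations
    return sum(abs(a - b) / 2 ** k for k, (a, b) in enumerate(zip(x, y)))

def shift(x):
    return x[1:]

x = (0, 1, 0, 1, 1, 0, 1, 0)
y = (0, 1, 0, 1, 0, 0, 1, 0)   # first disagreement at index n = 4

n = next(k for k in range(len(x)) if x[k] != y[k])
sx, sy = x, y
for _ in range(n):
    sx, sy = shift(sx), shift(sy)

print(d(sx, sy) >= 1)   # True: the term |x_n - y_n| / 2^0 alone contributes 1
```

Making the initial agreement longer makes d(x, y) as small as you like while the separated distance stays at least one; that is exactly epsilon = 1 in the lemma.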
So whichever way you want to look at it, there are various proofs, but using the construction it is simple. There exists some n > 0 such that f^n(x) is in I0 and f^n(y) is in I1, or vice versa, and so the distance between f^n(x) and f^n(y) is greater than or equal to one third. Okay. So finally, before we finish this chapter, let me make a comment, something for you to think about, about how this conjugacy works. When we conjugate the map on Lambda to the symbolic dynamics, there is a map that goes from the shift space into the interval, right? Each sequence corresponds to a certain combinatorics. Now, if you remember, in the last lecture I gave some generalizations, and I said the whole construction can easily be generalized to an arbitrary finite number of closed disjoint intervals; you can do the same thing, and you do not really need to worry about what happens outside: all you need is for the map to be well defined inside these intervals. Also notice that, for this conjugacy to hold, the picture does not have to look exactly like the one I drew. In one case the map can be like this, where the derivative is positive on one branch and negative on the other; these are always straight lines in my pictures, but they do not need to be straight lines. And you can also have the situation where the derivative is positive on both branches. In both cases the whole construction works perfectly well: if you look back at the construction, you see that nowhere did we worry about the orientation, right? For future reference, when we talk about similar situations we will say that in one case we have positive orientation and in the other negative orientation. We never worried about it.
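The tent-map lemma can also be tested numerically (a sketch with my own choices: I take the slope-3 tent map f(x) = 3x for x <= 1/2 and 3(1 - x) otherwise, whose inverse branches into I0 = [0, 1/3] and I1 = [2/3, 1] are g0(y) = y/3 and g1(y) = 1 - y/3, and I build two points of Lambda whose itineraries agree for the first 20 symbols and then disagree):

```python
def f(x):                     # the slope-3 tent map
    return 3 * x if x <= 0.5 else 3 * (1 - x)

g = (lambda y: y / 3,         # inverse branch into I0 = [0, 1/3]
     lambda y: 1 - y / 3)     # inverse branch into I1 = [2/3, 1]

def point_with_itinerary(word, seed=0.5):
    # Pull the seed back through the inverse branches; since each branch
    # contracts by 1/3, the result follows the itinerary `word` very closely.
    x = seed
    for s in reversed(word):
        x = g[s](x)
    return x

common = [0, 1] * 10                      # the first 20 symbols agree
x = point_with_itinerary(common + [0] * 20)
y = point_with_itinerary(common + [1] * 20)

for _ in range(20):                       # iterate past the agreement
    x, y = f(x), f(y)

print(x < 1/3, y > 2/3, abs(x - y) >= 1/3)
```

The two points start closer than 3 to the power minus 20 apart, yet after 20 iterations one is in I0 and the other in I1, so they are separated by at least the length of the gap, exactly as in the proof.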
So in both cases we have the conjugacy to the shift space; let me call these conjugacies H1 and H2. Because such a conjugacy is a bijection from the space of sequences to Lambda, which is contained in the interval, you can think of it as a kind of embedding of the shift space inside the interval, where every sequence corresponds to a unique point somewhere in the interval. So it is a natural question to ask: how does this embedding work? What does it look like? And if you remember, as part of the construction we studied a little how this happens, because what does it mean to say that this piece is I0 and this piece is I1? It means that every point in I0 has a sequence that starts with zero and every point in I1 has a sequence that starts with one, okay? So if you think of the embedding, you can separate the shift space into two subsets: the sequences that start with zero and the sequences that start with one. All the sequences that start with zero are mapped somewhere inside I0, and all the sequences that start with one are mapped inside I1, okay? That is the first observation about how the shift space is embedded in the interval. Then you can look a little more closely, and you see that there is a finer structure. Each of I0 and I1 maps onto the whole interval [0, 1], which means that inside I0 there is a smaller interval, which I call I00: all the points that belong to I0 and whose image also belongs to I0, because this is exactly the piece of I0 that the graph maps into I0, right?
And next to it there is an interval which I call I01, because it is exactly the piece that gets mapped into I1, and I do something similar inside I1. So here I have an interval which is called what? I11, because look, it maps into I1, right? And here is an interval which is called I10, okay? What does this mean in terms of the embedding? It means we can divide Sigma_2^+ into four disjoint subsets whose union is all of Sigma_2^+: all the sequences that start with 00 are mapped in here, all the sequences that start with 01 in here, and so on, including the sequences that start with 10. So the way the shift space is embedded has a certain structure that you can understand by means of these finite-level intervals that we studied in some detail, okay? Now what is the situation for the other map, where both branches are increasing? How different is it? Here we also have an embedding, and here too we have I0 and I1. So on the first level we have the same picture: the two big subsets of Sigma_2^+, the sequences starting with zero and the sequences starting with one, also get mapped to the two intervals I0 and I1. Now how do the smaller subsets get mapped? We have a similar thing: here we have, what? I00, here we have I01, here we have I10, because this piece belongs to I1 but its image belongs to I0, and this is I11. So you see that there is a difference: inside I1, the two pieces are swapped. When you think about it, it is natural; it is clear why this is the case, okay? And this continues, so as an exercise you can think at home about how it continues. When both branches are increasing we get what is in some sense the more natural ordering on the line; whereas for the tent-shaped map, because the second branch is orientation reversing, the pieces are swapped, and this orientation reversal means that things continue to be swapped at deeper and deeper levels.
So if you now look, for example, at this piece I01 of the tent-shaped map, how does it get broken up? Remember that each of these intervals, at the next stage, gets broken up into two intervals, right? So one of them is going to be I010 and one is going to be I011; which one is which? Which is the left-hand one? Let us see; I have to think about it. This piece starts with zero, so it gets mapped into I1, right? The branch on I0 is increasing, so the left-hand side of I01 gets mapped to the left-hand side of I1. But the branch on I1 is orientation reversing, so the left-hand side of I1 then gets mapped to I1 rather than I0. So the left-hand side of I01 will be I011, and the right-hand side will be I010, okay? Because I01 gets mapped in an orientation-preserving way into I1, but then the branch on I1 is orientation reversing, so things get flipped, okay? So it is a little bit subtle, and I am giving you this because I think you should spend some time at home looking at these various possibilities. There are other possibilities too: there is a third possibility in which both branches are orientation reversing, and you can study the different embeddings in each of these cases. This brings us to how the topological conjugacies fit together, because all these maps are topologically conjugate to each other; we will come back to this. This is a fairly subtle point. How can we have a topological conjugacy between Lambda here and Lambda here? If this topological conjugacy has to map this interval to this interval and this interval to this interval, how can we find a homeomorphism that preserves the combinatorics? Because remember, everything is conjugated to the same shift here, right? And the way you construct the conjugacy between Lambda here and Lambda here is just by composing these two conjugacies.
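The orderings described at the board can be computed directly (a sketch with my own conventions: slope-3 linear branches, map A with both branches increasing, map B the tent-shaped one with a decreasing second branch; the interval of a finite word is obtained by composing the inverse branches):

```python
# Where does the interval of a finite word sit inside [0,1]?
# Map A: both branches increasing.  Map B: second branch decreasing.
g_A = (lambda y: y / 3, lambda y: (y + 2) / 3)
g_B = (lambda y: y / 3, lambda y: 1 - y / 3)

def interval(g, word):
    # I_w = g_{w0}( g_{w1}( ... g_{w_{k-1}}([0,1]) ... ) )
    a, b = 0.0, 1.0
    for s in reversed(word):
        a, b = g[s](a), g[s](b)
    return (min(a, b), max(a, b))

# Depth 2 inside I1: the two sub-intervals are swapped between the maps.
print(interval(g_A, [1, 0]), interval(g_A, [1, 1]))  # I10 to the left of I11
print(interval(g_B, [1, 1]), interval(g_B, [1, 0]))  # I11 to the left of I10

# Depth 3 inside I01 of map B: I011 sits to the LEFT of I010,
# matching the reasoning worked out at the board.
print(interval(g_B, [0, 1, 1]), interval(g_B, [0, 1, 0]))
```

Running this gives (2/3, 7/9) and (8/9, 1) for I10 and I11 of map A, the same two intervals with the labels exchanged for map B, and (2/9, 7/27) for I011 against (8/27, 1/3) for I010: the swapping really does propagate to deeper levels.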
So the topological conjugacy between Lambda here and Lambda here is the map that matches up points with the same combinatorics; the conjugacy goes exactly via the combinatorics. Which means that the topological conjugacy between the two Cantor sets has to map this piece to this piece and this piece to this piece, so it must switch these two over. Whereas here, it maps this to this and this to this. So tell me: how can we get a homeomorphism that maps these two like this, but then switches those over? How is that possible? It has to be a homeomorphism. Do you understand the problem? Do you understand that when I define the invariant sets, Lambda here and Lambda prime here, I have two Cantor sets, both of which are topologically conjugate to the shift map? We agree with that? Yes? Okay. You agree that what each conjugacy does is map each sequence to the corresponding point that has that sequence as its combinatorics. So, using the fact that both of these are conjugate to the shift, we get that F restricted to Lambda and G restricted to Lambda prime are topologically conjugate, and this conjugacy maps a point with a certain sequence to the corresponding point with the same sequence, right? This conjugacy is exactly the map that preserves the combinatorics, because that is how it is defined; that is exactly how both of the original conjugacies are defined. But my problem is that this conjugacy is supposed to be a homeomorphism, and, as you can see from the combinatorics, things are all twisted, not just at the first level but inside: inside here the order will be the opposite of the order here, because here the map is just orientation preserving. So at every level, the pieces continue to be twisted.
Well, a homeomorphism is just a homeomorphism, you can... Excuse me, say that again. Was that a solution or was that a question? You said that your strategy does not work. Do you understand the problem now? Sorry? You are going in the right direction. The identity between Cantor sets: what does that mean? That's right. So on what set is the conjugacy defined? No, it is not the identity; but that is going in the right direction. So I will come clean: I am trying to cheat, trying to fool you, okay? Because that is exactly the right observation, and this is what I am trying to draw your attention to: this conjugacy does not need to be a homeomorphism from the interval to the interval. Indeed, it is not a homeomorphism from the interval to the interval. It is a homeomorphism from the Cantor set here to the Cantor set there. And a homeomorphism between Cantor sets is a different matter, because a Cantor set has a different topology from the interval: it has lots of subsets that are both open and closed, so there is no problem in defining a homeomorphism of the Cantor sets in such a way that it does not extend to a homeomorphism of the interval. The two Cantor sets are homeomorphic, but the homeomorphism does not extend to a homeomorphism of the whole interval; as Cantor sets embedded with that structure, they are different. The homeomorphism is defined only on the Cantor set, and it is not right to think of it as the identity on the Cantor set, but that was going in the right direction. The point is that you are just conjugating the dynamics on this Cantor set with the dynamics on that Cantor set, and that is where the map is defined, okay? Yes, exactly: you can do that because you do not need this homeomorphism to extend to a homeomorphism of the whole interval.
And indeed it cannot extend. This homeomorphism, this composition of the two conjugacies, is a map that is only defined on the Cantor set here, mapping to the Cantor set there, and you cannot extend it to a homeomorphism from the whole interval to the whole interval, because everything is twisted. So look at some examples and think about this issue, okay? It is a subtlety that is important to comprehend, because the proof is in some sense so simple that you forget it is hiding some fairly subtle points. You say, okay, they are all topologically conjugate and so on; it seems simple. Any more questions on this? Okay, so this topic is not completely closed, but we will open, in some sense, a new chapter which is very closely related to this. As I said last time, let us recall the family of tent maps, so called because they are shaped like a tent. We saw that for lambda greater than two they look like this; this is one, and this is one half, the turning point between the positive and the negative slope. As long as lambda is strictly bigger than two, f_lambda(1/2) is strictly bigger than one, and we have this picture: there is a small open interval here in the middle, our Delta. We know that all the points outside [0,1] just go off to minus infinity, and all the points in Delta leave the interval after one step, since their image lands above one, and after that they also escape to minus infinity. And here we have the two intervals I0 and I1, where the absolute value of the derivative is lambda, which is bigger than two, and in particular bigger than one. So the map is expanding there, and therefore we can carry out the whole construction that we did before, okay? You realize that this is exactly the same situation.
The only difference is that lambda is slightly different; it is not exactly the case lambda equals three. But we have I0 and I1, and it falls exactly into the generalization we gave before, in which we have two closed disjoint intervals each mapping onto the whole interval. Even in this case we have a Cantor set and the conjugacy to the full shift; everything is exactly the same. Now the question we will be interested in is: what happens when lambda equals two? When lambda equals two, the picture is this: somehow almost exactly the same, but not quite. For points outside the interval [0,1] we still have exactly the same situation. But if you now define Lambda as the set of points x such that f^n(x) belongs to I0 union I1, or even just to the whole interval I, for all n greater than or equal to zero, what is this set? It is simply the whole interval, right? Because the interval gets mapped inside itself: every point of [0,1] gets mapped back into [0,1], so the whole interval is invariant. So we no longer have that this set is a Cantor set; it is the whole interval. In particular, we will not be able to have a topological conjugacy from Sigma_2^+ to Lambda. Why can we not have a topological conjugacy from Sigma_2^+ to Lambda? Exactly: Sigma_2^+ is a Cantor set and this is an interval, and you cannot have a homeomorphism between a Cantor set and an interval, okay? But of course the dynamics is still very, very similar. What has changed? We still have I0 and I1; it is not as if all those periodic points and all those dense orbits have just disappeared, right?
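The transition at lambda equals two can be seen in a few lines (a sketch; the family f_lambda(x) = lambda x for x <= 1/2 and lambda(1 - x) otherwise, with the sample values 2.5 and 2.0 chosen just for illustration):

```python
def tent(lam, x):
    return lam * x if x <= 0.5 else lam * (1 - x)

# lambda > 2: the midpoint's image lands above 1, and after that the orbit
# escapes to minus infinity.
x_escape = 0.5
for _ in range(60):
    x_escape = tent(2.5, x_escape)
print(x_escape)   # a huge negative number

# lambda = 2: the gap has closed, [0,1] maps into itself, nothing escapes.
for x0 in (0.1, 0.3, 0.5, 0.7, 0.9):
    x = x0
    for _ in range(100):
        x = tent(2.0, x)
        assert 0.0 <= x <= 1.0
print("[0,1] is invariant for lambda = 2")
```

For lambda = 2.5 the orbit of the midpoint leaves the interval immediately, just like every point of the gap Delta; at lambda = 2 the gap has closed, the interval is invariant, and the invariant set is no longer a Cantor set but all of [0,1].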
For any lambda greater than two you still have this Cantor set with all these periodic points and all these dense orbits and the sensitive dependence on initial conditions and so on. And what has happened, of course, is that when lambda is very close to two this gap is very small, so all its pre-images are also very small. In some sense this Cantor set is "thicker" if you want — although not in a precise way; the gaps are all much smaller. And in the limit we have closed this gap, and the moment you close this gap you close all the gaps of the pre-images. So you're taking this Cantor set embedded in the line, closing all the gaps of the Cantor set, identifying the points on the boundaries of these gaps, and you're getting an interval, right? So we have to try to understand exactly what that means for the dynamics and how we can describe this process precisely. It turns out that it's not really that difficult; we just have to understand closely what these identifications mean, okay? But what I'm going to do, rather than what I did in the previous case — where we studied a particular example in detail and then I gave you the generalization — is that in this case I think it makes sense just to write out the general formulation right from the beginning. So we want to generalize this picture here: we want a similar picture in which we have [0,1] and some finite number of intervals, but this time with no gaps: I_0, I_1, I_2, I_3. And then we will still want this kind of picture, so we will still want each interval to be mapped onto the whole original interval.
If we want the map to be continuous, the orientations have to alternate between positive and negative, but that also will not really be a necessary part. So to make it even more general we will just assume that we have, for example: this branch has negative orientation, here we have positive orientation, and maybe here we have negative orientation again — something like this, okay? You can also see this as the limit of a situation in which you start with four closed intervals which are disjoint, so they have gaps — which would be similar to the situation before, where we can apply the general theorems that we had and we just get a Cantor set. What we're doing now is closing the gaps, so these intervals are no longer disjoint: they intersect exactly at their boundaries. And then we have a picture like this, and we have to study what happens. So let me write down the formal definition. A map F from the unit interval, or an interval I, to I is called full branch piecewise expanding if — there are various ways to formalize what these intervals are, so I will do it in one way, but you could say it in different ways — there exist disjoint open intervals I_0, ..., I_{L-1} and lambda greater than one such that for all j = 0, ..., L-1 we have these three conditions. First, F maps the closure of I_j onto I. Second, the modulus of the derivative, |F'(x)|, is greater than or equal to lambda for all x in I_j — so these are basically exactly the same conditions we had before. And then the crucial thing is that we don't have any gaps, and the way we write it is that I is just equal to the closure of the union of the I_j. I think this takes care of it. Another way would be to write that you have a union of closed intervals that intersect only at their boundaries; there are various ways of saying it, but I think this is one of the simplest. Okay, you recognize the picture in this definition here, okay?
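For a concrete instance of this definition — my own illustration, not part of the lecture's notation — the second iterate of the full tent map has four full branches on [0, 1/4], [1/4, 1/2], [1/2, 3/4], [3/4, 1], with alternating orientation and slope of modulus 4, so it is full branch piecewise expanding with L = 4 and lambda = 4:

```python
def tent2(x):
    """Full tent map with slope of modulus 2."""
    return 2.0 * min(x, 1.0 - x)

def F(x):
    """Second iterate of the full tent map: four branches (4x, 2-4x, 4x-2,
    4-4x), each mapped onto [0, 1] with |F'| = 4, orientations alternating
    +, -, +, -, and the branch closures covering [0, 1] with no gaps."""
    return tent2(tent2(x))

# each branch endpoint goes to 0 or 1, each branch midpoint to 1/2
```

Checking a few values confirms the "full branch" picture: F(1/4) = 1, F(1/2) = 0, F(3/4) = 1, and the midpoints 1/8 and 5/8 map to 1/2.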
So this says there are no gaps, and this says that each interval is mapped onto everything in a uniformly expanding way, because the modulus of the derivative is at least lambda. In particular, it means each branch is a bijection, right? Because the derivative is bounded below by lambda in modulus, the map has to be monotone on each branch. Yeah, we can also include that F is a C^1 diffeomorphism on each I_j. Well, monotonicity really follows from the fact that the derivative is bigger than lambda, but we can say C^1 diffeomorphism if you want; it's really not so important. Let's assume it's a C^1 diffeomorphism on each branch: because the derivative is bigger than lambda, it cannot fold back on itself. So, what is the theorem? Theorem: if F is full branch piecewise expanding, then there exists a continuous surjective map H from Sigma_L^+ to I such that H composed with sigma equals F composed with H, and H fails to be injective only over a countable set in I, on which it is two-to-one. So, remember, I think we discussed some examples before. This is not quite a conjugacy. What is this called? Semi-conjugacy, exactly, right? Because it's not necessarily invertible, but it's a continuous map that satisfies this relation. So, the fact that it's surjective is important. Why is it important that it is surjective? Sorry? The definition of full branch? Oh, what does piecewise expanding mean? Ah, okay, that's just terminology. Usually, if you say "expanding", you are implicitly saying that it's C^1 everywhere — that the derivative is bigger than lambda everywhere — and here there are some points where the derivative is not defined. So "piecewise" is just a way to draw your attention to the fact that the map might not be continuous or might not be differentiable everywhere, okay? It's just terminology that has become standard; I could just call it a full branch expanding map.
Whenever you see in a book "piecewise C^1" or "piecewise expanding", what it generally means is that you have some partition of your domain and on each piece the map satisfies that property, but there might be some boundary points at which it's not C^1, or not expanding, or not continuous, and so on. So, surjectivity is important, of course, because without that word the statement is trivial, right? Forget about the injectivity part for a moment; suppose you don't know anything about injectivity yet. Why is it trivial that there exists a continuous map satisfying this relation if I don't include surjectivity? No, not just that it's what we want — why is it trivial that it's true? Why can I always find such a semi-conjugacy if I don't require surjectivity? Yes? The intervals are disjoint, and so? Where is lambda? Lambda in this case? No, no, now we are looking at this kind of map here, right? F is a full branch piecewise expanding map, so it looks like this, okay? My question is the following. Suppose my statement was just: there exists a continuous map H satisfying this relation. Why is this trivial if I do not include surjectivity? Yes? Well, no, you cannot restrict yourself to one interval, because each interval is mapped outside itself. Okay, you're possibly going in the right direction. Which are the points that stay inside the same interval forever? How many points is that? Which points do you think? Are there any points that stay inside I_0 forever? There's a fixed point here — each branch actually has a fixed point, okay? So this point will stay inside I_0 forever. But how is this related to my question? Let me pose it again, because I think it's important to think about. Suppose my theorem were the following: F full branch piecewise expanding, okay?
Then there exists a continuous map H from Sigma_L^+ to I such that H composed with sigma equals F composed with H. I claim that the proof of this is trivial. What is the proof of this, and what's the difference between these two statements — forgetting about the injectivity part, just the first part? Give me a continuous map such that this relation is satisfied. So you need to map the sequences to some points here in I. If it's not surjective, it maps onto some subset. You were going in the right direction, talking about the Cantor set and staying in one interval and so on, but give me just one easy example. What if I call this fixed point p? Let p be a fixed point for F, and let H(x̄) = p for all x̄. Does it satisfy these conditions? It's constant — a constant map is continuous, and H(σ(x̄)) = p = F(p) = F(H(x̄)). So there's a big difference when you add surjectivity. Without any additional information, just being a continuous semi-conjugacy can be a trivial statement: every time you have a fixed point, you can take any other system and just map everything to the fixed point, and it's automatically a semi-conjugacy, because it satisfies this condition, okay? That's why a conjugacy is in general much better than a semi-conjugacy. But if we have some additional information about the semi-conjugacy — we know that it is surjective — it cannot be this kind of trivial situation: it is mapping onto everything. Not only that, but we're also making a statement about injectivity: H only fails to be injective over a small set, and over this set it's only two-to-one, right? So you cannot have the whole space collapsing to one point. Because a map can also fail to be injective at only one point of the image, while everything maps to that point, right?
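The constant-map example above can be checked concretely. Here is a sketch of my own, with the lambda = 2 tent map standing in for F; its fixed point on the right branch is p = 2/3, since 2(1 − 2/3) = 2/3.

```python
def tent2(x):
    """Full tent map, playing the role of F."""
    return 2.0 * min(x, 1.0 - x)

p = 2.0 / 3.0                 # fixed point of the right branch: 2*(1 - 2/3) = 2/3

def H(seq):
    """Constant map to the fixed point: continuous, but not surjective."""
    return p

def shift(seq):
    """One-sided shift sigma, acting on finite prefixes of sequences."""
    return seq[1:]

seq = (0, 1, 1, 0, 1)
# H(sigma(seq)) = p and F(H(seq)) = F(p) = p, so H o sigma = F o H holds,
# even though H tells us nothing about the dynamics
```

This is exactly why surjectivity has to be part of the theorem: without it, the semi-conjugacy relation alone is satisfied by this useless constant map.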
Then it fails to be injective only at one point of the image, but there it is infinite-to-one: everything maps there. So what do you think this set is going to be — the set in the image where H fails to be injective? What we're doing is we're going to construct — so this is our map; in this case we would have Sigma_4^+ and a continuous surjective map. When we say that H fails to be injective on a countable set, we're looking at the image: there's a countable set in the image, right, that has more than one pre-image upstairs. So what is the obvious candidate for this set? Certainly these boundary points — and what else? The orbits or the pre-orbits of these points? The images or the pre-images of these points? We'll see, okay? But it's obviously related to this set, because if we want to do the coding, this is precisely the set where there will be some ambiguity: when you land here, you don't know if you're in I_0 or I_1. So it's likely that two sequences will go to the same point there. What is important is that this happens only on a countable set. So in fact, over most points of the interval this is actually one-to-one, and there it behaves like a conjugacy. I won't have time to do the full proof, but let me, in the last 10 minutes, say how we start the construction. In fact, we will do something very similar: we will really try to generalize the argument that we had before and construct those sets. So first of all, to begin the construction, we define a multi-valued map F̄. Sorry — there is something I need to define first. For all j = 0, ..., L-1, let F_j equal F restricted to I_j. So we write F as a kind of union of L different maps, one on each I_j. And remember, the way I defined it, these I_j are open, right?
But because each F_j is a C^1 map on the open interval I_j, it extends to a C^1 diffeomorphism on the closure. I remember now why I hadn't written "C^1 diffeomorphism" in the definition: it's just a technical issue about exactly what you mean. Because it's defined on an open set, technically speaking F_j is a C^1 diffeomorphism onto the open interval. But because we assume that the image of the closed interval is the full interval, it extends essentially to a C^1 diffeomorphism from the closed interval to the closed interval. So we let F_j equal F restricted to I_j, and then we let F̄_j be the continuous extension of F_j to the closure of I_j. This is just a little technical point that we have to deal with because of the way this is defined. So we have that F̄_j of the closure of I_j is equal to the original interval I. Is it clear what I'm doing? This is an annoying little technical point, and it's really just to formalize something that is intuitively obvious; but to formalize it, we need to write it in this way, because F is, strictly speaking, defined as a single map, whereas each F̄_j is defined on the closed interval by continuous extension. The reason we need to be a little bit careful here is that, strictly speaking, for the original map F — the original piecewise expanding map — the point at which two intervals meet can only have one image. The map F is single-valued, because maps are single-valued; otherwise it's not a map. And so the image of this point has to be either here or here, depending on how the map is actually defined. In this case there's no ambiguity, so it doesn't matter, because on both sides the image is the same. But in a situation like this, it's either here or here.
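To make the boundary issue concrete — this is a sketch with the doubling map x ↦ 2x mod 1 standing in for F, chosen because there the two branch extensions genuinely disagree at the shared endpoint 1/2:

```python
def doubling(x):
    """F(x) = 2x mod 1 as a genuine (single-valued) map: F(1/2) = 0 by fiat."""
    return (2.0 * x) % 1.0

def Fbar(x):
    """Multi-valued extension: the set of values of the branch extensions at x.
    F_0 = 2x on I_0 = (0, 1/2) extends to [0, 1/2] with Fbar_0(1/2) = 1;
    F_1 = 2x - 1 on I_1 = (1/2, 1) extends to [1/2, 1] with Fbar_1(1/2) = 0."""
    values = set()
    if 0.0 <= x <= 0.5:           # x in the closure of I_0
        values.add(2.0 * x)
    if 0.5 <= x <= 1.0:           # x in the closure of I_1
        values.add(2.0 * x - 1.0)
    return values

# interior points of a branch get one value, the shared boundary point two:
# Fbar(0.25) == {0.5}, but Fbar(0.5) == {0.0, 1.0}
```

So the single-valued map F must pick one of the two values at 1/2, while the multi-valued F̄ keeps both, which is exactly what the definition below formalizes.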
But we also want to give a meaning to applying the map to the closed interval here: even if the original map F is defined and takes, say, the value zero at this point, we also want the image of this point — if you think of it as belonging to the element on the left — to be up here, okay? And all of this is solved by making exactly this definition. Topology in action. So what we've done now is defined a union of maps on these closed intervals, thinking of them for the moment as disjoint closed intervals. Each boundary point then has one or two images, depending on whether you think of it as a boundary point of the left-hand interval or of the right-hand interval, okay? And we use exactly this fact to define the multi-valued map: F̄(x) = F̄_j(x) if x belongs to I_j — these are open intervals, the way I've defined them, so there's no ambiguity there — and F̄(x) = F̄_i(x) ∪ F̄_j(x) if x belongs to the intersection of the closures of I_i and I_j, for some i ≠ j in {0, ..., L-1}. So this is where it becomes a multi-valued map: for some points, F̄ has two possible values. This definition is a little bit technical, a bit complicated, but it's very useful, because with it everything else becomes almost exactly as in the previous situation. So what we can do now is the following: for all ā in Sigma_L^+, we can let I_{a_0 ... a_n} be, by definition, the set of points x such that F̄^i(x) intersects the closure of I_{a_i} for all i = 0, ..., n. Rather than writing that F^i(x) belongs to I_{a_i} — which would be ambiguous, because F̄ is multi-valued and F̄^i(x) can be a set of two values — I ask that F̄^i(x) intersects the closure of I_{a_i}.
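As a preview of how these cylinder sets pin down the map H, here is a numerical sketch of my own for the lambda = 2 tent map: pulling the full interval back through the branch inverses produces nested intervals that shrink to the single point coded by the word.

```python
def H_approx(word):
    """Approximate coding point for the lam = 2 tent map: intersect the
    pullbacks of [0, 1] under the branch inverses
        branch 0 (y = 2x):        x = y/2,      landing in [0, 1/2]
        branch 1 (y = 2(1 - x)):  x = 1 - y/2,  landing in [1/2, 1]
    Each inverse contracts by 1/2, so after len(word) steps the interval of
    points with this itinerary has length 2**(-len(word))."""
    lo, hi = 0.0, 1.0
    for a in reversed(word):          # innermost symbol first, a_0 applied last
        if a == 0:
            lo, hi = lo / 2.0, hi / 2.0
        else:                          # orientation-reversing branch
            lo, hi = 1.0 - hi / 2.0, 1.0 - lo / 2.0
    return 0.5 * (lo + hi)

# H_approx([0]) == 0.25 (midpoint of I_0), H_approx([1]) == 0.75, and
# H_approx([1, 0] * 20) is within 2**-40 of 4/5, whose itinerary is the
# periodic word 101010...
```

The exponential shrinking of these nested intervals is what will make each cylinder I_{a_0 ... a_n} intersect down to a single point, which is how H will be defined.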
So notice that if we happened to do all this in the situation where those intervals are actually disjoint closed intervals, then this gives exactly the previous definition. In that case the boundary overlaps never occur, because the closures of the intervals are disjoint, so F̄(x) = F_j(x) is single-valued everywhere. And then the condition that the intersection is non-empty just means that F^i(x) belongs to the closure of I_{a_i}, and we have exactly the same definition as in the previous case. So this is genuinely a generalization of the previous case, because it coincides with it when the intervals are disjoint. Okay, the rest I will do next lecture, but it follows in a very similar way: we will show that each of these sets is non-empty and consists of a single point, and then study the lack of injectivity. This allows us to define the map H, because to each ā we can associate a single point, and everything else will be fairly straightforward after we have fixed these definitions. So before the next lecture, try to think a little bit about this case and become familiar with the notation — it's a little complicated, with these intersections to deal with. Okay, thank you very much.