So, at the end of the last lecture, we finished by defining the notion of a dynamical system as a group of transformations of a set X, right, remember? This could be a group in continuous time, for example what we get from the flow of an ordinary differential equation. That was the starting point: we realized that when an ordinary differential equation has existence and uniqueness of global solutions, what you get out of it is a flow, which is a group of transformations of your space parameterized by R. We also saw that you can have a discrete time dynamical system. For example, you get discrete time if you take a flow and restrict it to the integers: the time-one map of a flow gives a discrete time system. But more generally, if you take any invertible map of any set and consider its iterates, you get a discrete time dynamical system, because the iterates form a group of transformations, and this works for any invertible map on an arbitrary set that need not have any structure at all. So this is a very general notion. The fact that both of these classes of systems, which a priori are very different, can be thought of in a unified way allows us to develop a theory that considers them together, the theory of groups of transformations in some sense. In some parts of the theory we will distinguish: some results hold for discrete time and not for continuous time, and vice versa. But generally the formalism, the notation and even many of the results hold very generally for groups of transformations. This generality also allows us to consider even more settings. For example, we could take complex time.
The complex numbers are also a group, and you can think of groups of transformations parameterized by a complex parameter, which gives complex time. Or you can have more general so-called group actions. In the literature there is a big theory of group actions, a generalization of dynamical systems in which the parameter group is more general. The point is that the structure of the group of transformations, in particular the composition rule, is inherited from the group structure of the parameter space. We will not discuss this here, but I just want to comment that there are further ways to generalize this formalism. One generalization which we will use is the situation in which f is not necessarily invertible. So suppose f is just an arbitrary map, not necessarily invertible. Of course, invertible just means bijective: if f is a bijection, you can simply consider the inverse. If it is not a bijection, in particular if it is not injective, or not even surjective, you cannot talk about the inverse. But you can still iterate it forward. Then the family f^n, n in N, of forward iterates is still defined; I include zero here, with f^0 by definition the identity, and then we look at all the forward iterates. This family satisfies the properties of a group except for the existence of inverses, and it is called a semi-group of transformations, of which a group is a special case. So this is really very general, because you make no assumptions at all, not even invertibility, and the resulting semi-group of transformations is remarkably interesting and rich.
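To make the semi-group idea concrete (this is a toy example of ours, not from the lecture): take the non-injective map f(n) = n² mod 10 on a finite set. It has no inverse, but the forward iterates still compose like a semi-group, f^(m+n) = f^m ∘ f^n.

```python
def f(n):
    # a non-injective map on {0, ..., 9}: f(2) == f(8) == 4, so no inverse map exists
    return (n * n) % 10

def iterate(f, x0, n):
    # the n-th forward iterate f^n(x0); f^0 is the identity
    x = x0
    for _ in range(n):
        x = f(x)
    return x

# non-injectivity: two distinct points share the same image
assert f(2) == f(8) == 4

# semi-group law: f^(m+n)(x0) == f^m(f^n(x0)) for all forward iterates
for x0 in range(10):
    for m in range(5):
        for n in range(5):
            assert iterate(f, x0, m + n) == iterate(f, iterate(f, x0, n), m)
```

The semi-group law holds automatically because iterating m + n times is the same as iterating n times and then m more times; invertibility is never needed.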
In fact, the richest theory, in some senses, is the theory of iterating dynamical systems that are not invertible, and we will see many examples of this. In this course we will concentrate on the discrete time case, because in discrete time the subject is much less technical. In continuous time, for flows of ODEs, you need to deal with vector fields, the solutions are curves, there is a lot of geometry involved, and it is technically much more complicated. But many of the concepts and results, as I said before, are similar. So we will deal with the discrete time case. There is a hugely rich theory, with many open problems; it is a very active area of research, and it is my own area of research. In this course we will discuss some of the foundations of these areas of research in dynamical systems. So the first problem, now that we have defined the notion of a dynamical system, is of course: what is the object of the field? What do we want to study? We want to study such dynamical systems, but what does that mean? We want to describe such systems, we want to look at examples, and we want to be fairly systematic in developing tools to say which aspects of these groups we are interested in studying. We also want to address the problem of the classification of dynamical systems. The description of the world of dynamical systems goes hand in hand with the problem of classification, because we want to look at two systems and say what they have in common and in what ways they differ. More generally, we want to understand, in some sense, the space of all dynamical systems. OK, so we are going to start by defining the fundamental notion for discrete time, which is simply the analog of the notion of a solution in continuous time.
So let me start with some fundamental definitions. For now, x is just an arbitrary set, and f from x to x is an arbitrary map. It is an extremely simple setting. Starting just from this, what can we define? Well, for x0 in x, we can define the forward orbit of x0 to be the set of all iterates of the point x0: O+(x0) = {f^n(x0) : n >= 0}. So if we have our set x, and a point x0 in it, we are now doing exactly the reverse of what we did before. Remember, when we developed this formalism, we started by saying: we have an ODE, each point has a solution, and looking at all the solutions simultaneously we got a flow. Then we generalized this idea to the discrete time case, and now we are going back to defining solutions. An orbit is a solution, if you want. You take x0 and apply f; f(x0) is some other point, because f is just a map from x to x. Then you apply f again to get another point, f^2(x0), then again to get f^3(x0), and so on, and you get an at most countable set of points. Sometimes, for simplicity, it is convenient to write x_n = f^n(x0). If f is invertible, we can also take the inverse, and we can define the full orbit O(x0) = {f^n(x0) : n in Z}, or just {x_n : n in Z}. So you can go backwards: if f is invertible, you can go to x_{-1}, x_{-2}, and so on. So what can we say about these orbits? In some sense, when we say we want to describe a dynamical system, we mean we would like to describe the structure of these orbits. What kind of structure can they have? At the moment we have very little to go on, because we have no structure on the set x.
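In code, the forward orbit is just repeated application of f. A minimal sketch (the helper name forward_orbit is ours, just for illustration):

```python
def forward_orbit(f, x0, n):
    # the points x0, f(x0), f^2(x0), ..., f^n(x0) of the forward orbit, in order
    orbit = [x0]
    for _ in range(n):
        orbit.append(f(orbit[-1]))
    return orbit

# example: the doubling map x -> 2x on the integers
assert forward_orbit(lambda x: 2 * x, 1, 4) == [1, 2, 4, 8, 16]
```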
In a little while we will add a bit of structure and be able to formulate things in terms of it. But even without structure, there are some very special kinds of orbits. Can anyone think of what would be a special property of such an orbit? Sorry? Exactly. In some cases the orbit can come back on itself. The simplest case is a fixed point: if f(x0) = x0, then x0 is a fixed point. In this case, what is the forward orbit of x0? The set of all iterates of x0 is just {x0} itself, because f maps x0 to x0, so every iterate is again x0. If f is invertible, then the full orbit is also just {x0}: a fixed point is fixed in both forward and backward time. More generally, if there exists some k >= 1 such that f^k(x0) = x0, then we say that x0 is a periodic point. This means you have x0, x1, x2, x3, and then you come back to x0; in that picture k = 4, since f^4(x0) = x0. In general the periodic orbit is {x0, x1, ..., x_{k-1}}. Notice, of course, that if f^k(x0) = x0, then also f^{2k}(x0) = x0. You agree? Clearly, right? All it says is that if after 4 steps you come back, then after 8 steps you also come back. So the period of an orbit is not uniquely defined: if an orbit is periodic of period k, it is also periodic of any multiple of k. So we define the notion of minimal period: the minimal period is the minimum k >= 1 such that f^k(x0) = x0. Generally, when we say an orbit is periodic of period k, we mean the minimal period. Sometimes you have to be careful, but usually it is clear that the minimal period is meant.
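A small sketch of detecting the minimal period of a point (the function name minimal_period is ours; the rotation example is just an invented illustration):

```python
def minimal_period(f, x0, max_iter=10_000):
    # the smallest k >= 1 with f^k(x0) == x0, or None if none is found within max_iter steps
    x = f(x0)
    for k in range(1, max_iter + 1):
        if x == x0:
            return k
        x = f(x)
    return None

# example: rotation by 3 on Z/12Z; every point has minimal period 4,
# even though period 8, 12, ... also bring the point back
rot = lambda x: (x + 3) % 12
assert minimal_period(rot, 0) == 4
```

A fixed point is exactly the case minimal_period == 1, matching the remark that a fixed point is a periodic point of minimal period 1.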
So of course a fixed point is a special case of a periodic point: a fixed point is a periodic point with minimal period 1, the case k = 1. Can we have points that are not periodic, or do you think every point of a dynamical system is periodic? No, not every point is periodic. Of course, we do not have to have periodic points at all, and we will see lots of examples. However, just having defined the notion of orbit already allows us to define a first notion of equivalence between two maps at a very basic level. So how do we say it? We have one map f from x to x, and another map g from y to y, with no particular structure. We would like to say that these two correspond to each other in some sense. How can we do that? Well, one way is to ask: does there exist a bijection h that matches up the orbits of one with the orbits of the other? In some sense, the most basic notion of two dynamical systems being equivalent is that they have the same orbit structure. If one system has a fixed point and the other has no fixed point, clearly you want to say they are different. If all the periodic points of one match up exactly with all the periodic points of the other, at least they have something in common. And you would like more than that: every orbit of one should, in some sense, match up with an orbit of the other, a one-to-one correspondence between the orbits. So how do we define that? We want a bijection, and what do we mean by it matching orbits to orbits? Say we have a point x0 that maps to x1 = f(x0), and we have a bijection h; we look at the image y = h(x0). For this bijection to map orbits to orbits, it should preserve the dynamics: if it maps x0 to y, it must map f(x0) to g(y), clearly.
So here we have the point g(y), the image of y under g, and for the bijection to do what we want, it has to map these points to these points. So we define it this way, our first definition of equivalence of two dynamical systems. Definition: f from x to x and g from y to y are conjugate if there exists a bijection h from x to y such that h composed with f equals g composed with h, that is, h o f = g o h. So first applying f and then h is the same as first applying h and then g. This is exactly the conjugacy condition, and notice that it in fact maps all the orbits to orbits. Just in case you are not sure: because h is a bijection, you can invert h, so the condition can be rewritten as f = h^{-1} o g o h. This is a conjugacy; I am sure you have seen similar conjugacies before, as this is a standard way of formulating equivalence in algebra and in many branches of mathematics. And if you do this, then you can compute the iterates of f: f^n = (h^{-1} o g o h)^n = h^{-1} o g o h o h^{-1} o g o h o ... o h^{-1} o g o h. This is just a composition of functions, and you can see that all the inner h o h^{-1} pairs cancel, leaving f^n = h^{-1} o g^n o h. So the condition is formulated in terms of a single iterate, but it immediately implies the same for all iterates. If you take x0 and look at its 27th image, then h maps this 27th iterate of x0 to the 27th iterate of y = h(x0). So it really maps the orbits to the orbits.
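Here is a tiny invented example of a conjugacy, checking numerically that the single-iterate condition h o f = g o h propagates to all iterates, and that fixed points correspond:

```python
# f(x) = 2x and g(y) = 2y - 1 are conjugate via the bijection h(x) = x + 1,
# since (h o f)(x) = 2x + 1 and (g o h)(x) = 2(x + 1) - 1 = 2x + 1.
f = lambda x: 2 * x
g = lambda y: 2 * y - 1
h = lambda x: x + 1

def iterate(m, x0, n):
    # n-th iterate of the map m starting at x0
    for _ in range(n):
        x0 = m(x0)
    return x0

# the conjugacy for one iterate propagates to all iterates: h(f^n(x0)) == g^n(h(x0))
for n in range(8):
    assert h(iterate(f, 3, n)) == iterate(g, h(3), n)

# fixed points correspond: f fixes 0, and h(0) = 1 is fixed by g
assert f(0) == 0 and g(h(0)) == h(0)
```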
So why is this reasonable? Besides the fact that it intuitively matches up the orbits, what property does a suitable notion of equivalence have to have? There is a very important property. Sorry? What do you mean by preserving the stability? Let me rephrase: what do we need in order to say that this is a good notion of equivalence of dynamical systems? You can check, of course, that because of the conjugacy condition it preserves all the periodic points. The definition is conceived precisely so that orbits map to orbits, and in particular periodic points must map to periodic points; you can check this very easily, it follows immediately from the definition. But that is not what I am asking. Intuitively, this is a natural notion of equivalence, at least a basic one. Think of it more globally: look at the space of all dynamical systems through this notion of conjugacy. Does it divide the space into a good structure? In other words, is it an equivalence relation? That is what every good notion of equivalence needs to be. In particular it has to be transitive: if f is equivalent to g, and g is equivalent to some other map, then f must be equivalent to that third map. Being an equivalence relation means that it divides the space of all possible maps into equivalence classes. It is easy to check that conjugacy is an equivalence relation; I leave it for you as an exercise, since it is almost trivial. You literally just check it, yes?
Because if you have a bijection conjugating g to a third system, then clearly the composition of the two bijections gives you a bijection between the first and the third, and you can easily check that it satisfies the conjugacy equation. So conjugacy is an equivalence relation. This means you can take the set of all dynamical systems, all maps f from x to x on every possible set, and conjugacy defines a partition of this set into (in this case obviously infinitely many) equivalence classes, and all the maps inside each equivalence class are conjugate to each other. So in some sense, if we want to understand one map, it is enough to understand one representative of its equivalence class, and all the others are to some extent equivalent to it. Excuse me? We do not even need x to be fixed, actually; that is a very good question. But of course, within each equivalence class, x and y have to have some relation, because there must at least exist a bijection between them. In this completely formal definition we have not even specified x: it might be a finite set, or an infinite set, or a topological space, or whatever. But for the definition there must exist a bijection, so if the two spaces are finite, they must have the same cardinality, and so on. So this is taken care of by the equivalence relation itself; we do not even need to fix the space. Or, if you want, you can decide to fix the space, which is what one often does: you fix the space, look at the set of all maps on that space, and this still gives you an equivalence relation on that set. So there are many ways in which you can set this up.
And in fact, what I am going to do now is discuss a stronger equivalence relation, because, as we will see in several examples, this conjugacy is actually a very weak equivalence relation: it declares equivalent two systems that, when you look at them, you say, no, this does not make sense, they should not be equivalent. For example, you can have a system where everything converges to one point, and a system where everything moves away from that point, and this definition declares them equivalent, which is not right. We want a stronger notion, and that is what I am going to define now. But this is the fundamental notion of conjugacy, and we will build on it. The way we build on it is by asking for more structure. The only structure this conjugacy really preserves is orbits mapping to orbits, and we have very little information, for example, about non-periodic orbits. To say more, we need a little more structure on the sets x and y; otherwise we have nothing to go on. We need at least, for example, a topological structure. So now suppose x is a topological space. And since it is a topological space, let us assume that f is a continuous map, even though this is not strictly necessary; it is just natural to consider continuous maps on topological spaces. Of course, this is a stronger assumption than before, so everything we discussed still applies: we still have orbits, fixed points and periodic points. But now we can also study the structure of non-periodic orbits. How? By looking at their accumulation points. So we define: let x0 in x be some initial condition. The omega limit set of x0 is omega(x0) = {z in x : f^{n_j}(x0) converges to z for some subsequence n_j tending to infinity}. So what does this mean?
It means we are trying to understand what a non-periodic orbit does. If x0 is periodic, the orbit comes back on itself. If x0 is not periodic, the orbit is not a finite set: the forward orbit of x0 is countably infinite. This is an easy exercise, which is also on your exercise sheet: if the forward orbit is finite, then x0 must be a periodic point, clearly, because the only way the orbit can be finite is if it comes back on itself; if it is not finite, it is countable. So the question is: how do we describe a countable set in the space? Without any structure we cannot really describe it; it could be anything. But with a topological space we can describe the accumulation points of this countable set. We have some countable set, and we do not know where it is. It could have a unique accumulation point: the orbit might converge to a point z, in which case the omega limit set is exactly {z}. Or it could converge to a periodic orbit, or accumulate on some strange set. What does accumulating mean? Each such z is an accumulation point: after some time the orbit comes close to z1, then after some more time it comes close to z2, and it comes arbitrarily close infinitely often to both z1 and z2, in which case both z1 and z2 are in the omega limit set of x0. So the omega limit set is simply the set of all topological accumulation points of the orbit; I am sure you have seen this in your topology course. This gives some kind of description of a non-periodic orbit: it tells you where the orbit is accumulating, where this countable set gets arbitrarily close to. It can even be the whole space.
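A numerical illustration of an omega limit set (a standard toy example, not taken from the lecture): for the logistic map with parameter r = 3.2, a typical orbit is not periodic, yet it accumulates on an attracting period-2 cycle, so its omega limit set consists of exactly two points.

```python
r = 3.2
f = lambda x: r * x * (1 - x)   # logistic map on [0, 1]

x = 0.123                        # arbitrary initial condition in (0, 1)
for _ in range(1000):            # discard a long transient
    x = f(x)

tail = set()
for _ in range(100):             # collect where the orbit accumulates, rounded
    x = f(x)
    tail.add(round(x, 6))

# the orbit of x0 is infinite, but it accumulates on just two points:
# the omega limit set is a 2-point set (an attracting periodic orbit)
assert len(tail) == 2
```

The two points are the attracting 2-cycle of the map, approximately 0.513045 and 0.799455; the non-periodic orbit spirals onto this cycle.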
We will see some examples. This countable set could be dense, in the same way the rationals are dense in the real line: you can have a countable set which is dense in your space, and this can happen dynamically. So the omega limit set is a very, very important concept which we will use a lot, because it is a fundamental way to extend the description of orbits to non-periodic orbits. Again, if f is invertible, we can look at the analogous concept for backward time: if f is invertible, we define the alpha limit set alpha(x0) = {z in x : f^{-n_j}(x0) tends to z for some subsequence n_j tending to infinity}. It is the same idea. Yes? The preimage exists, but it could be multivalued; that is an interesting comment. When we define f^{-1}(x), we can define it as the set of points z such that f(z) = x, right? The problem with the lack of invertibility is exactly that: this set might be empty, or it might contain more than one point. If f is invertible, it always contains exactly one point, so the inverse is well defined as a map. If it contains more than one point, you do not know what the image of x under the inverse should be: if you have a map where two points go to the same point, it is not injective, and the preimage of that point could be either one; you cannot choose. So you are right, and that is a very good comment. Of course, you can also extend these concepts: you can decide to look at the backward orbit of preimages, and in some contexts we will do exactly that. In the follow-up course, the ergodic theory course, we will use and study these preimages a lot.
So for the moment it is just a decision to define the omega limit for forward time always, and the alpha limit only in the invertible case, because then we can work with them in a clean way. For your project you could ask: can we generalize this notion to the non-invertible case? This is how mathematics always works, and particularly what we are doing now: things can be generalized, if you want, in many ways and in many settings. These concepts can be generalized, and people have tried in various settings; sometimes it works, sometimes not so well. For the moment, we only define the alpha limit in the invertible case. OK, so I leave it as an exercise for you to study certain properties of the omega and alpha limit sets. For example, the fact that if the space is compact, then the omega limit set is always non-empty. Can you see why that is true? Exactly: compactness means every sequence has a convergent subsequence, which is exactly what the definition asks for. If the space is non-compact, the omega limit set might be empty. Can you think of a setting where it is empty? Exactly: if your space is the real line and the orbit just goes to infinity, then of course there are no accumulation points, so the omega limit set is empty. There are various other properties: the omega limit set is always closed, and it is an invariant set, meaning that if you take a point in the omega limit set, its image is also in the omega limit set, and so on. These are very useful for you to study, so the best thing is to do this exercise. Now that we have added this structure, we want to strengthen the notion of conjugacy, and we can do so very simply by requiring the bijection to be a homeomorphism and thus preserve the topological structure. So we make a definition.
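The compactness remark can be illustrated numerically (a toy demonstration of ours): on the non-compact space R, the translation x → x + 1 has orbits that escape every bounded set, so no accumulation points; on the compact circle R/Z, an orbit of a rotation stays in a bounded set, so by a pigeonhole argument some iterates must cluster.

```python
# translation on R: the orbit of 0 monotonically escapes to infinity,
# leaving every bounded interval, so its omega limit set is empty
x = 0.0
for _ in range(1000):
    x += 1.0
assert x == 1000.0

# irrational rotation on the circle [0, 1): the orbit stays in a compact set,
# so some pair of iterates must come close together (accumulation points exist)
alpha = 0.618033988749895            # irrational rotation number (about 1/golden ratio)
orbit = sorted((n * alpha) % 1.0 for n in range(1000))
gaps = [b - a for a, b in zip(orbit, orbit[1:])]
assert min(gaps) < 1e-2              # 1000 points in [0, 1) forces a gap below 1/999
```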
Definition: f from x to x and g from y to y are topologically conjugate if there exists a homeomorphism h from x to y which satisfies exactly the conjugacy condition h o f = g o h. So, is it true that if two systems are topologically conjugate, then they are conjugate according to the earlier definition? Clearly yes; this is a much stronger condition. So, two exercises that I have put in the notes. The first is to show that this is also an equivalence relation. This is very easy to see: if f from x to x and g from y to y are topologically conjugate, there is some homeomorphism h1 that conjugates them; if g is topologically conjugate to a third map, say l from z to z, there is another homeomorphism h2 that conjugates those. Then you can just check, by a simple calculation, that the composition of these two gives a homeomorphism between x and z that conjugates f and l. So this is an equivalence relation. Sorry? Yes, from x to y, thank you. The second exercise is that this equivalence relation preserves a little more structure than the previous one. Exercise: if f and g are topologically conjugate, then h(omega(x0)) = omega(h(x0)) for every point x0. We already know the conjugacy maps orbits to orbits and periodic orbits to periodic orbits. What we are saying here is that a topological conjugacy also preserves omega limit sets: take a point x0 with some omega limit set omega(x0), and take its image y = h(x0), which has some omega limit set omega(y); then h maps omega(x0) exactly onto omega(y). This is significantly more structure, and as we shall see, it makes topological conjugacy a much more interesting and much more natural equivalence relation.
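A classical concrete instance of topological conjugacy (standard in the literature, though not stated in the lecture): the tent map and the logistic map x → 4x(1−x) on [0, 1] are topologically conjugate via the homeomorphism h(x) = sin²(πx/2). A quick numerical check of the conjugacy condition h o T = F o h on a grid:

```python
import math

def tent(x):
    # the tent map on [0, 1]
    return 2 * x if x <= 0.5 else 2 - 2 * x

def logistic(x):
    # the logistic map x -> 4x(1 - x) on [0, 1]
    return 4 * x * (1 - x)

def h(x):
    # homeomorphism of [0, 1] conjugating tent to logistic
    return math.sin(math.pi * x / 2) ** 2

# conjugacy condition h o tent == logistic o h, checked pointwise up to rounding
for k in range(1001):
    x = k / 1000
    assert abs(h(tent(x)) - logistic(h(x))) < 1e-9
```

The identity is exact: h(T(x)) = sin²(πx) = 4 sin²(πx/2) cos²(πx/2) = F(h(x)); the tolerance only absorbs floating-point rounding.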
Because if you have a conjugacy that does not preserve omega limit sets, it is like saying: you match up all the orbits, but here you have an orbit accumulating in one place, and it maps to an orbit that in the other system goes somewhere completely different. So matching up the corresponding omega limit sets is an important point for an equivalence relation. OK. Now two more useful definitions that use the omega limit set. Suppose we have a fixed point, f(p) = p. We say that p is attracting if there exists a neighborhood u of p such that omega(x0) = {p} for all x0 in u. So this is our space, this is our point p: if there exists a neighborhood u of p such that all points of u converge to p under iteration, then p is called an attracting fixed point. It is very natural: all points in some neighborhood converge to p, and p is the only accumulation point of each of their orbits. And if f is invertible, we say that p is repelling if, again, there exists a neighborhood u such that alpha(x0) = {p} for all x0 in u. So, attracting and repelling fixed points: in one case all points in a neighborhood of p converge to p; in the other, all points in the neighborhood move away from p. Again, the comment you made before is relevant: the notion of repelling can be formulated in a different way that works even if the system is non-invertible, in terms of points moving away from the fixed point; in the examples we will see it very clearly. But for the sake of these formal definitions, it is easier to assume invertibility here. Do you have any questions? This equivalence relation is a refinement of the previous one, right?
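A standard concrete example of an attracting fixed point (our illustration, not from the lecture): the map f(x) = cos(x) has a fixed point p ≈ 0.739085 with cos(p) = p, and orbits of nearby points (in fact, of every real starting point) converge to it, so omega(x0) = {p} for all x0 in a neighborhood of p.

```python
import math

f = math.cos
p = 0.7390851332151607   # the fixed point of cos, to double precision

for x0 in (-1.0, 0.2, 0.7, 1.5):
    x = x0
    for _ in range(200):
        x = f(x)
    # omega(x0) = {p}: the orbit has p as its only accumulation point
    assert abs(x - p) < 1e-12
```

Convergence here follows from |f'(p)| = sin(p) ≈ 0.674 < 1, the usual derivative criterion for an attracting fixed point.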
So you take one equivalence class of the previous, plain conjugacy, and inside it you split further into topological conjugacy classes. We can do more. Can you see ways to impose even stronger conditions on the equivalence between two systems? In the same way that we strengthened the notion by requiring h to be a homeomorphism, we can assume more on h: we can assume that h is a diffeomorphism, if the spaces allow it; the spaces then need to be a bit more than topological spaces. For example, on Euclidean spaces R^n we have the notion of a diffeomorphism, and we can say that two maps are C^1, or differentiably, conjugate if h is a diffeomorphism. On R^n we could even require h to be a linear map. These are stronger and stronger notions of equivalence, and in some examples we will look at them. Depending on the setting, various notions of equivalence will be natural, some weaker, some stronger. It turns out that differentiable conjugacy is very, very strong. It is too strong, in the sense that if you take two systems, it is almost impossible for them to be differentiably conjugate; they really have to be almost the same system, and I will show some examples. Whereas, as I said before, plain conjugacy is so weak that systems which are really very different can still be conjugate to each other. And it turns out that topological conjugacy is in some ways the right compromise: it preserves a certain amount of structure, basically the omega limit sets, which allows systems some flexibility while still being conjugate, and on the other hand all topologically conjugate systems really do have genuine similarities. So this will be the fundamental notion of equivalence that we use. With that, I have almost finished this general introduction of the fundamental concepts.
Just one more notion, which is very important and very interesting, and which also motivates all of this. Once we have a notion of conjugacy, meaning a notion of two systems being the same, we can talk about what happens when we perturb a system. Suppose you have your dynamical system and you change it a little bit. Then you can ask: is the new system conjugate to the old one or not? In other words, has your change produced a real change in the system? This is one of the fundamental ways in which the notion of equivalence is used. Let me try to formalize it. So consider, for example, the space of all continuous maps on some topological space X. I put "continuous" in brackets because for the general definitions it is not important; we can do this in full generality or for continuous maps on topological spaces. Then we take a family of maps: let f_lambda, for lambda in R, say, be a one-parameter family of maps. This parameter is not to be confused with the parameterization of the dynamical system itself: each f_lambda is a map from x to x, that is, a single point in the space of all maps, not a point in the phase space of the dynamics. Now I can ask the following question. I take some f_{lambda_0}, and then I ask: if I take f_lambda for lambda very close to lambda_0, will it be topologically conjugate to f_{lambda_0}, for example? If they are topologically conjugate, this is very interesting, because it says that this map is robust, stable in some sense: things do not change when you perturb it a little bit. If it is not, that is in some ways even more interesting, because it means something significant has changed.
Maybe the new map has some different periodic orbits or some different omega limit sets, or something has changed so that they're not conjugate. If you cannot find any neighborhood of lambda 0 in which the nearby maps remain equivalent to each other, then it means that the system is very structurally unstable: however small a perturbation you make, something changes. So this is the notion of structural stability. We want to say that a map is structurally stable if, when you perturb it a little bit, it does not change its conjugacy class. This concept was developed in the 1930s or so, in view of the idea of dynamical systems as modeling real life phenomena. One of the main motivations for dynamical systems is to model real physical objects, like the weather, chemical reactions, mechanics, engineering. This is the origin of differential equations and of dynamical systems. Now, usually when you model something, you do not know the system exactly. There's always a little bit of error, a little bit of approximation, in various things: there's an approximation in the way you set up your differential equation to describe your system, and there are some errors in the way you measure the system to define your initial condition. So there are a lot of measurements, and a lot of errors. If your system is structurally stable, then you say, well, maybe it doesn't matter so much that there's a little error, because even if I change the model I have made a little bit, I will get something that is at least topologically conjugate, so in some way the systems are the same. So the notion of structural stability is supposed to formalize this idea: what you would like from a system is structural stability, because it means that getting exactly the right system is not crucial to your description.
Now, for the more precise and formal way of defining structural stability, we really need, rather than a family here, a topology on the space of all systems. So suppose we have a topology on the space X of all maps. Then for any given f in X, we can ask if f belongs to the interior of its conjugacy class. This is the formal way to formulate the notion of structural stability: f belongs to the interior of its conjugacy class. So here we said that we have our conjugacy classes. And what does it mean, a topology on the space of dynamical systems? It means that we know what it means to be close, we know what a neighborhood of a dynamical system is, so we know what the interior means. Interior means there's a neighborhood that's all contained in its conjugacy class. So if you take a map f and it has a neighborhood that is contained inside its conjugacy class, which means that all the maps in that neighborhood are conjugate to the original map, then this map is called structurally stable. If it is, we say f is structurally stable. If it is not, if it's on the boundary of its conjugacy class, then, of course, generally you can still find some perturbations that remain inside the conjugacy class, but you can also find arbitrarily small perturbations that land outside the conjugacy class, so they belong to a different conjugacy class. And that means that there's a bifurcation. We call it a bifurcation when, as you move across, if you take a family of systems that moves across two conjugacy classes, then in the middle you get a bifurcation. Because on one side you have one conjugacy class, on the other side you have another, and then you have to study how that bifurcation happens: it changes the topological structure of the system. We will study this for the whole course; we will focus on this and apply these ideas systematically.
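To make this concrete, here is a small numerical sketch of my own, not from the lecture, using as a preview the one-dimensional linear maps f_a(x) = a x that we study in a moment. I take as an assumption the topological classification of these maps (which the course justifies later): the conjugacy class of f_a is determined by which region the parameter a falls into. Structural stability of f_a then just means that the class is locally constant in a, and a bifurcation happens exactly at the boundary parameters 0, 1, minus 1.

```python
# Sketch (my own illustration): structural stability as "being in the
# interior of your conjugacy class", previewed on f_a(x) = a*x on R.
# The class labels below are an assumption, anticipating the
# classification of 1D linear maps discussed later in the course.

def conjugacy_class(a):
    """Label the (assumed) topological conjugacy class of f_a(x) = a*x."""
    if a == 0:
        return "collapse to 0"
    if a == 1:
        return "identity"
    if a == -1:
        return "period-2 flip"
    if 0 < a < 1:
        return "orientation-preserving contraction"
    if -1 < a < 0:
        return "orientation-reversing contraction"
    if a > 1:
        return "orientation-preserving expansion"
    return "orientation-reversing expansion"

def locally_constant_class(a0, eps=1e-6):
    """True if every parameter within eps of a0 has a0's class."""
    return all(conjugacy_class(a0 + d) == conjugacy_class(a0)
               for d in (-eps, eps))

print(locally_constant_class(0.5))   # True: a0 = 1/2 is structurally stable
print(locally_constant_class(1.0))   # False: a0 = 1 is a bifurcation point
```

Note how the three exceptional parameters 0, 1, minus 1 are exactly the ones that fail the test: any neighborhood of them meets two different classes.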
So you will have time to get used to them and to understand them better. At the moment I'm leaving it very abstract, but I want to explain what the fundamental point of view of this course is going to be. So notice, of course, that this notion depends on two things. First, it depends on the notion of conjugacy that we use, because the notion of conjugacy defines the conjugacy classes. If we use simple conjugacy, we get a certain set of conjugacy classes. If we use topological conjugacy, we get a refinement of those, so we get much smaller conjugacy classes. If we use differentiable conjugacy, we get even smaller conjugacy classes. So the stronger the conjugacy, the more likely it is that you're on the boundary of one of these conjugacy classes, and so the more difficult it is to be structurally stable, because you want to be in the interior. And second, of course, it depends on the topology that you have on the space of maps, because the topology determines what we mean by a neighborhood. Again, we will see several examples where this matters: once you fix the notion of conjugacy, you can have one topology for which some system is structurally stable but for a different topology it is not, because it depends on what you mean by a neighborhood; a set that is a neighborhood for one topology need not be a neighborhood for another. So this is really the abstract setting. And now let's just take a two-minute break, and then we will come back and start our systematic study. OK, so after all these fundamental definitions and concepts, we're now ready to do what we promised, which is the systematic study of certain classes of dynamical systems. We will start with the simplest possible class, and I want to try to use it to illustrate these ideas of various kinds of conjugacy and structural stability and so on. So what is the simplest class?
So, one-dimensional linear maps, OK? This means the class of linear maps A from the real line to the real line, given by A of x equals a x for some a in R. Yeah, linear, in the sense that in this one-dimensional case it's almost too trivial to even give a definition, but we will do some two-dimensional cases, yes, in which the definition of linear is a bit less trivial. So what is the dynamics of such maps? Clearly, it depends on a; different things happen for different a's. So notice first of all that a equals 0 is a very special case. What happens for a equals 0? Every point maps to 0. So this is the real line, this is 0, this is R, this is x; we apply A, and x maps to 0. Everything maps to 0 after one iterate. 0 is a fixed point, and it absorbs everything immediately; it's not completely trivial, right? Is this map invertible? It's not invertible: everything maps to 0, OK? What about for a different from 0? Suppose a equals one-half. So there are different cases. In fact, I'm not going to go through them all now, because once you start getting the hang of it, it's fairly elementary to look at all the different cases, so I will leave it as an exercise. But just to help you: A^n of x, of course, is equal to a to the power n times x, OK? So you can very easily see all the cases. If a is between 0 and 1, this tends to 0 in forward time, so you actually get that x moves towards 0; in backward time, when n is negative, this tends to infinity. So if a is between 0 and 1, everything goes towards 0 in forward time and away from 0 in backward time. If a is between minus 1 and 0, then it's a little bit more tricky, because the orbit is switching sides all the time, right? Because the sign of a to the n times x is changing all the time.
The orbit is negative or positive depending on whether n is even or odd, but it's still converging to 0; it's still going like this, right? And when a is bigger than 1 in absolute value, you have the corresponding expanding cases. When a is equal to 1, what happens? It's the identity, exactly, and so every point is a fixed point. The identity map is a very interesting counterexample to a lot of things, OK? So what about when a is equal to minus 1? That's right: every point just switches back and forth like this. It's period 2; every point is periodic of period 2, except for the fixed point at 0, right? OK, so I will let you study these. This is an important exercise, OK? Even if you can see it, write it down. Make sure you understand all the different possibilities for a, because there are not that many possibilities, and what happens to x and the omega limits. You can study this by hand completely, OK? And I will use these properties of the map. As for notation, notice that A is invertible if and only if a is different from 0. Also, it turns out that the cases 1 and minus 1, as you've already seen, are very special cases, and it will be useful to use this terminology, which is not completely justified now but will be justified later: A is hyperbolic if a is different from plus or minus 1, OK? It turns out that these are really the only three special cases, and everything else is basically fairly straightforward. So we would like to study the conjugacy problem, the classification problem, OK? Remember, I use this example as the first fundamental test case. So how many different linear maps are there? There is an uncountable number, OK, because they're parametrized by R. But you can immediately see that many of them have very similar dynamics, right?
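The case analysis above is easy to check numerically. Here is a small sketch of my own (not from the lecture) using the formula A^n(x) = a^n x:

```python
# A quick numerical check (my own sketch) of the case analysis for
# A(x) = a*x, using the closed form A^n(x) = a**n * x.

def iterate(a, x, n):
    """n-th iterate of A(x) = a*x; n may be negative when a != 0."""
    return a**n * x

x0 = 1.0

# 0 < a < 1: orbit tends to 0 forward in time, to infinity backward
assert abs(iterate(0.5, x0, 50)) < 1e-10
assert abs(iterate(0.5, x0, -50)) > 1e10

# -1 < a < 0: the orbit alternates sides but still tends to 0
orbit = [iterate(-0.5, x0, n) for n in range(4)]
print(orbit)  # signs alternate while the absolute value shrinks

# a = 1: the identity, every point is fixed
assert iterate(1.0, x0, 100) == x0

# a = -1: every nonzero point has period 2
assert iterate(-1.0, x0, 1) == -x0 and iterate(-1.0, x0, 2) == x0
```

Writing out the remaining cases (a greater than 1, a less than minus 1, a equals 0) in the same style is exactly the exercise mentioned above.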
For example, as we just said, if you take a between 0 and 1, then everything does the same thing. So that indicates that you would like to say, OK, all the linear maps with a between 0 and 1 should be conjugate, at least in some way, OK? This is the key point of conjugacy and classification: you reduce what initially looks like an uncountable number of different cases to just a few equivalence classes. And then, as long as you study the properties of these few equivalence classes, you have essentially understood, to some extent, all possible linear maps. This will also be a very good example for studying the difference between plain conjugacy and topological conjugacy, OK? Because for plain conjugacy, for the simple conjugacy, we'll have the following result. Proposition: any two invertible hyperbolic one-dimensional linear maps are conjugate. Now, it is not true, of course, that they're all topologically conjugate. Why can they not all be topologically conjugate? Can you see why? Exactly. What do you mean, the omega limit is not the same? That's right. Suppose you have two maps, one map A here and another map B here, with A of x equals a x and B of x equals b x. If a is between 0 and 1, all the points converge to 0 in forward time, so 0 is the omega limit set of every point, right? And if b is bigger than 1, then all the points move away from 0 in forward time, OK? And remember the property of topological conjugacy: it preserves omega limits. It maps points to points, and it maps omega limit sets to the corresponding omega limit sets. So these two maps cannot be topologically conjugate, because whatever a given point is mapped to, in forward time the point converges to 0 while its image goes off to infinity, so the map would not send omega limits to omega limits, OK?
I will say this more systematically later, but I just wanted to point it out at this moment. So plain conjugacy is a weaker form of conjugacy, and we're going to study both cases. We're going to show that any two invertible hyperbolic one-dimensional linear maps are conjugate, which also shows what a weak notion this is, right? Because here, maps where everything converges to 0 and maps where everything goes to infinity are conjugate, OK? So it makes everything into a single equivalence class, except for those three special cases, which is good. But on the other hand, this equivalence class contains maps which we don't really like to think of as the same; they cannot be topologically conjugate. Of course, what will be important to show is that if you take a and b both between 0 and 1, say, so that they look like they have the same behavior, then they really are topologically conjugate, and that is not trivial, OK? It will need some work, and that's what we'll do. But first we prove the proposition, which anyway is the first step towards constructing the topological conjugacies. And I want to introduce the method, which is very interesting: an abstract method called the method of fundamental domains. So let me define this idea. Definition. Let X be a set, let f from X to X be an invertible map, and let X prime, a subset of X, be an invariant set, an invariant subset. This means that f of X prime is equal to X prime, OK? So we have our set X, and we have some subset X prime with f of X prime equals X prime. Notice that this is just for a little bit of generality: if f of X prime is equal to X prime, it means that you can just restrict yourself to X prime, and it doesn't matter what happens outside. So you can either take X prime equal to X, or you can apply the definition to any invariant subset of X and just look inside X prime, without seeing anything else that happens.
Now we say that a subset U of X prime is a fundamental domain for X prime if for every x in X prime there exists a unique tau equals tau of x in Z such that f tau of x is in U. So it's very simple. I ask, OK, can I find some subset U with the following property: the orbit of every point of X prime lands once and only once inside U. The orbit of every point intersects U in a single point. It's like saying that it's a gate: every orbit has to pass through this gate, and it spends only one day there, OK? It cannot spend more or less time. That's a fundamental domain. What it means is that the images and pre-images of U are pairwise disjoint and cover X prime, in some sense. We will see in this example how we'll use these fundamental domains; in fact, we'll construct some fundamental domains here. But first, we'll show that if you have a fundamental domain, then this is very powerful. You cannot always find a fundamental domain: a fundamental domain is a highly non-trivial object. In a general dynamical system, you might not be able to find a fundamental domain, at least not one that looks good geometrically; you might describe one abstractly, but it's not easy. If you have fundamental domains, then you basically have a conjugacy. So, lemma: suppose f from X to X and g from Y to Y are two invertible maps, suppose X prime in X and Y prime in Y are invariant subsets, and suppose U in X prime and V in Y prime are fundamental domains. If there exists a bijection h tilde from U to V, then f restricted to X prime and g restricted to Y prime are conjugate. Yes, sorry, yes. The proof is simpler than it might seem, but it's quite remarkable, because the secret here is that all the structure is contained in the existence of the fundamental domains. That's why I told you that the existence is non-trivial.
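Before the proof, a concrete example may help; this is my own construction, anticipating the linear maps above. For f(x) = x/2 on the invariant set X' = (0, infinity), the interval U = [1/2, 1) is a fundamental domain: every orbit x, x/2, x/4, ... (and backwards 2x, 4x, ...) passes through U exactly once, at time tau(x) = floor(log2(x)) + 1.

```python
# Sketch (my construction): U = [1/2, 1) is a fundamental domain for
# f(x) = x/2 on X' = (0, infinity); every orbit hits U exactly once.
import math

def tau(x):
    """Unique integer with f^tau(x) = x / 2**tau in [1/2, 1)."""
    # 1/2 <= x / 2**t < 1  is equivalent to  t - 1 <= log2(x) < t
    return math.floor(math.log2(x)) + 1

def in_U(x):
    return 0.5 <= x < 1.0

for x in (0.1, 0.5, 0.75, 1.0, 3.7, 100.0):
    t = tau(x)
    assert in_U(x / 2**t)                        # the orbit lands in U...
    hits = [n for n in range(-60, 61) if in_U(x / 2**n)]
    assert hits == [t]                           # ...and exactly once
print("U = [1/2, 1) is a fundamental domain for f(x) = x/2 on (0, inf)")
```

Note that tau is negative for points already below 1/2: the gate is reached backward in time, exactly as allowed in the definition.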
Here I'm saying that if two systems have fundamental domains such that you can define a bijection between them, then they're conjugate. So here we have X prime, here we have Y prime; here we have our fundamental domain U, here our fundamental domain V, and we have a bijection h tilde from U to V. We want to construct a conjugacy, so we want to extend this bijection to a bijection from X prime to Y prime in such a way that it satisfies the conjugacy condition. And this is what we do. So we want to define an h. First, let's define h on U to be equal to h tilde. Now we take a point x here, and what do we know? Because U is a fundamental domain, after some time the orbit of x will land in U: f tau of x, for the unique tau, will be here, so this tau is well defined. Once it's in here, we can map it to the other side with h tilde. Well, we mapped it to the other side; that's a start. Now, what is the natural thing to do to decide what the image of this x is going to be? Now we're on the other side, at the point h tilde of f tau of x. What are we going to do here? What's your suggestion, Maria? What would you do? Well, look, imagine that this is a river and this is the bridge, the only way you can cross the river. On each side you've got a whole population of different villages. From your village, you go to the village where the bridge starts, you cross the bridge, and then what would be the village corresponding to the one you started from? Well, you go back the way you came. So if you iterated forward by 27 iterates to reach the bridge, now that we're on the other side, let's go backwards by 27 iterates. If we call this point y, let's look at g minus tau of y, and let's define this point to be the image of x. And let's see if it works.
So, more specifically, for all x in X prime, we let h of x, by definition, be the following: let tau equal tau of x; first we apply f tau, then we apply h tilde, and then we apply g minus tau. The same tau, the same tau, notice that. So this will be a bit of a magical calculation I will do now, which will give the result. You have a question? What is your doubt? They're both invertible. Yes, they're both invertible. So notice: what is the return time of f of x? What is tau of f of x? If x has return time tau, then tau of f of x equals tau of x minus 1, because you're one step closer. Suppose you need 27 steps to get from x to U; how many steps do you need starting from f of x? 26. So it's very simple. Also, remember that tau can be positive or negative; here intuitively I thought of going forward in time, but it could be backward in time, and it's the same thing: if tau is negative, you are one step further forward, so again tau of f of x is tau of x minus 1. Think about it a little bit; this is the thing to check. And then, using this fact, the rest is just one line. We want to check h composed with f, at x. What does this mean? We're trying to prove the conjugacy equation, so we want to show that this is the same as g composed with h. So we just check, applying the definition: h of f of x is g to the minus tau of f of x, composed with h tilde, composed with f to the tau of f of x, applied to f of x. This is just the definition, where everywhere I had x I now put f of x. And now I use the fact above, and I write this as g to the minus tau of x, plus 1, composed with h tilde, composed with f to the tau of f of x, plus 1, applied to x. So I have just absorbed the extra f of x into the exponent. Because this is f to the tau of f of x, applied to f of x.
It's the same as f to the tau of f of x, plus 1, applied to x; I've just added one to the iterate. You agree, Dora? I take the 27th iterate of f of x; that's the same thing as the 28th iterate of x. So I write this as g to the minus tau of x, plus 1, composed with h tilde, composed with the following: because tau of f of x is equal to tau of x minus 1, this last part is just f to the tau of x, applied to x. And then I write the whole thing as g, composed with g to the minus tau of x, composed with h tilde, composed with f to the tau of x, applied to x. And everything after the first g is just the definition of h of x, so this is equal to g of h of x, which is what we wanted. I know it looks a little bit like a magical calculation, but it shows that it works. We haven't formally shown that h is a bijection; I will leave that as an exercise. The definition of conjugacy says that there exists a bijection between the two sets that satisfies the conjugacy condition, and if you think about it a little bit, it will be clear that h is a bijection. So, exercise: h is a bijection. This proves the lemma. OK, we will finish here now, just a last comment. As I said at the beginning, the things we're doing in this field are fairly elementary, but that doesn't mean they're not sophisticated at the same time. So there are various dangers in this. There's a danger, on the one hand, that you think, oh, this is simple, this is easy, I understand it, you have an intuition, and then you don't really study, and you don't realize that there are some subtleties there. So if you feel that everything is clear and it all makes sense, still make sure you do the exercises, because there are some subtleties in there. This is always true in mathematics. But there's another thing.
If, on the other hand, you're getting a little bit lost, then it's not so difficult to catch up, because there's nothing complicated; the concepts might just be unfamiliar to you. This business of looking at orbits, of iterating, is just a question of familiarity. Again, if you do the exercises, you get used to what it means: iterating x, iterating f of x, taking different iterates, compositions, and so on. It's all fairly elementary; it's not that difficult. But you must do the exercises and make sure that you think about this at home. Just the lecture is not enough for you to really digest this stuff. Feel free to ask me any questions at any time. And I think this is enough for today. Thank you.
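As a worked postscript (my own sketch, not part of the lecture): the fundamental-domain construction of h from the lemma, carried out concretely for the two linear contractions f(x) = x/2 and g(x) = x/3 on X' = Y' = (0, infinity), with fundamental domains U = [1/2, 1) and V = [1/3, 1) and an affine bijection h tilde between them. The names tau_f and h_tilde are mine, chosen to mirror the notation of the proof.

```python
# Sketch (my own worked example of the lemma): conjugating f(x) = x/2
# and g(x) = x/3 on (0, infinity) via the fundamental domains
# U = [1/2, 1), V = [1/3, 1) and an affine bijection h_tilde: U -> V.
import math

def tau_f(x):
    """Unique t with f^t(x) = x / 2**t in U = [1/2, 1)."""
    return math.floor(math.log2(x)) + 1

def h_tilde(u):
    """Affine bijection sending U = [1/2, 1) onto V = [1/3, 1)."""
    return 1/3 + (u - 1/2) * (2/3) / (1/2)

def h(x):
    """h = g^{-tau} o h_tilde o f^{tau}, exactly as in the lemma."""
    t = tau_f(x)
    return 3**t * h_tilde(x / 2**t)   # g^{-t}(y) = 3**t * y

# Check the conjugacy equation h(f(x)) == g(h(x)) at sample points.
for x in (0.07, 0.5, 0.9, 1.0, 12.34):
    assert math.isclose(h(x / 2), h(x) / 3)
print("h o f == g o h on the sample points")
```

The key identity tau_f(x/2) = tau_f(x) - 1 is what makes the two sides agree, just as in the calculation above; the loop verifies it numerically for points that reach U both forward and backward in time.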