All right. I'd like to begin by addressing a question that someone asked after the last lecture. It's a very good question. The question, if I interpret it correctly, was: first of all, where have we been using smoothness so far? And secondly, is smoothness really important? So first of all, what do I mean by smoothness? If I say that a map is smooth, and I just use the adjective smooth, I mean that it's infinitely differentiable. We can actually talk about spaces of dynamical systems: the set of all C-infinity self-maps. That's a big space, and we can ask what properties are common in these spaces of dynamical systems. That's a whole branch of dynamics, looking at the general picture in this enormous space of all dynamical systems. To give some indication of why smoothness might matter, I'll tell you about a couple of results. One goes back to 1941. Look at the space of all volume-preserving homeomorphisms of a manifold, say the torus. So let's look at all area-preserving homeomorphisms of the torus, and let's say two of them are close if they're close in the uniform topology. So there's a topology on this space, and you can talk about large sets of homeomorphisms because you have the Baire category theorem. In that space there's a very large set, a countable intersection of open dense sets, which is sometimes called a residual set: the homeomorphisms that are ergodic form a residual set, so in particular a dense one. And even more than that is true. That was proved by Oxtoby and Ulam in 1941. So the typical area-preserving continuous dynamical system is ergodic: if I have one that isn't, I can change it a little bit to get one that is. But these are only continuous objects. They have no differentiable structure; you can't take derivatives.
If you've seen Weierstrass's function, which is continuous but nowhere differentiable, that's what these look like: really, really non-smooth. By contrast, if you go to C-infinity and look at the space of all C-infinity diffeomorphisms of the torus (C-infinity maps with a C-infinity inverse) that preserve area, then there are open sets of diffeomorphisms that are not ergodic. So there are diffeomorphisms that fail to be ergodic, and you can't approximate them by anything ergodic. Every homeomorphism can easily be approximated by something ergodic, but there are C-infinity diffeomorphisms that cannot be. And that's because of KAM theory, developed first by Kolmogorov in the 50s and then by Arnold and Moser. It very much uses a lot of derivatives, at least three: in fact you could make it C^3 on the torus, three times continuously differentiable, and the statement is still true. So whether a system has a property, even one as simple as ergodicity, depends in a delicate way on the degree of smoothness. I should say that for C^1 area-preserving diffeomorphisms it's still an open question whether the typical system is ergodic. So if you just add one degree of differentiability, the picture changes and all the techniques for continuous systems go away. Most of the systems we focus on today will be C^2; this week we're going to need at least two degrees of differentiability. What I'm going to do today is not talk very much about these ugly smooth systems. I mean, from what I'm saying, smooth systems do not sound ugly at all; they sound very smooth. But in contrast to the linear examples we've been considering, these f_A's and these rotations, a general smooth system has some very complicated features that you can't analyze using Fourier series.
For a very simple example: well, Khadim gave one at the very end of his lecture that he said was ergodic, but he did not prove that. I might give some idea of why that system is ergodic by the end of the week. But here's another system. Consider the map f(x, y) = (2x + y + ε sin(2π(x + y)), x + y). The sine is somewhat arbitrary; I chose it so the perturbation is periodic in x + y. This is a map from the torus to itself, and it preserves Lebesgue measure. Here ε is some fixed nonzero number, but small; let's say it's small, and call the map f_ε. When ε = 0, this is just f_A for A = (2, 1; 1, 1), the map we've analyzed so far, and we know it's ergodic. When ε ≠ 0, if I wrote this correctly, f_ε still preserves Lebesgue measure. Why? Because f_ε = H_ε ∘ f_A, where H_ε(x, y) = (x + ε sin(2πy), y) (I was going to call it Ψ, but let's use H). This preserves volume, and f_A preserves volume. How can we tell that H_ε preserves volume? Let me do a little smooth computation here, because I think you guys are craving it; I'm totally going off script. Let's write down the derivative of H_ε at a point (x, y): DH_ε(x, y), which is formally a map on the tangent space to the two-torus, but if you like we can just think of it as a matrix. Written with respect to the standard basis of R^2, the total derivative, the Jacobian matrix, has the following form: differentiating the first coordinate with respect to x I get 1, and with respect to y I get ε · 2π cos(2πy); the bottom row is 0, 1. And you'll notice that the determinant of this matrix is 1: the infinitesimal distortion of area is constant, equal to 1.
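As a quick sanity check (this is a sketch of my own, not part of the lecture; the value ε = 0.1 and the finite-difference step are illustrative choices), we can compute the Jacobian determinant of the lift of f_ε = H_ε ∘ f_A numerically and see that it is 1 at every point, confirming the calculation above:

```python
import numpy as np

EPS = 0.1  # illustrative perturbation size (any fixed small epsilon works)

def H(p):
    """Lift of the shear H_eps(x, y) = (x + eps*sin(2*pi*y), y)."""
    x, y = p
    return np.array([x + EPS * np.sin(2 * np.pi * y), y])

def f_A(p):
    """Lift of the linear map f_A for A = [[2, 1], [1, 1]]."""
    x, y = p
    return np.array([2 * x + y, x + y])

def f_eps(p):
    """Lift of the perturbed map f_eps = H_eps o f_A."""
    return H(f_A(p))

def jacobian_det(g, p, h=1e-6):
    """Central-difference approximation to det Dg(p)."""
    J = np.empty((2, 2))
    for j in range(2):
        dp = np.zeros(2)
        dp[j] = h
        J[:, j] = (g(p + dp) - g(p - dp)) / (2 * h)
    return np.linalg.det(J)

rng = np.random.default_rng(0)
for _ in range(5):
    p = rng.random(2)
    print(round(jacobian_det(f_eps, p), 6))  # prints 1.0 each time
```

Working with the lifts (no mod 1) is legitimate here: the derivative of the torus map at a point equals the derivative of its lift, so the determinant computation is unaffected.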
And so from calculus, from the change of variables formula (I'll say more later), this implies that H_ε preserves Lebesgue measure. So here's a map, f_ε, very close to linear but nonlinear. It preserves area, but is it ergodic? I defy you to try to prove that using Fourier series. All right? It's just not going to happen: even though there's a sine in the formula, it's not a Fourier series kind of problem. So what we are going to show you this week are techniques that work for (2, 1; 1, 1) but also work for systems like this, for proving ergodicity. We're going to give you some techniques that will allow us to deal with nonlinear examples like this. It's an obvious question, right? Is it ergodic? I don't actually know the answer to this question in general, but I do know the answer as long as ε is sufficiently small. So there you go. Answer: yes, if ε is sufficiently small; otherwise, I'm not sure. OK, so that was a little bit of looking forward. Why are we doing all these things? Believe me, all these disparate techniques will come together, and we will be able to prove some, I think, kind of interesting things using them. In any case, I'm going to give you what I would call a toolbox of techniques that you can begin to use to analyze nonlinear dynamical systems preserving volume. OK, so there's an important principle that's probably going to occupy most of the rest of this lecture. Yes? [Question about the sine.] No, the sine is not significant; it could be any periodic function. Basically, f_ε could be any smooth, area-preserving perturbation of f_A. [Question: how small does ε have to be?] I could write it down. It's not that small, not terribly small; ε doesn't have to be 0.0-something tiny. It's a pretty reasonable ε.
It has to do with the size of the eigenvalues of this matrix: ε has to be small on the order of the eigenvalues of the matrix; too far from that, and I'm not sure. Other questions? So the main principle for today's lecture is that oil and water do not mix. It's a property of measurable sets of positive measure; the phrase is just a mnemonic for remembering it. What do I mean by this? More precisely, this is a property of Lebesgue measure; other measures share it in different forms. It says the following: if I have a set of positive Lebesgue measure and I look really closely, zooming in on a point in that set, I will see almost entirely that set and barely see its complement. It's a manifestation of the principle that Lebesgue measurable sets can be well approximated by open sets. For many purposes you could almost think of Lebesgue measurable sets as open sets, because if I look in a small enough ball around almost any point of a set of positive Lebesgue measure, I'm going to see essentially just that set. This is called the Lebesgue density theorem, and I can make it precise. In your exercises I will have you prove a basic form; it's not a very hard theorem to prove, but I'll give you an even simpler version that you can prove with your bare hands. Here's the statement. First, we've talked about conditional expectation of functions, but I haven't talked about conditional measures of sets. Conditional measure is something much simpler than conditional expectation; another word for it is density. Let A and B be sets. We could stick to the torus, or work locally: these could be subsets of R^n or T^n, or in fact any manifold that has some kind of volume on it. Suppose μ(B) > 0, where μ is always going to be Lebesgue measure. And these aren't just arbitrary subsets of R^n.
Let's make them Borel measurable sets in R^n. Then we define the conditional measure μ(A | B) to be the measure of the intersection divided by the measure of B: μ(A | B) = μ(A ∩ B)/μ(B). From probability, you recognize this: if μ is a probability, this is the probability that an event A happens given that B has happened. Geometrically: here's B, and maybe A looks something like this. Oh, I shouldn't have used A again; let's say X and B are the two sets. Here's X. The conditional measure μ(X | B) is just the proportion of B occupied by X. Note that μ(· | B), with a dot in place of the first set, is itself a probability measure: a probability measure you obtain by focusing on one little part of the space. Now, here's a completely side note, because really what I want to talk about is density points. You saw the property of mixing. So suppose, using my notation, I have a space, which I'll call M, carrying a measure-preserving system, and this system is mixing. We saw the definition of mixing, but here's another way of thinking of it: for all A and X, the conditional measure μ(f^{-n}(A) | X) approaches μ(A) as n goes to infinity. In this sense, mixing means that if I look in any subset of the space and let my sets get pushed around... so what is this? This is, if you like, the push-forward. [Question.] No, no, this includes the measure of X; it's in the denominator. In other words, if I push forward the conditional measure μ(· | X) n times, it converges to μ itself. That's a way of saying it. So every little part of the space, from the point of view of the measure, eventually looks like the whole space; you could zoom in as much as you want on any part of the space.
And when you push this forward, it starts to look more and more like Lebesgue measure itself. OK, so that's just a comment. I'm going to give an exercise (I'm getting a little behind) to clarify the connection between conditional measure and conditional expectation. But here's the key theorem, the Lebesgue density theorem. I don't need to assume the measure of X is positive, because the statement is vacuous if it isn't. If X is a Borel set in R^n, or T^n, then for μ-almost every x in X, (*) the limit as r → 0 of μ(X | B(x, r)) exists and equals 1. Here B(x, r) is the Euclidean ball of radius r around x; in fact it could be any family of balls whose shape is controlled as r goes to 0. So around almost any point of X, if I zoom in close enough, I see almost 100% of the set X. This is a super powerful tool that we can use to prove ergodicity for systems that are not algebraic in nature. I'd like to remark that there are lecture notes from last week. I understand that not all of the lectures were covered, but there are nonetheless notes where a lot of these techniques are discussed: proving ergodicity using density points and the Lebesgue density theorem, and using Fourier series to prove ergodicity. So there's quite a bit of material if this is unfamiliar. That's the Lebesgue density theorem. Now let's define DP(X), the set of density points of X: actually, let's make it the set of all x in the whole space, not just in X, such that (*) holds. So you can have a density point that's not actually in the set. Could that happen? What would be an example? I could take an open set and remove a point; the point I removed is a density point of the resulting set, but it's not in the set. OK, and these are called the density points of X.
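To make (*) concrete, here is a small numerical sketch of my own (the set X and the grid resolution are illustrative choices, not from the lecture). For X a finite union of intervals, the density of X in B(x, r) tends to 1 at an interior point and to 1/2 at an endpoint; the endpoints are exactly the measure-zero exceptional set the theorem allows:

```python
import numpy as np

def in_X(t):
    """Indicator of X = [0.2, 0.5] U [0.6, 0.9] (an illustrative set)."""
    return ((0.2 <= t) & (t <= 0.5)) | ((0.6 <= t) & (t <= 0.9))

def density(x, r, n=200_001):
    """Grid estimate of the conditional measure mu(X | B(x, r))."""
    t = np.linspace(x - r, x + r, n)
    return in_X(t).mean()

for r in [0.1, 0.01, 0.001]:
    # x = 0.3 is an interior point of X; x = 0.5 is an endpoint of X
    print(r, round(density(0.3, r), 3), round(density(0.5, r), 3))
    # density stays 1.0 at the interior point, ~0.5 at the endpoint
```

For a set with nonempty interior this is of course easy; the force of the theorem is that the same conclusion holds at almost every point of an arbitrary positive-measure Borel set, such as a fat Cantor set with empty interior.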
And I just claimed that almost every point of X is a density point: if I look at X and subtract off DP(X), what's left has measure 0. That's the theorem. But also, almost every density point has to belong to X. Why is that true? I'll give you a hint: look at the complement of X. If the complement of X has measure 0, the statement is vacuous, because then almost every point belongs to X anyway. But if the complement has positive measure, then it has exactly the same property: almost every point of the complement is a density point of the complement. So if DP(X) \ X had positive measure, I could find a point near which I see almost 100% of the complement, but also almost 100% of X. I don't see enlightened faces; I could draw some pictures. So maybe here's X, and here's a point x that's a density point of X but is not in X. OK, I'm going to just prove it, damn it. Suppose μ(X \ DP(X)) = 0, as the theorem says, but μ(DP(X) \ X) > 0. Then, by the Lebesgue density theorem again, almost every point of the set DP(X) \ X is a density point for that set (I won't do iterated density). But now we have a contradiction. Take such a point x, and take r very small. On the one hand, since x is a density point of X, if r is really small then at least 90% of the ball B(x, r) lies in X. On the other hand, since x is a density point of DP(X) \ X, if r is small enough then at least 90% of the ball lies in DP(X) \ X, which in particular lies in the complement of X. Contradiction.
It lies in M \ X, or R^n \ X if you like: contradiction. So that's an important picture to keep in mind: if I take a set, the symmetric difference between X and its set of density points has measure 0. That's the key property. Let's use it to prove ergodicity of something that's not algebraic, or at least something that seems not algebraic. So: using density points to prove ergodicity. Example one. Let f be a map from the n-torus to itself (I'm trying to avoid general manifolds in these lectures) with the following two properties; then we'll focus on an example. We have a distance here: let d be the Euclidean distance on T^n, the length of the shortest line segment connecting two points. The first property is that d(f(x), f(y)) = d(x, y) for all x, y in T^n; this is what's known as an isometry. The second property is that f is transitive. I'll give a definition that's different from, but equivalent to, the one Khadim gave: f is transitive if there exists some point x_0 in T^n whose orbit {f^n(x_0) : n ≥ 0} is dense (you could take n ≥ 0, or all n). An example when n = 1? Yes, an irrational rotation. It preserves distance: take two points, their distance doesn't change. And did we prove transitivity yet? It's ergodic, and we just saw that ergodic implies some form of transitivity. Anyway, have we done the exercise that an irrational rotation has a dense orbit? Yes? OK. All right, so this is an example.
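A quick numerical sketch of transitivity for the n = 1 example (the rotation number and the target points below are my own illustrative choices): iterating the irrational rotation R(x) = x + α mod 1 from x_0 = 0, the orbit eventually enters any prescribed small arc of the circle.

```python
import math

ALPHA = (math.sqrt(5) - 1) / 2  # an irrational rotation number (illustrative)

def circle_dist(a, b):
    """Distance on the circle R/Z."""
    d = abs(a - b) % 1.0
    return min(d, 1.0 - d)

def first_hit(target, delta, x0=0.0, max_iter=1_000_000):
    """Smallest n >= 0 with d(R^n(x0), target) < delta, or None."""
    x = x0
    for n in range(max_iter):
        if circle_dist(x, target) < delta:
            return n
        x = (x + ALPHA) % 1.0
    return None

for target in [0.123, 0.5, 0.987]:
    n = first_hit(target, 1e-4)
    print(target, n)  # the orbit enters every target arc in finitely many steps
```

Note that first_hit says nothing about how long you must wait; as remarked later in the lecture, transitivity gives no uniform bound on the hitting time.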
So I'm going to give you a kind of silly general proof, using density points, that such an f is ergodic. Two claims. First, the isometry property implies that f preserves Lebesgue measure. That's an exercise: f takes balls to balls of the same radius, hence of the same measure, and then the usual argument applies. Second claim: f is ergodic. Proof. Suppose there exists an f-invariant set X whose measure is neither 0 nor 1; it lies in the open interval (0, 1), OK? Then the measure of its complement is also positive: the complement has positive measure and not measure 1. Let's call the complement X′ for simplicity. We want to derive a contradiction. So let's draw X and X′; maybe we have two different colors here (very hard to see colors). There's X, and there's X′. Note that since X is invariant, its complement is also invariant. So we have two invariant sets, OK? If I take a point and apply f to it, I can make that point move everywhere, but these two sets stay stationary; they are fixed. Now, we have a point x_0 whose orbit is dense. That means I can get as close as I want to any point here, and similarly over here, by iterating long enough. So let's take a density point x in DP(X), and a point x′ that's a density point for X′, and let's take two incredibly small balls around them. I'm not going to draw them incredibly small, because then you won't be able to see them, so just imagine these are incredibly small balls. If I pick r small enough, I can arrange that the ball around x′ is 90% X′, i.e., the conditional measure μ(X′ | B(x′, r)) > 0.9. And let's do the same thing over here with the blue.
So we pick r so that also μ(X | B(x, r)) > 0.9, using that x is a density point. Now, the orbit of x_0 gets as close as I want to x and as close as I want to x′. So there exist integers m_1 < m_2 (which one is smaller is not so important, but let's say m_1 < m_2) such that the distance from f^{m_1}(x_0) to x is much less than r, and the distance from f^{m_2}(x_0) to x′ is also way less than r, less than r over a million if you want. Now consider what f^{m_2 − m_1} does to B(x, r), the ball of radius r around the density point x. Well, f is an isometry, so the image is the ball B(f^{m_2 − m_1}(x), r). And this is unbelievably close to the ball B(x′, r): since f^{m_1}(x_0) is within r over a million of x and f is an isometry, f^{m_2}(x_0) = f^{m_2 − m_1}(f^{m_1}(x_0)) is within r over a million of f^{m_2 − m_1}(x); and f^{m_2}(x_0) is within r over a million of x′. So by the triangle inequality the centers are just slightly off. And in fact, we could have chosen m_1 and m_2 so that they are as close as we want. Then, because X and X′ are invariant, f^{m_2 − m_1}(X ∩ B(x, r)) = X ∩ f^{m_2 − m_1}(B(x, r)), which I just said is incredibly close to X ∩ B(x′, r). Finally, what is the measure of f^{m_2 − m_1}(X ∩ B(x, r))? Well, f preserves measure, so it's just μ(X ∩ B(x, r)), which we just said is bigger than 0.9 times μ(B(x, r)), because x is a density point. So let me definitely finish this proof.
And that equals 0.9 times μ(B(x′, r)), right? They're both balls of radius r, so they have the same measure. All right, let's put this all together. On the one hand, what is the density of X in the ball B(x′, r)? It's the measure of the intersection, μ(X ∩ B(x′, r)), over the measure of the ball. But X ∩ B(x′, r) is incredibly close to X ∩ f^{m_2 − m_1}(B(x, r)), because the ball B(x′, r) is incredibly close to the image ball; and that intersection has the same measure as X ∩ B(x, r). So the density of X in B(x′, r) is essentially the density of X in B(x, r), which is greater than 0.9. I'm pretty sure I said a little more than I needed to. But this contradicts the fact that the density of X′, the complement, in that same ball is also greater than 0.9: the two densities would add up to more than 1. The picture proof is easier than whatever I said; I should have just stuck with the picture. I have a ball that's mostly X, and another ball that's mostly X′. Because of transitivity, I can move the center of this ball, under iteration, as close as I want to the center of that ball. In the process, I don't change the shape of the ball, I don't change the measure of the intersection, and I don't change the set X, because it's invariant. So I can take this whole picture, 90% pink, and move it on top of the 90% orange picture, as close as I want, and that's a contradiction. All right, so I just want to emphasize the two things I used, which I just said in words, because in the next lecture we're going to try to relax some of these conditions. That's the end of the proof, but here are some remarks. We used two key properties; this might not be clear.
The first is that because f is an isometry, f^n preserves both the size and the shape of these balls. If I take a ball over here and iterate until the cows come home (and it might take a long time: transitivity doesn't say I can do it in 10 steps; it might take practically forever), the image stays a ball throughout. So the denominator, the set in which I'm taking the density, the set I need for the density theorem, is not distorted. That's incredibly special and is not going to work in general. The second is that f preserves density, in the following sense: the conditional measure μ(f^n(X) | f^n(Y)) is the same as μ(X | Y). We used that as well. As an exercise (this is really what I was going to talk about at the end of this lecture, but I don't have time): what happens if you try to do the same proof with f(x) = 2x? Think about that. I also have an advanced exercise. It looks like we did something marvelously general: we proved that a transitive isometry is ergodic, and that seems like, wow, something you can't do with Fourier series. But in fact (this takes a lot of advanced thinking, though some of you might be able to figure it out with a lot of help): if I have a compact Riemannian manifold with a transitive isometry, then the manifold is a torus and the isometry is a rotation. So I haven't done anything more general than what we've essentially already seen. But it was illustrative. Are there any questions before we wrap up? [Question about whether this works for metric spaces.] Yes, you could just use metric spaces. Oh, but you mean in my advanced exercise? Well, there it's an isometry of a Riemannian manifold, which automatically makes it very nice. If you know about Riemannian manifolds, the set of isometries automatically forms a group.
In fact it's a finite-dimensional Lie group; isometries are nice. So you don't even need much: it just has to be a manifold. [Question about more general spaces.] No, right, there are compact metric spaces, like the adding machine, where you have a transitive isometry but it's not a manifold. There are other compact groups on which you have transitive isometries, like the adding machine. That's also ergodic, yeah, but this proof is not the proof of that. [Question: is the set of density points measurable?] A set of density points differs from a measurable set by a set of measure zero, so it's Lebesgue measurable. Is that the issue? All the sets we consider, we're allowed to change by a set of measure zero. So while I say Borel sets, I really mean, in some sense, Lebesgue sets: I allow equivalence up to sets of measure zero. The theorem is true for Lebesgue measurable sets, yes; I should have said that. It doesn't have to be just Borel. That's essentially an exercise. Well, I guess you have to show that the set of density points is measurable, but the proof pretty clearly shows that it is. Yes, we should be taking more questions. So I assume you have an announcement. Exactly, yes. OK, so thank you, Amy. Thanks. So just a quick announcement. As I mentioned previously, I would like during this week to have some meetings with some subgroups of you, just to help answer any questions that you specifically might have, also related to your country and your region and possibilities with ICTP. So I would like today, at the coffee break, to meet with two groups. First, I'd like to meet all the participants from North African countries, which I think is Algeria, Egypt, and Tunisia. If we could meet at the coffee break at one of the tables outside on the terrace.
And then after that, at 4:30, I would like to meet with all the students who may be interested in applying for the ICTP diploma program next year. I've spoken to quite a few of you. If you remember, this is a one-year pre-PhD program for students with a bachelor's or a master's, or who maybe have just begun a PhD; it's really a kind of preparatory program for applying to PhDs. I hope it will not be too many of you. I certainly cannot guarantee anyone's admission, but I want to have an opportunity to find out who might be interested for next year, which means applying now, soon, for September 2019, and to clarify any questions that you might have about it. So for the people interested in the diploma program, we'll meet at 4:30, also on the terrace, after the coffee break. And for the North African students, shortly earlier, a little after 4, during the coffee break, also on the terrace. OK, thank you.