I'd like to thank the organisers for this great conference and also for the opportunity to come back to this wonderful city. I think it might be the eighth time I've been here; I should get back more often. So I want to talk about partial hyperbolicity, and I should probably begin by explaining what partial hyperbolicity is going to mean in this talk, because there are multiple ideas out there of what partial hyperbolicity ought to be. I'm going to talk about three-bundle partial hyperbolicity. So M is going to be a compact manifold, f will be a diffeomorphism from M to M, and it will be C^k. For the main part of the talk k will be equal to 1, but for some of the considerations at the beginning k needs to be bigger; so k ≥ 1. And then there is a splitting of the tangent bundle. The splitting is invariant under the action of the derivative, and there are three bundles in this invariant splitting, although conceivably the middle one could be the trivial bundle. The idea is that this bundle, E^s, is uniformly contracted, this bundle, E^u, is uniformly expanded, and this bundle E^c in the middle is dominated by the outer bundles, and this domination is pointwise. In the background I will always have some Riemannian metric, and this Riemannian metric is going to be adapted to the behaviour of the splitting. If you measure lengths of vectors using this metric, then at any one point all vectors in E^s at that point will be expanded by less than all vectors in the central bundle at that point, which in turn will be expanded by less than all vectors in the unstable bundle at that point. But if I move to a different point, you might find that the central vectors at that other point are getting expanded by more than the unstable vectors somewhere else. So the domination is pointwise, but the contraction and expansion are uniform.
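For the written record, here is a standard way to formalize the splitting and the pointwise domination just described (a reconstruction in the usual notation, with the adapted metric):

```latex
% Df-invariant splitting of the tangent bundle:
TM = E^s \oplus E^c \oplus E^u, \qquad Df(E^\sigma) = E^\sigma \quad (\sigma = s,c,u).
% Pointwise domination: for unit vectors v^\sigma \in E^\sigma(x),
\|Df(x)v^s\| \;<\; \|Df(x)v^c\| \;<\; \|Df(x)v^u\|.
% Uniform contraction and expansion: there are constants \lambda, \mu,
% independent of x, with
\|Df(x)v^s\| \;\le\; \lambda \;<\; 1 \;<\; \mu \;\le\; \|Df(x)v^u\|.
```

The point of the adapted metric is that the constants in the last line can be taken uniform over M, while the middle inequalities are only required point by point.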
So that's the set-up for the partial hyperbolicity I want to talk about. There are some examples of this. One of the examples is Anosov diffeomorphisms, but maybe even before I talk about examples I want to say something about the structure that comes from this. First of all, as well as using s, c and u as labels for the bundles, I also want to think of them as the dimensions of the bundles. So in particular E^c is c-dimensional, and later on in this talk c is going to tend to be small. Another thing to remark is that you have stable and unstable foliations tangent to the corresponding bundles. These are nice foliations; in particular they're absolutely continuous. I do not know about a center foliation. There might be one, there might not be; even if there is one, it might not be absolutely continuous. So in place of a center foliation I'm just going to look at some disc. This will just be a disc; basically it needs to be transverse to E^s ⊕ E^u, so sufficiently central. I don't need to be too precise about what these things are. And another thing to say is that this structure is robust under C^1 perturbation: if I make C^1 perturbations, the splitting varies continuously and the leaves of these foliations vary continuously. So I'll just say it is C^1 robust. OK, so now there are examples. There are Anosov diffeomorphisms; that's the case where c is as small as it will ever get, equal to zero. This is the classic example of these things. Then you could have something like the time-one map of an Anosov flow, for example of the geodesic flow of a surface of constant negative curvature. Another example would be some kind of skew product with an Anosov base map; the idea is that you make what you do in the fibres mild enough that it is dominated by the hyperbolicity of the Anosov map in the base. And here's a final example: a four-by-four matrix which acts on the four-dimensional torus.
And I need to copy this because I have to get the right numbers in the matrix. Here it is; whoops, that's a one, that's an eight. I think this matrix was written down by Peter Walters, and this matrix and its perturbations were studied by Federico Rodriguez Hertz in his PhD thesis. As you can tell, this is the companion matrix for a certain characteristic polynomial. The characteristic polynomial is chosen so that the spectrum of the matrix looks like this: two conjugate eigenvalues on the unit circle, one eigenvalue inside, one eigenvalue outside. And the conjugate numbers on the unit circle are not roots of unity. So the numbers have to be somewhat carefully chosen to make sure you get that picture for the four eigenvalues; they are not at all arbitrary. And it's nice to get determinant one as well. OK, so those are some examples. Now the original result about these kinds of things is the theorem of Anosov that volume-preserving Anosov diffeomorphisms are ergodic with respect to volume — at least, say, C^2 volume-preserving Anosov diffeomorphisms. And then the idea was that the intuition that tells you why Anosov diffeomorphisms ought to be ergodic carries over fairly well to this situation. So you would hope that ergodicity is pretty typical for these kinds of examples. The first time I came to Montevideo was in 1995. There was a conference here — well, not here exactly; it was actually held over at the headquarters of the National Bank on the main street downtown. And Mike Shub gave a lecture about these kinds of diffeomorphisms, and in that lecture he made some conjectures. So I think you could say these are conjectures of Pugh and Shub, and I think that lecture here in Montevideo was really the first public airing of these conjectures. I actually asked Charles Pugh about this when I saw him in China, and he said that it was the first public discussion of them.
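The exact entries of the matrix don't survive in this transcript, so here is a hedged numerical illustration with a stand-in polynomial of my own choosing: p(x) = x⁴ − 2x³ − 2x + 1 is reciprocal, so it reproduces the same spectral picture (determinant one, a conjugate pair on the unit circle that is not a pair of roots of unity, one eigenvalue inside the circle, one outside); it is not claimed to be the Walters polynomial.

```python
import cmath
import math

# Hypothetical stand-in polynomial (NOT the Walters matrix, whose entries are
# garbled in the transcript): p(x) = x^4 - 2x^3 - 2x + 1, chosen only to give
# the same spectral picture as described in the talk.
def p(x):
    return x**4 - 2 * x**3 - 2 * x + 1

# p is palindromic, so with y = x + 1/x it reduces to y^2 - 2y - 2 = 0,
# whose roots are 1 +/- sqrt(3).  Each y gives a pair of roots of p via
# x^2 - y*x + 1 = 0 (a pair multiplying to 1).
roots = []
for y in (1 + math.sqrt(3), 1 - math.sqrt(3)):
    d = cmath.sqrt(y * y - 4)
    roots.extend([(y + d) / 2, (y - d) / 2])

moduli = sorted(abs(r) for r in roots)
det = roots[0] * roots[1] * roots[2] * roots[3]  # product of eigenvalues
```

The pair coming from y = 1 − √3 satisfies |y| < 2, so it lies on the unit circle with 2 cos θ = 1 − √3 irrational; the other pair is real, one root inside and one outside the circle.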
So I guess you could say Conjecture 0 is that C^2 volume-preserving PHDs — partially hyperbolic diffeomorphisms — should be typically ergodic. You would hope that this ergodicity would be C^2 dense, and you'd also hope that you'd have a C^2 dense set of diffeomorphisms where it was C^1 open. So actually, let me put in the word "stably". For some reason it's traditional to talk about robust transitivity and stable ergodicity; the "stable" and the "robust" there are synonymous. So I'll stick with tradition and talk about stable ergodicity. This conjecture was broken down into two parts. Part one is that C^2 volume-preserving partially hyperbolic diffeomorphisms should be ergodic if they have a certain property, and the property that you need is essential accessibility. So let me explain what essentially accessible means; first of all, I'll explain what accessible means. There is a notion of a us-path — or, if you think that's sort of North American imperialism, you can call it an su-path. You start somewhere, say at x0. You go to the next point, x1, which is in, say, the stable leaf of x0. Then I go to x2, which is in the unstable leaf of x1. And then I move along a stable leaf again, then I could do something unstable; I could keep going like that. I've drawn a picture which makes it look as though the stable and the unstable leaves are both one-dimensional. That picture is somewhat misleading; probably a more accurate picture would be something like this, where at least one type of leaf is at least two-dimensional. What really matters is where the corner points are. As for the precise path: when the leaves are one-dimensional you don't have a lot of choice about the path, but otherwise all that matters is that consecutive corner points are in the same leaf.
OK, and when you try to prove ergodicity, you realize very quickly that it's extremely helpful if you can go anywhere you like along us-paths. Just imagine for the moment that you wanted to prove that a continuous invariant function was constant. That's much, much weaker than what you need to prove. But if you had a continuous invariant function and you had two points in the same stable leaf, you could just start iterating those two points, and they would get closer and closer and closer. The values of the function at the two points stay the same along the orbits, because the function is invariant; and as the points get closer and closer, those two values have to be close. If you push this far enough, they have to be equal. So a continuous invariant function is going to be constant on stable leaves. And if you run that argument backwards in time, a continuous invariant function is constant on unstable leaves. So if you can go anywhere you like along us-paths, you can prove that continuous invariant functions are constant. With considerably more work, you can make similar arguments for measurable invariant functions. So it's extremely helpful to be able to go places on us-paths. There is accessibility — whoops, yes, accessibility means any two points can be joined by a us-path. And essential accessibility is the kind of notion that says you have this up to measure zero. Pugh and Shub actually gave two different definitions of essential accessibility in two different papers. There was an earlier definition, which is not as good — it was maybe more straightforward — and there's the later definition, which is the one you really want. So I'll tell you what the later one is. I used to argue about this every time I met Yasha Pesin: he'd read the earlier paper, I'd read the later paper. He would tell me one definition; I'd tell him, no, no, Pugh and Shub said this.
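The stable-leaf step of the argument can be written in a couple of lines (a standard formulation of what is said above):

```latex
% Suppose \varphi : M \to \mathbb{R} is continuous and invariant,
% \varphi \circ f = \varphi, and y \in W^s(x).  Uniform contraction gives
% d(f^n x, f^n y) \to 0, so
|\varphi(x) - \varphi(y)|
  \;=\; |\varphi(f^n x) - \varphi(f^n y)|
  \;\xrightarrow[n \to \infty]{}\; 0,
% by uniform continuity of \varphi on the compact manifold M.
% Hence \varphi is constant on stable leaves; applying the same argument
% to f^{-1} gives constancy on unstable leaves.
```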
We were both right. So: you can define an equivalence relation — being joined by a us-path defines an equivalence relation — and essential accessibility means that this relation should be ergodic. That is, if you take a measurable set that's a union of equivalence classes for this equivalence relation, then this set should have measure zero or full measure. That's what essential accessibility is, and that turns out to be what you actually need at the end of the argument that is going to prove ergodicity; that's why they put "essentially accessible" in the conjecture. And if you go and look at the four-dimensional torus example, the stable and unstable foliations have linear one-dimensional leaves, and E^s ⊕ E^u is a plane. You've got this plane in the torus, and if you go along stable and unstable paths, you're trapped in a translate of that plane. There is no way on Earth that this thing is accessible. But the plane that these paths live in is a plane with irrational slope, or whatever planes have in four dimensions, and because of that it's easy to see that you do have essential accessibility here. So essential accessibility is actually essential if you want to cover examples like this one. OK, so that's one of their conjectures. The other one, Conjecture 2, is that stable accessibility — accessibility that is, let's say, C^1 robust — is C^k dense among C^k partially hyperbolic diffeomorphisms. OK, so those were the conjectures, and if you put the two of them together, you get Conjecture 0. So let me give a very brief report on progress towards proving these conjectures. This is going to be extremely summary, so I'm forced to dismiss all kinds of things. For Conjecture 1, there was an initial paper by Grayson — well, first there was early work by people like Brin and Pesin, where the behaviour in the center was extremely nice.
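The mechanism behind essential accessibility in the linear example is that a line (or plane) of irrational slope winds densely around the torus, so any measurable set saturated by its translates has zero or full measure. Here is a toy numerical check of that density, with √2 as an illustrative irrational slope (my choice, not the slope in the actual example): sampling the line at integer times gives the circle rotation n ↦ n√2 mod 1, and the sample becomes ε-dense.

```python
import math

# Toy illustration: the orbit of the rotation by sqrt(2) on the circle
# (equivalently, integer-time samples of a slope-sqrt(2) line on the 2-torus)
# becomes epsilon-dense.  We measure the largest gap left by 1000 points.
alpha = math.sqrt(2)
points = sorted((n * alpha) % 1.0 for n in range(1000))
gaps = [b - a for a, b in zip(points, points[1:])]
gaps.append(points[0] + 1.0 - points[-1])  # wrap-around gap on the circle
max_gap = max(gaps)
```

By the three-gap theorem the gaps take at most three distinct values; for a rational slope, by contrast, the sample would close up into finitely many points and the gaps would stop shrinking.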
Grayson, Pugh and Shub, in the early to mid 1990s, came up with a breakthrough where they were able to study perturbations of the time-one map of the geodesic flow for a surface of constant negative curvature. So Brin and Pesin could understand the unperturbed thing; Grayson, Pugh and Shub worked out how to deal with perturbations, and that really motivated them moving towards these conjectures. Then there's a whole series of papers, which I will skip, and a final paper that Amy Wilkinson and I wrote. What you have is Conjecture 1 under one extra assumption. The assumption is called center bunching. I won't bother defining it because I'm not going to use it at all today; all I will say about it is that it holds automatically if c = 1, if the center bundle is one-dimensional. That's about the best that's known towards Conjecture 1. For Conjecture 2, if you have one-dimensional center, it's true — everything you could wish for is true there. And this was proved by, I will just write, RHRHU; this acronym stands for Jana Rodríguez Hertz, Federico Rodríguez Hertz and Raúl Ures. This is a Uruguayan theorem. And then there's another paper by the Uruguayan trio together with me and Anna Talitskaya. Of these two papers, the first was for the volume-preserving case, the second for the non-volume-preserving case. So with c = 1 it's true, and that's about it: no density is known even for c = 2. Mike Shub, in that lecture in 1995, did offer the following wonderfully helpful remark. He said that in this conjecture, the obstruction to accessibility is, roughly, the stable and unstable bundles together being integrable; so what you're trying to prove is that you cannot robustly have things joining up. That ought to be easier to prove than what you need in the closing lemma, where you're trying to get something to close up.
So he offered the thought that this conjecture ought to be easier than the C^k closing lemma for k > 1. You can feel encouraged by that. Right — and there's one more result I should mention. It's related to this, or maybe it's best related to Conjecture 0. There is a paper by Avila, Crovisier and Wilkinson, and they prove, basically, Conjecture 1 in the C^1 topology: stable ergodicity, let's say, is C^1 dense for C^2 volume-preserving partially hyperbolic diffeomorphisms. You would really like to do better than C^1, but good luck with that. This uses a different mechanism for creating ergodicity than the program of the two conjectures; they talk about, I think, things called superblenders. I refer all questions about those to Sylvain — I'm not even entirely comfortable with ordinary blenders. OK. Accessibility is really good for you, but you need these superblenders as well, and that's not the subject of today's talk. So there is progress, and I think we're kind of stuck with everything now. When you can't do what you really want to do, you start focusing on what you can do. So you could start thinking: here we were asking not just for accessibility being dense; we really wanted robust accessibility to be dense. And this conjecture is formulated in a slightly awkward way — what you might really have liked them to say is that accessibility should be C^k dense and C^1 open. So there's a question you can ask: is accessibility C^1 open, let's say, just for PHDs — for C^1 partially hyperbolic diffeomorphisms? You could also ask the same question about essential accessibility. If you ask it about essential accessibility, the answer is no. There's an example of Brin that shows that the answer is no. The idea is that you have some kind of skew product, maybe over the (2 1; 1 1) automorphism of the two-torus, with two-dimensional tori as the fibers.
The essential accessibility there comes about because you can move inside the tori along some irrational line — that's when you do get essential accessibility. And then with a small perturbation you can change the direction you're moving in into a rational direction, and then you don't have essential accessibility anymore. So there's a nice example of Brin that says you shouldn't ask this question about essential accessibility. But for accessibility you can ask it; it's a reasonable question. And the answer — well, what's known — goes as follows. For c = 1 (so now I'm talking about accessibility) the answer is yes; this is a result of Didier. For c = 2 the answer is yes; it's a result of Avila and Viana. For c = 3, I think the answer is yes. My partners in thinking about this are Jana Rodríguez Hertz and Raúl Ures; we talked about this a few weeks ago while I was in Shenzhen, the new Montevideo in the Northern Hemisphere. This argument is new — I think it can be called work in progress. We certainly have an idea; maybe there are some details to worry about. So I'll put "yes", tentatively, and this is with Jana Rodríguez Hertz and Raúl Ures. And then you get to dimension four, and nobody knows. There's a clear pattern: obviously a group of four people should think about c = 4. I don't know what would happen — the answer might be no. Because the nature of these arguments is that they all exploit low dimensionality of the center, of the center disks, and as the low dimension becomes less low, the ways in which you exploit the low dimensionality become increasingly desperate. So I have no idea what you might do for c = 4. And if you wanted to get to arbitrary dimension, you're going to have to think of something, I think, completely new that we don't have yet. Or maybe it's just false: maybe this is a low-dimensional phenomenon, and maybe low dimensionality ends here. Those are all open questions.
OK, so in the rest of the talk I want to give some sketch of how Avila and Viana do it; and then what we do is a modification of what Avila and Viana do. So let me first say some fairly general things about what you want to do to get openness of accessibility, then I'll try to give some details, and then I will undoubtedly run out of time before anything too awkward comes up. Let's see what I can do. The first observation is this. If I want to prove accessibility, I have some point x0, some diffeomorphism f0, and maybe a central disk through x0. What I would like to do is find some open set U contained in this disk, and I want to know that U is contained in the accessibility class of x0. This accessibility class, of course, depends on f0. And I don't just want to know that — that's not good enough; this has to hold in some robust way. What that means is that if I have (x, f) near (x0, f0), then you still have U contained in the f-accessibility class of x. If you can create this picture robustly — the robustness is crucial — then you're done. Because then if I take f close to f0, and f0 was accessible, then f is close to being accessible: you can take the us-paths that demonstrated accessibility for f0 and perturb them a bit. You don't necessarily get to where you want to go, but you get to within epsilon of where you want to go. So anything close enough to f0 will be epsilon-accessible, for any epsilon. And if I can access this U in the center disk, then coming out of U I can do one stable and one unstable leg, and that's going to give me some Û, which is an actual neighbourhood, an open set in M, with some definite size. Then if I start anywhere, I can do a us-path to get within epsilon of anywhere I want to go, so I can get into this set; and everything in there can be accessed from one place. So you get accessibility.
So the issue is to be able to access some open subset of a central disk in a robust way. The simplest scheme for doing that is the one that Didier used in dimension one. Let me draw the basic picture. I have a point x0 and a center disk — it doesn't really matter exactly what it is, as long as it's transverse to stable and unstable, and it's one-dimensional because c = 1. So you start: you do s, you do u, you do s, you do u. This thing tends to be called the Brin quadrilateral. The chances are, unless you're unlucky, you're going to come back somewhere down the fiber. This is a small local picture, so then you can use shorter paths, and you can contract this Brin quadrilateral down to the quadrilateral of length 0 that doesn't go anywhere. So there you have an interval — there's your open set in the disk. And that way of accessing it is robust; this is bombproof. Perturb everything a little bit and you've still got a picture like that. The only way this can possibly go wrong is if there is absolutely nowhere, anywhere in the manifold, where you can draw a picture like this — everywhere, every single Brin quadrilateral would have to close up. Well, if they all close up, you don't have accessibility. So that's Didier. Life is a lot less simple when you get to two-dimensional center. So how do Avila and Viana do it? Well, you could think of something like the following picture. I have x0 and a center disk. And now I want to think about some space that I'll call US(x); this is the space of us-paths starting at x. I might from time to time want to put various pieces of decoration on this, like US_delta(x): these would be us-paths starting from x in which each leg has length at most delta. OK, so you've got this space, and you can start to imagine that there's a topology on it.
It's not even hard to work out what the topology is: a us-path has a sequence of corner points, and nearby paths have nearby corner points. So you can imagine that. And then you have an endpoint map E from US(x) to the manifold. So you would like to be able to draw a picture like that — a letter Y. Some people prefer the letter X; X and Y are good, the letter I is not good. Three legs is the minimum you need. If you can make a picture like that — robustly, of course — the idea is that these should be endpoint images: you should have something in US(x), something that looks like a Y inside there, being mapped by E to a picture like that in some nice, robust way. Then if I was at a nearby point x, I could get the same kind of path, and because of the shape of the Y, this Y has to intersect that one. And then you can see that you're capturing some neighbourhood of x0 in the center leaf that has to be accessible from x0. So that's the sort of picture they want to create. Now, if you just draw the Y you want — well, if you are very lucky, you might be able to realize your Y as the endpoint image of something. But they don't do that. What they prove is an approximation theorem. I won't try to write down the statement of it; I will just draw a picture of what it says, or at least what it means for us. I have some center disk, I have a curve in it, and I have US of some point, say x0. I would really like to realize this curve exactly as an endpoint curve for a curve in US(x0). I can't do that. What Avila and Viana can do is pick some suitably close points along this curve, and then some epsilon, which sadly needs to be positive — otherwise it's too difficult. But the point is that epsilon can be small; that's why it's epsilon. And then I can get from the first point to the next point with an endpoint path.
I know nothing about this endpoint path — they don't tell you anything about it other than where it begins, where it ends, and that it doesn't go more than epsilon away from where you'd like it to be. This could be a space-filling curve, if it had space to fill; maybe it's just area-filling in my picture. And then you have another, similarly disgusting, piece. So you just have this thing that looks something like that: it tracks along the path you want it to follow, and it stays in some epsilon-neighbourhood of it. So in this picture here, they can't necessarily build the Y exactly, but you can start off exactly where you want to start off. So they can build something that does that, and then you can move over to a nearby point and do the same sort of stuff there, and these green things still have to intersect. So life is good. The key thing in Avila and Viana is the approximation theorem. OK, so then you would like to move up to dimension three. Let me describe the sort of idea that we have in mind for dimension three. One of the things Avila and Viana do not offer, at least not yet, is this: they can approximate curves. It would be wonderful if you could draw a surface that you like and approximate the whole surface to within epsilon. That is not on offer, not even from Avila and Viana. So what we want to do — let me call it our idea — is to take some closed curve. This is all in some center leaf, which is now three-dimensional; so I guess it's just this whole region of the blackboard. We have a closed curve over which we have fairly reasonable control — enough control. And then we have our US(x) and our endpoint map, and what we want in US(x) is a closed curve, and we want the endpoint image of this closed curve to look kind of like that. It's a closed loop, because this was a closed curve.
And it tracks around, approximating the boundary curve we wanted. Oh, and this loop in US(x) should be contractible. So then you can contract it, and you get a disk. We know essentially nothing about this disk — the only thing we know is where its boundary is, and that it is a disk. But that's sort of good enough, because then you could do the same kind of thing with another disk: you do the same approximating business for its boundary, and you can engineer it so that these green curves — they may be disgusting as curves, they may be space-filling — you do know that they're linked and that they bound disks. And then something's going to intersect something else, and you're going to be able to do the same kind of stuff that you did with the Y. So the idea is to be able to build this kind of thing. That's our plan. We have no control over how you fill in this surface — it could be erupting out of the blackboard, heading over to the back wall, who knows; it's just the boundary we hope to control. OK, so I've got, I think, about 10 minutes, maybe even a little more, in which to try to sketch some of the ideas of how you actually do this. I will start with Avila and Viana: I really want to try to describe how they prove the approximation theorem. And then I want to end with a brief sketch of how we think we can go about building the closed loop that we want. OK, so the first step in the approximation business is to parametrize us-paths in a convenient way. Sorry about that. Right, so we want to parametrize US_delta(x) for some x — I want to have fairly short legs. So I start off at x, which maybe is going to be x0. But actually, before I do that, I want to think about three exponential maps, together with three types of geodesic. The exponential map from a point x maps the tangent space to M, and you get geodesics for our Riemannian metric g that start at x.
Then there is a stable exponential map. This maps the stable bundle to the stable leaf, and it is the exponential map for the Riemannian metric that's induced on the leaf by the ambient Riemannian metric; so inside W^s you have geodesics of the leaf. And then there's a corresponding unstable exponential map from the unstable bundle to the unstable leaf. Right, so if I have my us-path, let's actually go to x_{1/2} first, and then we'll go to x1 — two steps, one stable, one unstable, and then we've gotten to x1. For the stable leg, we pick some stable vector — this should be, I think, v^s_0 — and we go along the stable geodesic. We get to x_{1/2}, and this leg has length less than delta, because of my subscript there. Then I pick v^u_0; it has an unstable geodesic, which takes us to x1. Then it's time to pick v^s_1, go along a stable geodesic to x_{3/2}, where we will pick v^u_1; this one's unstable, and we'll get to x2. And you keep going like that. So you can parametrize your us-path with a sequence of vectors. You have to know where you begin, and then you have v^s_0, v^u_0, v^s_1, v^u_1, and so on — a sequence of vectors, and they belong to the vector spaces they're supposed to belong to, which is OK, but I don't really like it. This guy v^s_0 is in E^s at x0; I like that. This guy v^u_0, though, is in E^u at x_{1/2}, and v^s_1 is in E^s at x1, and so on. I would like v^u_0 to be in E^u at x0 — I want everything to be in the spaces at the base point. So what I can do is pick, at x0, an orthonormal basis for E^s and an orthonormal basis for E^u, and then transport those orthonormal bases with me as I go along, and express the vectors in terms of them. So essentially, I want to parallel translate this orthonormal basis as I go along.
Or maybe what I should think of is taking the vectors along the path and parallel translating them back to the beginning. I think I prefer to move the basis forward — like one of those huge machines that lays railroad track: it puts down track, moves along on the track it just laid, and puts down more track. So I take the us-path, think of it just as a curve in the manifold, and parallel translate using the parallel translation of the Riemannian metric. That moves the two orthonormal bases to orthonormal bases at the next corner point. Unfortunately, they're no longer bases for E^s and E^u there. But you can project onto the spaces you do want them to be bases for. And after you've projected, they're probably not orthonormal bases anymore, but you can do Gram–Schmidt. So you parallel translate, project, and do Gram–Schmidt, and you've got your bases at the new point. Then you do the whole business again — parallel translate, project, do Gram–Schmidt — and you can just keep moving forward. So you can think of this sequence of vectors as all being in the stable and unstable bundles back at the beginning of the path. If we have such a sequence, I'll just call it v̄ — v with a bar on top. So you can parametrize paths that way. And now, given a fixed v̄, if I start off from some x, I get some us-path from x to a point y, and I can then think of a centered disk at x and a centered disk at y. Now suppose I start — maybe that was x0 — and I move to a nearby point x. Well, I can take my data, my orthonormal bases at x0, and parallel translate them over to x. Then I can use the same data, the same v̄, to define a us-path starting at the nearby point. I probably won't end up in the centered disk at y, but I can do two more short legs, stable and unstable, uniquely determined, that will get me into this disk. So maybe this started at x′, and we ended up at y′.
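The "project, then Gram–Schmidt" step above is concrete enough to sketch in code. This is a minimal sketch under assumptions of my own: parallel translation depends on the metric, so the sketch starts *after* translation, with a frame that is orthonormal in R³ but has tilted out of the subspace we care about; the vectors in `frame` and `target` are hypothetical numbers chosen only for illustration.

```python
# Sketch of one "project, then Gram-Schmidt" step, assuming the parallel
# translation has already been done.  `frame` is the translated orthonormal
# frame; `target` spans a hypothetical stable subspace E^s at the new point.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors."""
    basis = []
    for v in vectors:
        w = list(v)
        for b in basis:  # subtract the components along earlier basis vectors
            c = dot(w, b)
            w = [wi - c * bi for wi, bi in zip(w, b)]
        n = dot(w, w) ** 0.5
        basis.append([wi / n for wi in w])
    return basis

def transport_frame(frame, target):
    """Project `frame` orthogonally onto span(target), then re-orthonormalize."""
    onb = gram_schmidt(target)  # orthonormal basis of the target subspace
    projected = []
    for v in frame:
        coeffs = [dot(v, b) for b in onb]
        projected.append([sum(c * b[i] for c, b in zip(coeffs, onb))
                          for i in range(len(v))])
    return gram_schmidt(projected)

# Translated frame: still orthonormal, but no longer inside the target plane.
frame = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
# Hypothetical E^s at the new point, spanned by these two vectors:
target = [[1.0, 0.0, 0.2], [0.0, 1.0, -0.1]]
new_frame = transport_frame(frame, target)
```

The output is again an orthonormal basis, now lying in the target subspace, ready for the next "translate, project, Gram–Schmidt" step along the path.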
And this thing defines a map from some neighbourhood of x to some neighbourhood of y in the centered disk, and this map is a homeomorphism. So that's a fundamental step in their proof. OK, the next big idea is that you want to use Baire category — this is all still Avila and Viana. So, for some k, you take E^s_δ(x0) — the stable vectors of length at most delta — cross E^u_δ(x0), and you raise this to the power k; so you do 2k legs altogether in your us-path. And if you look at the image of all of this stuff under the endpoint map, you get the entire manifold. So I want to draw a symbolic picture: there's E^s_δ, there's E^u_δ, and then I have a whole lot more of them; altogether I have 2k of these boxes. Now I want to chop each box up, maybe L by L, so I get a lot of little cubes, and then I pick one cube here, one cube there, one cube there, and so on. I think I get — I hope I wrote it down correctly — (L²)^{2k} little product boxes. These little boxes are all closed sets, and when I take the union of their images under E, I get everything that I had before, so I get the whole of M. So I have a finite union of closed sets, and that union has interior, because it's the whole of the manifold. Baire tells you that at least one of those sets has interior. So I can find a little box — a very small set of us-paths — such that the image of that box under the endpoint map has interior. So you can make a picture; I will just draw something a little symbolic. I have a small subset of (E^s_δ(x0) × E^u_δ(x0))^k, I have the endpoint map, and I have some small neighbourhood in M, and I know that the endpoint map is a surjection onto it. So if I have two points in that neighbourhood, they have pullbacks in the little box.
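The counting step above can be written out in one display (a reconstruction of the Baire-category argument as described):

```latex
% The little product boxes
B_1, \dots, B_N \subset \bigl(E^s_\delta(x_0) \times E^u_\delta(x_0)\bigr)^k,
\qquad N = (L^2)^{2k},
% are compact and E is continuous, so each image E(B_i) is compact, hence
% closed, and together they cover the manifold:
M = \bigcup_{i=1}^{N} E(B_i).
% A compact manifold is a Baire space, so a finite union of closed sets with
% empty interior cannot be all of M; hence some E(B_i) has nonempty interior.
```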
Now, there may be multiple choices for these pullbacks, but take two of them. Then I can draw the segment between those pullbacks over here in parameter space, and look at its image under E, which might look rather disgusting. But it begins where it has to begin and ends where it has to end, and this was a really small piece of the parameter space, so this curve doesn't go too far away. So that allows you to build things: if you have a path, you can pick two nearby points on it, and this kind of scheme will let you build something that gets from the first point to the next point without going too far away from either of them. And then you iteratively do this kind of thing, and that's how they go about building the paths in the approximation theorem. So that is a somewhat minimalistic sketch of what Avila and Viana do. And now I want to give an even more minimalist sketch of how we want to extend what they do. Let me see what I can draw and try to explain it, and then it will definitely be time to finish. So I want to start here — we could think of this as Chicago. Then — this is an Avila–Viana approximation of something — we go down here, and then I want to go up here a bit; you could think of this as Route 66, and I actually have to take a slight detour north. And I want to arrive at Las Vegas. Great. And there's some huge string of parameters — the base point for everything is there — some massive string of parameters that describes the final path of all this. And you have some curve: you start off with parameter 0, you have V of t, say, and V of 1 is the vector V which parameterizes the us-path that gets you down to here. Then what you want to do is choose some path, some loop here. Oh, sorry — before I do that, I need to mention something first.
This is actually all a picture in — I don't really need that disk — this is all a picture in the center disk at x0. What's really going on in the background is curves in parameter space, and what I'm drawing are the images of those curves under the endpoint map. But now we have this homeomorphism, h-sub-V. It takes some small neighborhood of x0 to something over there, and if this neighborhood is small enough, its image is safely over there. I then build some small loop in here — this is going to be the path that really does the bounding of our disk for us. And then I want to do an Avila–Viana approximation of this red loop with extremely small epsilon. So you go around like that: you start off from x0, you build up this path, then you extend your parameter path, and you get over to here. So let me draw some kind of symbolic picture of what we're doing — where's my symbolic picture, so I can copy it. I start off with parameter 0, and then I grow some stuff; this first part of the picture corresponds to going around the green loop. So I have some path in parameter space that begins at parameter 0 and grows eventually to the parameter that corresponds to a us-path that starts at x0, goes somewhere, and comes back to x0. Then I keep this initial parameter stuff the same and add on parameters which move the endpoint down to over here. Then what I want to do is retract what we've done here while keeping this part of the parameterization the same — you'll get something like that. And what happens with the endpoint map while I do that is that it stays over in this neighborhood; it's not going to come back and interfere here. And then finally, at the end, I copy this triangle in reverse and pull that map back. So the idea is that eventually you get a loop in parameter space.
And the endpoint curve is supposed to start here, go around the green curve, track down here, do something, and then come back again and close up. That's the scheme. So the idea is that we have this over here, and it does not interfere with that — what happens in Vegas stays in Vegas. So you end up with some very small piece of reasonable boundary, and then other stuff, and you get this thing bounding a disk. But you can then put another thing in that goes through that hole, and that's what we need to do. So let me finish here. Thank you. [Question from the audience.] Oh, well — let me draw something a little less realistic. The idea would be that I want to build two disks whose boundaries are curves that are linked. Maybe these blue ones are where I'd like them, and then I approximate those; but if I approximate them closely enough, they're still linked. And you have to do this approximation carefully: it's not enough for just the endpoint curve to have a loop — you have to have an actual loop in parameter space. Oh, and in parameter space I make the disk by just shrinking everything down — I guess I forgot to say that. So you suck all of this down to zero, and you get a disk. And all I know about the disk is roughly where its boundary is: I know there's a piece of the boundary of the disk that looks more or less like the boundary of a disk, and then there's other stuff, which is out of the way. So then I can build a second such picture and force the two to intersect. That is our idea. There are details that are yet to be written; the devil may be in the details.