And so the title of the talk is the Milnor-Thurston determinant and the Ruelle transfer operator. And in some sense it is a kind of closing parenthesis on a long line of historical work, starting with Milnor and Thurston, and passing through a number of intermediate people, in particular Ruelle, Baladi, and others. And the setup is, in some sense, one of the most elementary dynamical systems you can imagine. So I'll put some small symbols like this saying that this is the first part, and then there will be a second; I'll come back a little bit to some of this. So the setup is what you call a 1D piecewise monotone map: a piecewise continuous, strictly monotone map. And when you have such a map, what can you then do about it? Well, you can study how the dynamics acts upon your space. And let me give you just a drawing of an example. So you have an interval [a, b]. And then you have some cutting points, or critical points. You have a dynamics, which is a map defined only partially, because it is not defined at the endpoints, and in particular the map might not be continuous, like this. So you have some critical points inside, and you will also want some notation for them. Here we have two internal critical points, and out of this we get three intervals; in general, with d critical points, d + 1 intervals. On each of these intervals, let me call them I_0, I_1, I_2, you have a map f restricted to I_k: each I_k is an open interval, and the restriction of f to I_k is strictly monotone and continuous. So the map is partially defined, because it is not defined at these critical points, and I don't define it at a and b either. But apart from that, you can start looking at what the dynamics is going to do to the interval. So we have a first partition Z_1.
We have just the three intervals, the collection of three open intervals, which form a partition of the domain of the map, say the initial partition. And then, if you iterate the map, you have to refine your partition in order for the iterates to be defined. So you look at the refined partitions: Z_n, say, which you obtain by intersecting Z_1 with all its pre-images up to f^{-(n-1)} Z_1. This is the refined partition. And one of the, say, topological quantities you can associate with this dynamics is what happens to the number of partition elements as n goes to infinity. And it is very easy to see that #Z_{m+n} is not bigger than #Z_m times #Z_n. So if you take logs, the sequence is subadditive, and hence the topological entropy, which we may define as h_top = lim (1/n) log #Z_n, exists. This is one of the first things a dynamicist would consider studying in this context. Now, the computing effort in calculating #Z_n is quite heavy. If you want to calculate #Z_n, what you do is keep track of all backward iterates of the critical points: when you look at f^2, for instance, where it is defined, you need to know when you land on a critical point. So this needs all the pre-images of the critical points, and they tend to grow in number like #Z_n. So the computing effort is of the order of exp(n h_top). This is just the naive picture of what you would do if you wanted to calculate it. And it works fine in some cases, and in other cases a little bit less well.
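The backward-iterate bookkeeping just described can be sketched in a few lines. This is my own illustration, not from the talk: I take the full tent map as the example, with its single critical point c = 1/2, and count partition elements by pulling back the critical point.

```python
import math

# Full tent map f(x) = 2x on (0, 1/2), f(x) = 2 - 2x on (1/2, 1).
# Its single interior critical point is c = 1/2, so Z_1 has 2 elements.
CRITICAL = [0.5]

def preimages(y):
    """All pre-images of y under the tent map, via the two branch inverses."""
    pts = []
    if 0 < y < 1:
        pts.append(y / 2)        # inverse of the increasing branch
        pts.append((2 - y) / 2)  # inverse of the decreasing branch
    return pts

def partition_count(n):
    """#Z_n: the cut points of Z_n are the backward iterates f^{-k}(c), k < n."""
    cuts = set(CRITICAL)
    layer = set(CRITICAL)
    for _ in range(n - 1):
        layer = {x for y in layer for x in preimages(y)}
        cuts |= layer
    return len(cuts) + 1  # k interior cut points give k + 1 intervals

counts = [partition_count(n) for n in range(1, 6)]
print(counts)                     # doubles each step
h_est = math.log(counts[-1]) / 5  # estimate of h_top = log 2
print(h_est)
```

The set of cut points doubles at each pullback, so the work grows like #Z_n, i.e. like e^{n h_top}, exactly the exponential effort the talk complains about.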
When you have a so-called finite Markov partition, then it's much easier in some sense, but I'll not talk about this. In the general case, and that is sort of part two, Milnor and Thurston came up with a method for computing this topological entropy. And what they did was to construct a determinant. I'll talk a little bit more about how you actually construct it later on, but for the moment let me just say it is a formal power series in some variable called t, say. And the formal power series D_MT(t), MT for Milnor-Thurston, is expressed in powers of t, of course, but with coefficients built from forward orbits of critical points. So this uses forward orbits of critical points. Now you might say to me that the forward orbit of a critical point is not defined, because f was not defined at c_1. But you have limits from the left and from the right, and what they do is look at what the left limit point and the right limit point are doing under the dynamics. And in particular, you only need the orbit up to order n: if you want to get to order t^n in this determinant, the computing effort is of the order of simply n, because it uses just the forward orbits. So instead of a computing effort growing exponentially, as before, you just have linear growth, and you get the same precision in the determination of the topological entropy. So now they have this determinant, but what can you say about it? Their theorem, due to Milnor and Thurston: the first part of it was actually stated already in '77, but the proofs were published much later, only around '88. One of the reasons was that they had difficulties with a certain part of it, and perhaps they were also a little bit slow in publishing. But there are two parts to their theorem.
So if I take t* = exp(-h_top): I didn't state this, but obviously h_top is never negative, though it could be 0, so t* is at most 1. And the first observation about D_MT is that it is not just a formal power series, but actually defines an analytic function in the unit disk. So in the interior it is an analytic function, and its zeros, in particular, will be isolated points. And the first part of their theorem is that if t* is smaller than 1, then t* is actually a zero of their determinant. The second part is that for any |t| smaller than t*, the determinant is not zero. Now, the first part is proved by relating forward and backward orbits in a very ingenious way. I should say in a very clever way, because it's non-trivial; it would take something like half an hour to sketch it, so I won't do that. And I believe it was actually mostly Thurston who came up with this in the beginning. But he didn't manage to see at first why it was the smallest zero of the determinant, and that, I think, was where Milnor came in. They then added a proof using what is called a zeta function, which essentially keeps track of periodic orbits. Now, you need to worry a little bit when you want to keep track of periodic orbits, because if the map is not expanding, then you might have a lot of them, more than you want, so you have to handle this in a slightly different way. But then they derive a magic identity, which is that if you take the Artin-Mazur zeta function and multiply it by their determinant, then it turns out that the product has no zeros and no poles in the unit disk.
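In formulas, the identity just described reads as follows; the precise counting of fixed points needs care when the map is not expanding, as noted in the talk.

```latex
% Artin-Mazur zeta function, counting (suitably weighted) periodic points:
\zeta(t) \;=\; \exp\!\Big(\sum_{n\ge 1} \frac{t^n}{n}\,\#\mathrm{Fix}(f^n)\Big).
% Milnor-Thurston's identity: the product
\zeta(t)\, D_{\mathrm{MT}}(t)
% has neither zeros nor poles in the unit disk |t| < 1, so the smallest
% zero t^* = e^{-h_{\mathrm{top}}} of D_{\mathrm{MT}} coincides with the
% smallest pole of \zeta.
```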
So the structure of the zeros of the determinant: we have the unit disk, and then we have t* somewhere here, which is the leading, that is, the smallest zero. And in the disk of that radius you have no other zeros. But you might have some zeros outside, which are actually what we would eventually call resonances. And on the boundary of that circle the function need not be defined; ah, sorry, there might be others, there could actually be other zeros on the circle of radius t*, yes, but none inside. And this t* might actually even be a zero of order bigger than one. So it's not like in Viviane's talk, where the eigenvalues were simple or something like that. But the fact that you have no other zeros inside means that, numerically, if you compute it, then you're sure you've got the right value. Now, I would call this a very indirect proof, and I think Milnor and Thurston would agree: a very indirect proof. In particular, the proof uses the fact that for one particular map you can verify the statement is true, and then you make continuous deformations of the map and verify that it remains true along the deformation. So they obtain a little bit more than what I was saying here, with a very indirect proof. So you might say, that's it, we have a nice theorem. But there's still something nagging here, which is: how come you get around this backward/forward orbit business, and what is the relationship between the two? And then there's actually a third thing, which is that you can describe the topological entropy in this setup by a Ruelle transfer operator. There are much more general transfer operators than this, but in this particular context it would simply be (L phi)(y) = sum of phi(x) over all x which are pre-images of y. You should act on some function space in order to start talking about this.
So in the beginning, let's act on, say, B(a,b), which is simply the bounded functions with the uniform norm. And when you do this, you realize that L^n 1, if you apply L^n to the constant function 1 and look at the value at y, simply counts the number of n-th pre-images of y. Someone objects that there seem to be points where things are not defined, which you have to exclude. Yes, I should say x in the domain of f^n such that f^n(x) = y; I don't write it, but here x is in the domain of f, because, correctly speaking, the map is not defined everywhere. And do I take functions in the genuine sense, or up to measure zero? No, these are really just bounded functions; there is no measure theory here. So L^n 1 counts the number of n-th pre-images of y, and we are looking again at backward images. And it's not so surprising to see that the spectral radius of this operator, acting on bounded functions, grows in the same way as #Z_n: it will simply be e^{h_top}. Now, the spectrum on bounded functions is otherwise completely without interest, because the spectral radius e^{h_top} sits on the boundary, but the whole disk of that radius actually belongs to the essential spectrum, so you won't really see anything else. So much for the spectrum of this operator acting on bounded functions with the uniform norm. If you stay with the bounded functions, you don't see much, so what you tend to do is restrict to a smaller subspace, and then some structure, some spectral properties, emerge. And in particular, what one tends to do in this context is to look at functions of bounded variation. So we restrict. Restricting. Someone remarks: in the trivial case of the identity map, for example, nothing happens; it is the identity operator, and the topological entropy is 0.
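The counting property of L can be sketched concretely. Again this is my own illustration with the full tent map, not the talk's notation: the operator sums a function over the two branch inverses, and applying it to the constant function 1 counts pre-images.

```python
def transfer(phi):
    """(L phi)(y) = sum of phi(x) over pre-images x of y, full tent map."""
    def L_phi(y):
        # the two branch inverses of f(x) = 2x and f(x) = 2 - 2x on (0, 1)
        return phi(y / 2) + phi((2 - y) / 2)
    return L_phi

one = lambda x: 1.0
L1 = transfer(one)       # (L 1)(y): number of 1st pre-images
L2_1 = transfer(L1)      # (L^2 1)(y): number of 2nd pre-images
print(L1(0.3), L2_1(0.3))
```

Since (L^n 1)(y) = 2^n for every y, the spectral radius on bounded functions is 2 = e^{h_top}, in line with the claim above.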
Yeah, but the identity operator has spectrum which is just {1}. You're right, you're right, thank you. So in general, actually, yes, it's true, the spectrum is contained in the disk of radius e^{h_top}, and in many cases you will get the full disk. Would you need some mixing or something for the full disk? Yeah, probably, yeah. At least in this space it is very difficult to get any interesting spectral structure out of it. So let me say what happens when you restrict to the space of bounded variation functions. Now, in that case, it turns out that the essential spectral radius will shrink, at least if the topological entropy is positive. So let me state it: the essential spectral radius of L, acting now on bounded variation functions, in my context will just be 1, in any case. And if the topological entropy is strictly positive, then you have the following picture: a disk of radius 1, which in general will be the essential spectrum of the operator; then e^{h_top}, which will be the largest eigenvalue. As in the other case, there might actually be other eigenvalues on the same circle, and you might also have some eigenvalues sitting inside. And when you look at this picture, you see that there is actually quite a big resemblance with the zero structure of the determinant, except that t and the eigenvalue have been inverted. So in this context, Baladi and Keller, in a 1990 paper, constructed, for this space of bounded variation functions, a zeta function; I'll call it a Ruelle zeta function, for an operator L of this type. Actually they do it in a more general context, with weights and so on, but it also works in this case. When the map is expanding, for expanding maps, the Ruelle zeta function and the Artin-Mazur zeta function will in fact be pretty close to being the same.
So usually you can choose them to be the same, but otherwise there might be some correction factor. But this zeta function, again, has no poles and no zeros in the unit disk. And furthermore, for the Ruelle operator and for this zeta function, there is an intimate relationship between the zeros of the zeta function, or the poles of its inverse, and the eigenvalues of L. Namely, for lambda with |lambda| bigger than 1, so again outside the essential spectral radius: lambda is an eigenvalue of our operator acting on BV if and only if 1/lambda is a zero of this zeta function, which agrees with the Artin-Mazur zeta function, and is therefore a zero of the Milnor-Thurston determinant. And moreover the algebraic multiplicities coincide with the orders of the zeros. So that's really a perfect relationship between the zeros of the Milnor-Thurston determinant and the spectrum of this operator. Now, this obviously indicates that there is some relationship hidden here between what this operator is doing and what the Milnor-Thurston determinant is doing. The operator, I recall, is using, in principle, backward iterates of the map, and Milnor-Thurston are using forward iterates of the map. So it's a bit strange, actually; one starts looking for dualities. So I think Ruelle, a couple of years after the first appearance of these zeta functions, started thinking about this: the operator L, which is using backward iterates, should somehow be looked at through a dual operator, some dual operator of this operator, which should correspond to looking at forward images. He did this around '93, I think, and then published a little bit later, in '96. And, at least in my interpretation, which might not be completely up to standard, he looks at certain operators, even integral operators, which act on BV functions, and then there is a kind of dual operator.
The dual operator, let me write L tilde, in fact takes the bounded functions with the uniform norm into themselves. Now, this space is unfortunately not quite the Banach dual of BV. So a priori, even though this is a kind of dual operator, you can't really say anything; you have to be very careful about asserting any identity between the spectral values of the two. But Ruelle does something very ingenious, which is to introduce a trace, what he calls the sharp trace, written like this. With it you can actually show that the eigenvalues of this operator and of that operator which lie outside a certain essential spectral radius coincide: say, the peripheral eigenvalues of L and L tilde coincide. Again, this is a somewhat indirect approach, but still you can actually obtain some information: if you know something about the essential spectral radius for one of them, you can deduce something about the other without redoing the calculations. So then Baladi and Ruelle, I think that was probably '95 and published '96 as well (the years might be a little bit off), attacked the Milnor-Thurston determinant directly, but with weights, and showed that the zeta function of Ruelle and, say, a Milnor-Thurston-like determinant had the same peripheral structure, meaning in particular that they had the same zeros. So again, there are a few caveats here: it went through the sharp trace, so they used that construction, and they also needed some assumption on the weights, that they should be continuous. That could probably be improved, and it was indeed improved by Sébastien Gouëzel in 2001, who showed that the weights only need to be continuous at periodic points. So that's sort of a first step, but still the proof is somewhat indirect, and it did not at first apply directly to the Milnor-Thurston determinant.
So then, what I started doing: two years ago I was supposed to give some lectures about the Milnor-Thurston determinant, so I was studying it quite hard. And then, together with my wife, Tan Lei, I produced a paper on this, sort of the standard approach, you could say. But it was still nagging me for a long time whether there was not a way to see directly that the Milnor-Thurston determinant and the Ruelle operator have a direct relationship. And what I finally found, last year, was that yes, indeed, there is such a relationship. So there is a subspace, call it X_0, of bounded variation functions, which is invariant under the operator L, and which has a dual, this time a genuine Banach dual, X_0', which contains the earlier dual but is somewhat bigger. In particular, it actually allows a representation in terms of really nice little-l-infinity functions. So, allowing for representation: I'll call the space B_0, which corresponds to l^infinity functions, but on a slightly different space; I'll return to this in a moment. Are the functions here genuine functions, or up to measure zero? No measure theory; that is also why I write small l^infinity: it is on an uncountable space. What happens then is that the spectrum of L acting on X_0 is the same as the spectrum of the dual operator acting on X_0', and because of this isomorphism, which I'll talk a little about, that is the same as the spectrum of the induced operator on B_0. Now the point is that this last operator has a very explicit representation: there is a calculation which shows that L, the last one here, may be written as S - PS, where S and P are bounded linear operators acting on B_0, S has norm one, and P has finite rank, the rank being related to the dynamical system: it is d + 1.
So why does this help you? Someone asks: if I remember right, this is for the unweighted transfer operator? Yeah, this is related to the fact that the weights are equal to one. And could the rank be smaller? Well, it could be smaller; ah, excuse me, yes, you're right. It works for constant weights, at any rate. So what we are aiming for is to describe not the essential spectrum, which we can't really do anything about, but what is going on outside it. And then, if you take some value lambda with |lambda| bigger than one, you notice the following. Because of this construction, with the isomorphism between the dual and the isomorphism with the representation, invertibility of lambda - L is really the same as invertibility of lambda - L-hat. So what does that look like? It is lambda - S + PS, and when is it invertible? Well, lambda is not an eigenvalue of S, because |lambda| is bigger than one and S has spectral radius one. So I can factor it out and write (1 + PS(lambda - S)^{-1})(lambda - S). The first factor should then be invertible, and since lambda - S is invertible, I can remove it. So invertibility is the same as invertibility of 1 + PS(lambda - S)^{-1}, but P has finite rank. So here we have a genuine finite determinant, of a finite rank perturbation of the identity. And 1 plus this finite rank operator is invertible if and only if the determinant, on the image of P, of 1 + PS(lambda - S)^{-1} is different from zero. So if we call this D(lambda), then, as I said, it has finite rank, so this really involves just computing a (d+1) by (d+1) matrix. And how does it look? It is the determinant, on the image of P, of 1 + P times the sum of S^k lambda^{-k}. So if I have the explicit representation, I can compute this.
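Written out, the resolvent manipulation just described is:

```latex
\lambda - \hat L \;=\; \lambda - S + PS
  \;=\; \bigl(1 + PS(\lambda - S)^{-1}\bigr)(\lambda - S),
  \qquad |\lambda| > 1.
% \lambda - S is invertible since S has spectral radius 1, so
% \lambda - \hat L is invertible iff 1 + PS(\lambda - S)^{-1} is, iff
D(\lambda) \;:=\; \det\nolimits_{\operatorname{im} P}
  \Bigl(1 + P \sum_{k \ge 1} S^{k}\lambda^{-k}\Bigr) \;\neq\; 0,
% using (\lambda - S)^{-1} = \sum_{k \ge 0} S^{k}\lambda^{-k-1};
% since P has rank d+1, this is a (d+1) x (d+1) determinant.
```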
And then the revelation: it took me like three days of computation, where I kept making some sign error or other, but finally it turns out that D(lambda) here is in fact the Milnor-Thurston determinant of 1/lambda. So this gives precisely the information I was looking for, namely that zeros of the Milnor-Thurston determinant are the same as the zeros of this finite determinant, which are in turn the same as lambda being an eigenvalue. And then there's a little more work in showing that the order of a zero equals the multiplicity of the eigenvalue. So we get a perfect correspondence between, say, the spectrum of the Ruelle operator and the zeros of the Milnor-Thurston determinant. Of course, this doesn't tell you much about how I did the computation, or how I get this space B_0 out of my setting. And the idea is really to choose the right subspace. There are two things which help in this respect. First of all, I was saying that the map is not defined at the critical points, but it is defined when you take limits from the right-hand or left-hand side. So the idea is to make systematic use of right and left limits. Systematic use of, well, point germs is one word for them; you may also call them directed points; in geometry, you might say that we are looking at the unit tangent bundle. These are different ways of looking at the same thing. So the idea is that to a point on the real line you associate two directions, x+ and x-. So x-hat, a directed point, is a point x together with a sign epsilon, written (x, epsilon) or with the two put together, and epsilon indicates whether you are coming from the right or from the left. You also naturally introduce an order, which interleaves the two: x- comes before x+, and if x is smaller than y, then x+ comes before y-. So I intertwine the two, defining an order on directed points.
If you have a map defined partially, say just one branch of the picture up there, on an interval from u to v, then in this example the right endpoint germ v- will actually be mapped to the germ of the one-sided limit f(v-), and the left endpoint germ u+ to the germ of f(u+), with the direction twisted by the sign of monotonicity of the branch. And we call this the extended map: if I call the endpoint germ u-hat, then I write its image as f-hat(u-hat). So you see that every time you have a branch like this, you can actually extend it to a map of point germs, and then you get a nice map defined on all the point germs in between. In particular, if I look at my original map, from a to b you have three intervals, which now become point germ intervals: I_0-hat, I_1-hat, I_2-hat. The point germs from a+ to b- are simply partitioned into I_0-hat, ..., I_d-hat. Without exceptions? Exactly, exactly. So now this is actually becoming a perfect partition, and the reason for doing this is precisely that the extended map is now defined everywhere. We also have signs: I didn't talk too much about this, but there is a sign map, assigning to each branch plus or minus one, the sign of monotonicity of that branch; here plus one, minus one, minus one. And then we take products of these signs along extended orbits. Now finally, and this is maybe where we get somewhere: the Milnor-Thurston determinant is actually called a kneading determinant. And the reason is that they look at the relative positions of the orbits of the critical points with respect to the critical points themselves, and there is something called kneading coordinates related to this. So take c somewhere among the critical points c_1 up to c_d; then you would like to say whether a point germ is to the right or to the left of the critical point.
And the way you write this is theta(x-hat | c) = +1/2 if x-hat is bigger than c, and -1/2 if x-hat is smaller than c. Now, one of the important things about this notation is that you never get zero. With t an auxiliary variable, you then construct, for any initial point germ, the kneading coordinate, which gives rise to the kneading invariant of Milnor-Thurston. It's not quite their formulation, but it is equivalent: you look at a generating function along the forward orbit, as I mentioned; you take the sign of monotonicity along the orbit and multiply by the relative position of the last point with respect to the critical point in question. So that is their construction. And then they write a matrix M_ij(t), which is the following: M_ij(t) = theta_{c_i}(c_j+)(t) - theta_{c_i}(c_j-)(t) for j = 1, ..., d, so for the internal critical points, and M_i0(t) = theta_{c_i}(a+)(t) + theta_{c_i}(b-)(t) for j = 0. These look like magical things popping out of nowhere, and at present that is how it is. There being d critical points, this is a (d+1) by (d+1) matrix, and it so happens that the Milnor-Thurston determinant D_MT(t) is the determinant of this matrix. Now, if you look at the theta's as formal power series, the coefficients always have absolute value one half, and so in the differences here the coefficients will all be plus one, minus one, or zero. So these entries define nice analytic functions in the unit disk, and the determinant becomes an analytic function in the unit disk. Now this is the Milnor-Thurston determinant, but in the version which I developed recently with Tan Lei. So then one could go on to prove why it actually works, but that's not what I want to do.
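One of these generating functions can be sketched in code; this is my own illustration for the tent-map example, and the sign and position conventions are one common normalization, which may differ from the talk's by harmless factors.

```python
# Directed points ("point germs") are pairs (x, eps), eps = +1 right, -1 left.
C = 0.5  # the single critical point of the full tent map

def fhat(germ):
    """Extended tent map on point germs; returns (image germ, branch sign)."""
    x, eps = germ
    # pick the branch by position, using eps to break ties at the cut point
    right_branch = x > C or (x == C and eps == +1)
    if right_branch:
        return (2 - 2 * x, -eps), -1   # decreasing branch, sign -1
    return (2 * x, eps), +1            # increasing branch, sign +1

def theta(germ, c):
    """Relative position: +1/2 if the germ lies right of c, else -1/2."""
    x, eps = germ
    return 0.5 if (x > c or (x == c and eps == +1)) else -0.5

def kneading_coeffs(germ, c, n):
    """First n coefficients of the generating function theta_c(germ)(t)."""
    coeffs, sign = [], 1
    for _ in range(n):
        coeffs.append(sign * theta(germ, c))
        germ, eps = fhat(germ)
        sign *= eps  # accumulate the monotonicity signs along the orbit
    return coeffs

print(kneading_coeffs((C, +1), C, 5))
```

For the germ c+ the series comes out as (1/2)(1 - t - t^2 - ...) = (1 - 2t) / (2(1 - t)), whose smallest zero is 1/2 = e^{-h_top}, consistent with part one of the theorem.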
I want to say: what is the relation with this weird subspace we are talking about? Someone asks: is it analytic here in the complex sense, or what kind of function is it? The coefficients are all real, so analytic in either sense; here it is just analytic in the variable t, for |t| smaller than one. I mean, these theta's all have coefficients of absolute value one half, so every coefficient in these power series stays bounded, and you get an analytic function in the unit disk. I should say, by the way, that one easily proves that the value at zero is one, so indeed the zeros will be isolated, and things like that. So what is the function space we are interested in here, and its dual? That's more interesting. So if phi is a function defined on the interval (a, b), recall that the variation of phi is the supremum of the sums of |phi(x_{i+1}) - phi(x_i)|, the supremum being taken over all possible finite choices of points between a and b. And you say that phi is of bounded variation if and only if this number is finite. In that case, in particular, limits from the left and from the right exist, and you may define the quantities phi(a+) and phi(b-). It then turns out that a suitable norm on the space of functions of bounded variation is simply the variation plus the absolute values of these boundary values, the right limit at the left endpoint and the left limit at the right endpoint: ||phi|| = var(phi) + |phi(a+)| + |phi(b-)|. And this is a norm on BV. What, then, is the small subspace of BV that I am looking for? Because BV is actually too big; it's too big to do the direct analysis, one reason being that its dual is too small. So the small subspace I'm looking for is the space S of functions which are locally constant except at a finite number of points. Let me just give one example.
One example would be a step function which is, say, 1 here, then becomes -1 at alpha, then equals 1 at the point beta, and then becomes 2 to the right of beta. That would be one example of a phi in my space. And why do I give you an example like this? Simply because, when you look at these kinds of functions, you realize that you may describe them in terms of their jumps. So you have a certain number of jumps here, and if you look at this function phi, you can describe it in terms of, say, elementary jump functions. For u-hat in [a+, b-], I define the jump function sigma_{u-hat}(x-hat) = +1/2 if x-hat is bigger than u-hat, and -1/2 if x-hat is smaller than u-hat. Now if you look at this function, you realize that it is actually very close to the theta I had somewhere over there, except that the roles of the base point and the directed point have been inverted. And the function phi that I have here, for instance, may be written as follows: from the given value you jump two down at alpha, so a term -2 sigma_{alpha-hat}; then you jump two up arriving at beta, so +2 sigma_{beta-}; you jump one more leaving beta, so +sigma_{beta+}; and finally you add a constant value of three halves, which I might write as 3 sigma_{a+}, because sigma_{a+} is actually just the constant function one half. Now any phi in my space may in fact be written as phi = omega_0 sigma_{a+} + sum of omega_i sigma_{u_i-hat} for some finite combination of these. And then you check that the norm of phi is simply the sum of the |omega_i|, i greater than or equal to zero. So this is actually a little l^1 space. And what we do is then simply to let X_0 be the closure of S in bounded variation. So the small subspace I was looking for, in order to do the trick above, is the closure of this type of functions in BV.
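The jump decomposition of the example can be sketched as follows. The concrete interval endpoints (a = 0, alpha = 0.3, beta = 0.7, b = 1) and the tie-breaking rule at the base germ are my own choices for the illustration.

```python
def sigma(u_hat):
    """Elementary jump function sigma_{u_hat} on point germs (x, eps)."""
    ux, ueps = u_hat
    def s(germ):
        x, eps = germ
        # germ order: (x, eps) >= (ux, ueps) iff x > ux, or x == ux and eps >= ueps
        bigger = x > ux or (x == ux and eps >= ueps)
        return 0.5 if bigger else -0.5
    return s

# phi = 1 on (a, alpha), -1 on (alpha, beta), 1 at beta, 2 on (beta, b)
terms = [(3, (0.0, +1)),    # 3 sigma_{a+}: the constant part, 3/2
         (-2, (0.3, +1)),   # jump of -2 at alpha
         (2, (0.7, -1)),    # jump of +2 arriving at beta
         (1, (0.7, +1))]    # jump of +1 leaving beta
phi = lambda germ: sum(w * sigma(u)(germ) for w, u in terms)

# sample one germ in each constancy region (and the left germ at beta)
values = [phi((0.1, +1)), phi((0.5, +1)), phi((0.7, -1)), phi((0.9, +1))]
norm = sum(abs(w) for w, _ in terms)  # the little-l^1 norm of phi
print(values, norm)
```

The recovered step values are 1, -1, 1, 2, and the l^1 norm 8 agrees with the BV norm: variation 2 + 2 + 1 = 5 plus boundary values |1| + |2| = 3.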
And why is that good? Well, it's because it has a nice dual. So X_0' is the Banach dual, and given any l in X_0', I define a function on the space of point germs by l-hat(u-hat) = l(sigma_{u-hat}): you take any sigma_{u-hat} of this type, you act with l on it, and you get a number. I don't have the time to prove this, but then there is a very simple proposition, which is that the map taking l in X_0' to l-hat in the space B_0 is an isometric isomorphism. There is one little thing, which is that at a+ and b- you have to get the same value with opposite sign: so B_0 is the bounded functions h on the germs of [a+, b-] such that h(a+) + h(b-) = 0, equipped with the uniform norm. Finally, I should actually have stopped here, but I have five minutes, so I'll just give the principal lines of what's going on. We were looking at the dynamical system where I_k goes from c_k-hat to c_{k+1}-hat, we had a map f_k which maps I_k into (a, b), and I have a sign of monotonicity. And then we defined a transfer operator. The transfer operator I defined in the beginning, the one given by Ruelle, can in this interval case be written in a slightly more complicated way, which is that I take phi composed with the inverse of the k-th branch, except that I have to be in the image of the k-th branch in order to do so, and I sum over k. And now you look at how this actually acts on certain functions. In particular, if I act on a function which is in my space S, it suffices to look at what happens when I act on a simple basis function sigma_{u-hat}. So let me give you a drawing: sigma_u looks like this; if I put, say, this point here, then this would actually be sigma_{u-}. And what is L_k doing to this?
Well, you simply look at the dynamics, and you see that the interval will be mapped to the interval from f̂_k(ĉ_k+) to f̂_k(ĉ_{k+1}−). Now the part which is outside gets chopped off, because it is not in the image, so there you get zero. Then the point û will be mapped somewhere here, and this gives rise to something like this, if the branch preserves orientation; here you get the value one half, and then you are mapped to this point here, and again you get chopped off outside. So what happens is that L_k acting on σ_û, after a long computation, comes out as follows: when û is inside the interval, you get σ at the image point, σ_{f̂_k(û)}, but you get two correction terms, because you have been chopped off twice: one term with a coefficient depending on û times σ_{f̂_k(ĉ_{k+1}−)}, and another correction term, coming from the other part, with a coefficient depending on û times σ_{f̂_k(ĉ_k+)}. This took me, as I said, about three days to get the signs right, but after I did that, I came to the, say, nice conclusion. If you look at this expression, the coefficients here are functions that depend on û (there is a variable y which I didn't write, but they do depend on û), while the functions σ_{f̂_k(ĉ_k+)} and σ_{f̂_k(ĉ_{k+1}−)} do not depend on û: they are just the same functions all the time. So these will give a finite-rank perturbation, and the first term is an iteration by the map f̂. So let me just define (S_k h)(û) = s_k h(f̂_k(û)), where s_k is the sign of monotonicity, and P_k, whose precise form I will omit, but which involves the correction terms there; this defines a rank-two operator. And then finally L̂_k, which describes how L_k acts on my dual space, is simply given by S_k − P_k S_k. Finally, if I sum over all the branches, I get an expression of the following form.
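The branch formula just described can be displayed as follows. This is my reconstruction from the spoken description: the coefficients α_k and β_k are placeholder names for the two chopping corrections, whose exact values and signs the speaker did not spell out.

```latex
% One branch acting on an elementary jump, for \hat u inside (\hat c_k+,\,\hat c_{k+1}-):
L_k\,\sigma_{\hat u}
  \;=\; s_k\,\sigma_{\hat f_k(\hat u)}
  \;+\; \alpha_k(\hat u)\,\sigma_{\hat f_k(\hat c_k+)}
  \;+\; \beta_k(\hat u)\,\sigma_{\hat f_k(\hat c_{k+1}-)}.
% The last two terms span a fixed two-dimensional space independent of \hat u,
% hence the rank-two perturbation P_k and the dual form \hat L_k = S_k - P_k S_k.
```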
So L̂ = S − PS, as I was indicating up there. The S operator is simply (Sh)(û) = ŝ(û) h(f̂(û)): a very simple form for the part which uses iteration of f̂. The P operator is a little bit more complicated: (Ph)(û) = σ_a(û) δ_a(h) plus a sum over j of σ_j(û) δ_{c_j}(h). And what are these δ_{c_j}(h)? That is h(ĉ_j+) − h(ĉ_j−), with one exception, δ_a(h), which is h(â+) + h(b̂−). So the upshot is that L̂ is built from one operator, S, which evidently has spectral radius one, because ŝ is just plus or minus one and you are working on L∞, so iterating doesn't matter (it has spectral radius one, actually norm equal to one), and an operator P which maps into a (d+1)-dimensional space of functions, so it has rank d+1. And then finally you do the computation, which I won't do (it has now disappeared from the board), and you arrive at the statement that if you calculate this determinant, you simply get the same as the Milnor-Thurston determinant defined before. So when you finally reach this point, what conclusions can you draw? Well, first of all, it is nice to see that the intuition you should have, that the dual Ruelle operator is really related to the Milnor-Thurston determinant, is correct. And now, if the topological entropy is positive, then just from the fact that the Ruelle operator is positive, it follows that exp(h_top) is an eigenvalue of the Ruelle operator, and it is the largest eigenvalue of the Ruelle operator, which means that you get part one and part two of the Milnor-Thurston theorem for free, without looking at any zeta functions. So the Milnor-Thurston theorem actually drops out as a simple corollary of the fact that this is a determinant of the dual Ruelle operator, which is positive.
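Because P has finite rank, the determinant in play is a regularized one: the contribution of S alone is dropped and only the finite-rank part matters. A minimal finite-dimensional sketch of the underlying algebraic identity (my illustration with arbitrary stand-in matrices, not the speaker's computation):

```python
# For L = S - P S with P of finite rank, one has the exact factorization
#   det(I - tL) = det(I + t P S (I - tS)^{-1}) * det(I - tS),
# since I - tL = (I + t P S (I - tS)^{-1}) (I - tS).
# In the talk's infinite-dimensional setting det(I - tS) is not defined,
# and only the finite-rank factor survives as the regularized determinant.
import numpy as np

rng = np.random.default_rng(0)
n, t = 5, 0.3
S = np.diag(rng.choice([-1.0, 1.0], size=n))                  # signs: norm one
P = np.outer(rng.standard_normal(n), rng.standard_normal(n))  # rank one
I = np.eye(n)

L = S - P @ S
lhs = np.linalg.det(I - t * L)
rhs = np.linalg.det(I + t * P @ S @ np.linalg.inv(I - t * S)) \
      * np.linalg.det(I - t * S)
assert abs(lhs - rhs) < 1e-8
```

Here S is a diagonal sign matrix, mimicking the sign-weighted composition operator of norm one on L∞; the rank-one P stands in for the rank-(d+1) operator of the talk.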
And finally, the fact that it is a regularized determinant of the dual Ruelle operator was just very nice to see when I came upon it. So thank you very much for your attention. Time for some questions. Viviane and I tried to make an extension of this, in the first instance allowing varying coefficients. One other thing would be to try to find other extensions: what one uses as a basis here is really the order structure of the reals. Is there some other structure that one might use instead? One thing that sometimes works is to use the conformal structure. In the complex plane, yeah. Yeah, I remember you also had a paper with Semmes, I think, you and Viviane, Stephen Semmes. No, we had a paper with Semmes and Kitaev and all these. But it didn't go very far, perhaps. But the fact that, I mean, in some sense the fact of working on the dual space: you also remember this paper of Levin and Sodin and, I forgot, yes, on the Ruelle determinant for hyperbolic Julia sets. What they did, for certain types of weights, is that instead of looking at the sum over preimages, they could look at images of the critical point and then make a determinant out of this. So somehow there is this duality between the operator and the dual operator, and the determinants you get from it are very intrinsic in some cases; sometimes you might find it and sometimes you may not. So really it's connected to David's question. I just want to point out that what you're doing here is really writing a regularized determinant. Sure. Because P is of finite rank. Yeah, so it's a regularized determinant of the dual Ruelle operator. Which is the kneading determinant. And actually you seem to want to avoid the zeta function. But if you are interested in the zeta function, you can also study the zeta function this way. You get it directly from this. In any dimension, and this is what you did with Tsujii, actually.
So you can write the transfer operator as a bounded operator plus an operator which has a power that is trace class, and then you can mimic this construction. But then you no longer have the kneading determinant. But you have a shortcut, just to point out that this computation can be used; it is used in other cases. Yeah, the next question may be along the same lines. Do you see, or do you not see, the periodic points appearing in the computation, or at least in the definition? So in this approach, no. But can you still get information on them? I would say the best information comes from Baladi-Keller, which is that if you work on the real space of functions of bounded variation, then they have this paper showing that you get a zeta function which comes essentially from periodic points, and that has the same eigenvalues as what is being shown here. And if you do that, you can also show that I work on this smaller space but I get the same eigenvalues. And the eigenfunctions that they would get in their space are in fact in my space. So there is consistency: I don't get more eigenvalues, and they don't get more eigenvalues than I would in this way. But the periodic points, I don't see them at all here.