Thank you very much. It's a great pleasure to be in Trieste again, and I'd like to thank the organizers for the opportunity to be here and for the invitation to speak. So I want to talk about something which is very easy: some properties related to a fairly straightforward measure on a very, very simple space. But in fact the measure is the so-called Kusuoka measure, and so the hardest thing about proving results is actually knowing what it is; I will give a definition in a few slides, but I want to kind of creep up on it. Just by way of audience participation: does anybody recognize the location of this picture? Excellent. It is of course in Trieste, near the canal, and the person with the worst pallor is me. The other person is James Joyce, or at least a statue of him; I assume he's probably buried somewhere.

Okay, so I want to talk about the Kusuoka measure, and in fact when I define it, I'll actually define it on a subshift of finite type, in fact a full shift on three symbols. But its origins actually come from fractals, so I want to creep up on it by talking a little about fractals, in particular the Sierpiński triangle, mainly because it has nice pictures, no other particular reason. And to define the measure, as I said, it's easier to code the Sierpiński triangle using sequences and then define the measure in that context.

So the game is this: I'm going to talk about this measure, whose origins are as a measure sitting on the Sierpiński triangle in the plane, but I'm going to view it as a measure sitting on a space of sequences, just infinite sequences of ones, twos and threes, kind of easy. The problem is that usually, if you look at measures on sequence spaces, you like them to have nice properties: you might want them to be Gibbs measures or equilibrium states or something like that, and it's usually easier to prove things in that particular context. But this measure, when I get around to
defining it, is not a Gibbs measure of that type. It's an invariant measure for the shift, but it's not a Gibbs measure as we know it, and so this leads to some complications if you want to prove stuff about it. But if you are more of an optimist, then you could say it's an interesting class of measures to which previous techniques don't apply, and so consequently it's interesting to study from that viewpoint. Also, if you prove stuff about this measure, it's supposed to have some applications to the original context where the Kusuoka measure was introduced, namely measures on Sierpiński triangles. The good news is that although this measure is not Gibbs, and in particular we can't apply the usual techniques, we can move stuff around, we can modify the ideas, and there are analogous things that we can prove. Indeed, we can establish the sort of things you'd expect to be true for a Gibbs measure in a hyperbolic system; we just can't follow the traditional pattern. We have to do something a bit different, but that's okay, I guess. So that's what I just said.
So here's some context, for no particular reason. In the first talk of this conference, Keith Burns was talking about measures for geodesic flows, and in his case the geodesic flow was not uniformly hyperbolic, but the measures were quite nice because they came from a particularly nice potential, a Hölder potential. In fact, this also appeared in the talk of Lima on Tuesday, where he was also talking about non-hyperbolic systems, flows in that case, with nice regular potentials giving you the measures. So I'm doing the opposite, in some sense: I'm going to look at a nice dynamical system, but the potential which defines the measure will not be so nice. I'll be looking at the simplest possible hyperbolic system, a subshift of finite type, in fact a full shift on three symbols, and for this very simple dynamical system, which is as hyperbolic as you could want, we'll be looking at a measure which is associated to a non-Hölder potential, and non-Hölder in a very bad way: it's not even continuous in any reasonable sense. Okay, so that's the game. Instead of looking at more general systems with regular potentials, we're going to look at very nice systems but with less familiar potentials, because that's the way it comes. And as I said, these measures originally appeared in the context of fractals, in particular things like the Sierpiński triangle, gaskets, things like that. So let me just whittle away some time by saying some words about that.
So the usual construction of the Sierpiński triangle: you take a triangle in the plane and simply remove the middle triangle, so that's the picture there; then you remove the middle triangle of each of the remaining pieces, and you keep on going. It's just a two-dimensional analogue of the usual middle-third Cantor set construction, except that in two dimensions you're taking out triangles rather than intervals. That's the construction, and at the end you get some fractal-looking picture, which looks like that. You can do stuff with it: compute its Hausdorff dimension, which is easy of course, and other things. So this is just the traditional construction.

Sierpiński is now mainly associated with the triangle, but in fact he was a very distinguished number theorist, which is sometimes now forgotten, particularly by people who do fractal things, and so here is a picture of the great man, which I just mention in passing.

But I won't be looking so much at the Sierpiński triangle itself, because to define measures on dynamical systems it's usually easier to look at sequence spaces. We can code the Sierpiński triangle by sequences of ones, twos and threes, and these just correspond to the three triangles that are left after we take out the middle one at each stage. So at the top we take the big triangle, and then there are three pieces here, and three pieces here, and that's what gives us the coding. So the usual coding is given in a certain way: I look at the space of all sequences of ones, twos and threes, and I can turn it into a metric space any way I feel like, but in particular I can define a metric which may look something like that. Basically, sequences are close if they agree for a long time in the first few digits, and this is a very simple metric to define on the space. And then we can associate to every infinite sequence a
point in the Sierpiński triangle, just by coding it, which basically means that you take an infinite sequence and write down an expansion like this, where you take the right vectors (these are my guesses at what the right vectors are). It's pretty much the same as for the middle-third Cantor set: there you just have two branches, so you take a triadic (base-three) expansion of points in the unit interval and skip anything that had a one in the expansion; you just take zeros and twos in that case. Here I'm expanding points in this way, so every infinite sequence corresponds to a point in the triangle, and most points in the triangle correspond to some sequence upstairs. There could be some lack of bijection, but it doesn't matter so much for what happens. And of course there's also some dynamics, which is more apparent, or more familiar, for the shift than it is for the Sierpiński triangle, and the dynamics is just the usual left shift map.
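The metric and the coding just described can be sketched in a few lines. The base 2 in the metric and the vertex coordinates below are illustrative assumptions (the talk only mentions "the right vectors", which the speaker himself says are guesses):

```python
# Sketch of the metric on sequences of 1s, 2s, 3s: two sequences are close if
# they agree on a long initial block.  Here d(x, y) = 2^(-n), with n the first
# index of disagreement (the base 2 is one common convention, an assumption).

def first_disagreement(x, y):
    """Index of the first coordinate where x and y differ."""
    for n, (a, b) in enumerate(zip(x, y)):
        if a != b:
            return n
    return len(x)

def d(x, y):
    """Metric on the sequence space: 0 if equal, else 2^(-first disagreement)."""
    if x == y:
        return 0.0
    return 2.0 ** (-first_disagreement(x, y))

# Sketch of the coding map pi into the plane.  Assuming the three contractions
# are f_i(z) = (z + v_i) / 2 toward the triangle's vertices v_1, v_2, v_3
# (illustrative coordinates below), the point coded by (x_0, x_1, ...) is
# pi(x) = sum_k v_{x_k} / 2^(k+1).

V = {1: (0.0, 0.0), 2: (1.0, 0.0), 3: (0.5, 3 ** 0.5 / 2)}  # assumed vertices

def pi(x):
    """Map a (long finite prefix of a) sequence to a point in the plane."""
    px = py = 0.0
    for k, sym in enumerate(x):
        vx, vy = V[sym]
        px += vx / 2 ** (k + 1)
        py += vy / 2 ** (k + 1)
    return px, py
```

For instance, the constant sequence (1, 1, 1, ...) codes the fixed point of the first contraction, which is the vertex v_1 itself.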
You just take an infinite sequence, indexed here from zero to infinity, throw away the first term and shift everything to the left; it's the usual shift map. If you were to write it down in terms of the coding of the Sierpiński triangle, it would just mean that you blow up each of the smaller triangles by a factor of two until they overlap. But basically we're just interested in this familiar object, the space of sequences with the shift map, very easy to deal with.

Okay, so the Kusuoka measure is an example of an invariant measure on the space of sequences for the shift map, and usually the easiest way to define a measure on a space of sequences is to define the measure of each cylinder set, a particular kind of subset of the space. The space has a nice topology coming from the metric; you look at the Borel sigma-algebra, and within it you have these open sets which are simply the cylinder sets, indexed by a bunch of numbers. The way they are defined: a cylinder is all the sequences which take these prescribed values in the first n places. So it's the standard definition: the cylinder consists of the sequences which look like x0 up to x(n-1) in the first n places, and after that you can do what you like; that's the cylinder set, and it's a clopen set. These sets are enough to generate everything (they form a sub-basis of the topology), and consequently, if we know the measure of these sets, that determines the measure on the entire space. Basic stuff. In particular, invariance means that if we pull a set back under the shift, its measure should be the same as that of the original set, which is hopefully the same as this condition. So we're interested in shift-invariant measures, which we can define just by saying what the measures of all these cylinder sets are. Okay, and so the most obvious example.
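The invariance condition just stated can be checked cylinder by cylinder: the preimage of a cylinder under the shift is the disjoint union of its three one-symbol extensions on the left. A minimal sketch, using as a stand-in the measure that assigns (1/3)^n to each cylinder of length n (the "most obvious example" of the next paragraph):

```python
# Checking shift invariance cylinder by cylinder: the preimage
# sigma^{-1}[x_0, ..., x_{n-1}] is the disjoint union over a in {1, 2, 3}
# of the cylinders [a, x_0, ..., x_{n-1}].

def mu(word):
    """Measure of the cylinder indexed by `word` (illustrative choice: 3^-n)."""
    return (1.0 / 3.0) ** len(word)

def mu_preimage(word):
    """Measure of sigma^{-1}[word]: sum over the three left extensions."""
    return sum(mu((a,) + tuple(word)) for a in (1, 2, 3))

# Invariance holds here: mu_preimage(w) equals mu(w) for every cylinder w.
```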
I like to do obvious examples; it always gives me more confidence when I get on to more complicated stuff. So the obvious example: take the (1/3, 1/3, 1/3) Bernoulli measure. In that case the measure of each cylinder is just one over three to the length of the cylinder, and this gives you a measure which is well defined and invariant, and somehow it looks like the natural measure to look at, at least on the Sierpiński triangle when you code it down; it's the easiest measure you can think of on the space of sequences.

You can generalize this and look at Gibbs measures. By a Gibbs measure I mean the following: you have a measure mu; you look at the measure of one of these cylinders and the measure of the shifted cylinder, one way around or the other, and you take the limit as n tends to infinity. If this limit exists for every infinite sequence x = (x0, x1, ...) in the space, and the limiting function is Hölder continuous, then you say that the measure is a Gibbs measure. So it's a general class of useful measures to look at on a subshift of finite type. The obvious example above is an example of a Gibbs measure, simply because when we apply the previous definition, the measure of the cylinder on top is one over three to the power n+1, the one on the bottom is one over three to the power n, the ratio gives you a third, and then you take the log and get minus log of three. So this is a Gibbs measure for a constant function.
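The computation at the end can be spelled out: for the uniform Bernoulli measure the ratio in the Gibbs definition is the same for every n and every sequence, so the limit defining the potential is just the constant minus log 3. A minimal sketch:

```python
import math

def mu(word):
    """Uniform (1/3, 1/3, 1/3) Bernoulli measure of a cylinder."""
    return (1.0 / 3.0) ** len(word)

def gibbs_ratio_log(x, n):
    """log( mu[x_0 ... x_n] / mu[x_1 ... x_n] ): the quantity whose limit
    in n defines the potential in the Gibbs property above."""
    return math.log(mu(x[: n + 1]) / mu(x[1 : n + 1]))

# Here the ratio is (1/3)^(n+1) / (1/3)^n = 1/3 for every n and every x,
# so the limit (the potential) is the constant -log 3.
```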
It's not the most exciting of examples. For the next most obvious example, you could take a Bernoulli measure where we choose different weights for the three symbols one, two and three; I'll take the weights to be simply p1, p2 and p3. We define the measure of a cylinder (which, once we know the measure of every cylinder, uniquely determines the measure on the space of sequences) simply to be the product of the corresponding weights: p-whatever times p-whatever times p-whatever, where the whatevers are the indices defining the cylinder. It's a standard definition, and if you carry out the previous analysis correctly, then the associated potential is something which takes values depending only on the first coordinate, and these values are log of p1, log of p2 and log of p3. In the case that they're all the same, p1 = p2 = p3 = one third, we get the same value as above. This explains why Bernoulli measures are examples of Gibbs measures with the previous definition: the potential is a continuous function on the space of sequences. It's locally constant, it only depends on the first coordinate, and because of the way the metric was defined, that makes it continuous. Okay, so this is very easy stuff, easy-ish stuff. As I said, generally one talks about Gibbs measures or equilibrium states; the words tend to be interchangeable nowadays.
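A sketch of this weighted example; the particular weights below are an illustrative assumption (any positive p1, p2, p3 summing to 1 would do):

```python
import math

# Illustrative weights for the three symbols (an assumption).
P = {1: 0.5, 2: 0.3, 3: 0.2}

def mu(word):
    """Product measure of a cylinder: p_{i_0} * p_{i_1} * ... * p_{i_{n-1}}."""
    out = 1.0
    for s in word:
        out *= P[s]
    return out

def psi(x):
    """The associated potential: locally constant, depending only on x_0."""
    return math.log(P[x[0]])

# The Gibbs ratio mu[x_0 ... x_n] / mu[x_1 ... x_n] equals p_{x_0} exactly,
# for every n, so the limit defining the potential is log p_{x_0} = psi(x).
```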
Well, they have a slightly different meaning. In the context where these potentials are Hölder continuous, you can apply a rich theory, dating back to such people as Sinai and Ruelle, to prove all sorts of things. But the Kusuoka measure does not have this property. It's a measure defined on the space of sequences, so in particular we can actually say what the measure of each cylinder is; that's how we define it. But when we try to work out the potential, it turns out not to be Hölder continuous, which causes some problems. So am I going to tell you what the Kusuoka measure is? Yes.

So here again, in order to define the measure on the space of infinite sequences of ones, twos and threes, I need to specify the measure of each cylinder set: for every choice of i0 up to i(n-1), where n can be arbitrarily large, I want to say what the measure of that cylinder is. In the case of Gibbs measures there's usually some function hanging around, as with the Bernoulli measures, but in this case the measure is actually defined using matrices. So instead of giving the general definition, I'll start with the classical example. The game there is this: each of the indices i0 up to i(n-1) is a number between one and three, so there are presumably three to the n different cylinders of length n. For each index I look at the corresponding one of these three matrices up there, and I take the product of the matrices. So in the case of Bernoulli measures you multiply numbers together; here I'm multiplying matrices together.
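Anticipating the full recipe (spelled out in the next few sentences: take the product, transpose it, and sandwich the extra matrix in between), the formula has the shape mu([i_0, ..., i_{n-1}]) = tr(A^T E A) with A = A_{i_0} ... A_{i_{n-1}}. The matrices below are NOT the genuine Kusuoka matrices (those come from the harmonic structure on the gasket); they are a degenerate stand-in, A_i = I/sqrt(3) and E = I/2, chosen only so that the consistency condition sum_a mu(w a) = mu(w) visibly holds. With this choice the formula collapses to the uniform Bernoulli measure, which makes it easy to sanity-check:

```python
import math

def matmul(X, Y):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

def trace(X):
    return X[0][0] + X[1][1]

# Degenerate stand-in matrices (an assumption, purely for illustration).
s = 1.0 / math.sqrt(3.0)
A = {1: [[s, 0.0], [0.0, s]],
     2: [[s, 0.0], [0.0, s]],
     3: [[s, 0.0], [0.0, s]]}
E = [[0.5, 0.0], [0.0, 0.5]]

def mu(word):
    """mu([i_0, ..., i_{n-1}]) = tr(A^T E A) with A = A_{i_0} ... A_{i_{n-1}}."""
    prod = [[1.0, 0.0], [0.0, 1.0]]
    for i in word:
        prod = matmul(prod, A[i])
    return trace(matmul(matmul(transpose(prod), E), prod))
```

With the genuine Kusuoka matrices the same function would give a measure that is not any Bernoulli measure, which is the whole point of the talk.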
And then I use this product twice, actually the same matrices: I take the product matrix, I take its transpose, and then I sandwich this extra matrix in between. So I'm defining my Kusuoka measure on the space of sequences of ones, twos and threes, the full shift on three symbols, by taking a product of matrices corresponding to the indices of the cylinder. The hope is that this measure is well defined, and it also turns out to be shift invariant. In this case you can check it by some sort of computation; in the more general case, which I haven't stated yet, it follows from the more general definition. But it gives rise to a probability measure defined on the space, which is also sigma-invariant.

So if you're an ergodic theorist you might ask: is this measure ergodic? And the answer is yes; it was proved in 1989 by Kusuoka that the measure is ergodic. 1989, incidentally, was the last time I was in Trieste prior to this week; I'm not sure if that's fate or not.

I haven't actually said much about why this measure was introduced, or who cares about it. Well, the people who care about it most are people who study Laplacians, but not Laplacians on Riemannian manifolds: Laplacians on fractals. For example, there is a theory of defining a Laplace operator on, say, the Sierpiński triangle, and the way it's constructed and defined works best with a certain class of measures, the so-called Kusuoka measures. Up until yesterday I had three slides following this one, telling you how the Kusuoka measure appears, a bit of its history, and something about harmonic measures and functions on the Sierpiński triangle, but I decided it was better to shift them to the end of the talk, in the hope that I wouldn't get there, and to talk more about the ergodic properties of this measure. So let's for the moment just say it's a funny
measure, with some connection to a different area, which is an ergodic shift-invariant measure on a well-known, friendly, trivial dynamical system, and we can start asking questions about it. Given this definition, the first thing we might ask is: is this measure a Gibbs measure? That is, when we look for the potential function, does it have nice properties? If it were Hölder continuous, so that we had a Gibbs measure with a Hölder continuous potential for this Kusuoka measure, then the good news is that we could just roll out the classical theory and prove all sorts of things: central limit theorems, large deviation theorems, stuff like that, and it would be easy. But the bad news is that this thing isn't continuous. In fact, there's a result from last year which shows that this function has a dense set of discontinuities. So not only is it not Hölder continuous, it's not very continuous at all. And this is my attempt to evoke the fact that it has a dense set of discontinuities: it's simply something I plotted in Mathematica, which conveys nothing in particular, except that it's meant to look like something not very continuous.

[In response to a question:] The limit exists almost everywhere, yes, it exists almost everywhere, but with respect to the measure itself, which is not overly helpful really. The potential doesn't have any good topological properties: it's not Hölder, it's not continuous, it's not whatever you'd ask for after that. It's a simply defined measure whose potential has bad properties.

Okay, so one still wants to prove things about it, and one has to develop some other approach. Our first result, or maybe our main result even, is that not only is it ergodic, it's also strongly mixing. Strong mixing simply means that if you take any two functions f1 and f2, and you compose one function with the shift map, which is the dynamics
here, multiply it by the other function, and then compare with the product of the integrals of the two functions, so that this is a correlation function between the shifted function and the second function, then this quantity tends to zero. So the dynamics spreads things out in some nice way. Moreover, it's actually exponentially mixing, in the sense that if we choose these functions to be more regular, say Lipschitz (this still says nothing about the potential, because the measure is given to us, but we test it with these Lipschitz functions), it transpires that the measure has exponential decay of correlations. This is a fact which is of course very easy to prove in the context of Gibbs measures, but it's not so easy to prove in this context. The rate of decay is given by this alpha, a number between zero and one, and in this particular case it's anything very close to five over seven; that's just an explicit calculation, because it's a very explicit measure. Okay, so it's a result that says that not only do we have ergodicity, we have strong mixing, and in particular mixing which is exponentially fast. In fact, once we have strong mixing we get things like exactness and so on, but let me concentrate on that.

This is a picture of my co-authors, Anders Johansson and Anders Öberg. This is Anders Öberg, I think at the Uppsala train station, which is now some sort of restaurant. The picture on the right is of Anders Johansson, who does not have many photographs you can track down; there's a picture of him explaining something very patiently to me, which usually he has to do very patiently, because I have trouble following what he's saying; it's usually complicated, and he's a smart guy. Anyway, these are my two co-authors, and this is the result.

And of course, once you can prove things like exponential mixing, the same method, or at least a
direct application of the result, gives you a host of other kinds of results. So let me mention three kinds of things that one would want to do. Once we know that the measure is ergodic, we can apply the Birkhoff ergodic theorem. The Birkhoff ergodic theorem simply says: you have the dynamics, the shift map; you have the measure mu, which is this fancy Kusuoka measure defined in terms of these funny matrices; and the theorem tells us that the temporal averages are the same as the spatial averages. That is, you average along an orbit and you get the integral of the function over the space, for almost all points x. It's what it always says, basically. So here's a picture of Corino, who actually gave the Birkhoff ergodic theorem as the definition of ergodicity, which is even easier.

Once you have the ergodic theorem, which is just a beautiful classical theorem, you can ask how to improve it. The classical improvements come from assuming more about the function f. If you only assume the function is L1, you can prove very little more, generally, but if you assume that the function is more regular, maybe Lipschitz, then you can ask whether you can prove things like central limit theorems, which concern different kinds of averages, or large deviation results; or, if you're particularly attached to the Birkhoff ergodic theorem, you can ask whether there's an error term in this convergence. So these are three classical things people tend to ask, at least in ergodic theory or smooth ergodic theory, once ergodicity is established. If you know that the measure is ergodic, can you prove these results as well? And the answer to all three is yes. There are analogues of these results in the case of classical Gibbs measures, Bernoulli measures, Gibbs measures with Hölder potentials; those are very classical and easy to prove nowadays, but in the context of the Kusuoka measure it's a bit more difficult. So here again is
just a statement of the Birkhoff ergodic theorem. We're taking the shift, a full shift on three symbols, an old friend; we're taking the Kusuoka measure, a new friend, which we know is ergodic, so the theorem applies. And then we want to prove a central limit theorem. Central limit theorems work on the principle that you replace the 1/n by 1 over the square root of n; you look at how things fluctuate when the scale is changed from 1/n to 1 over root n. So here is the same summation, k = 0 to n-1, the sum along the first n points of the orbit of a typical point x, but instead of dividing by n we divide by the square root of n, and then we ask: what proportion of points, what measure of points, has the property that this normalized average is within this range, alpha to beta, of the integral? This is what central limit theorems look like, and possibly the normal distribution looks like that; I always write this down from memory and usually get it wrong, but anyway, it should converge to the normal distribution as n tends to infinity. And indeed, in the case of the Kusuoka measure, this is still true. So the central limit theorem, which you can think of as an extension of the Birkhoff ergodic theorem in some sense, also holds in this context.

The second theorem I mentioned was to do with large deviations, and again it's a variant on the Birkhoff ergodic theorem. The Birkhoff ergodic theorem tells us, as I said, that for typical points the averages converge to the integral, a classical result. In large deviation results you ask: what proportion of the space has the property that this difference is greater than some fixed bound? So you know that these averages are converging to the integral of f d-mu; you look at the difference of these quantities, and you ask, well, how long do I
have to wait so that the proportion of the space for which this difference is bigger than epsilon is smaller than some particular value? These large deviation results typically say that this measure goes down exponentially fast. So the bound here is fixed, epsilon is fixed and bigger than zero, but the proportion of the space that hasn't quite made it yet is going down at some exponential rate. It's another kind of statistical result.

[Question from the audience about an n that should be capital.] Which one? That one? You are perfectly correct, it should be a capital N. So let's not go back to that. Yes, all the n's that should be capital N's should be capital N's, and all the ones that shouldn't be, shouldn't be. Yesterday, or maybe the day before, I read through the slides and there were endless T's that should have been sigmas; sometimes my transformations were T's. It's like when I try to speak Italian when I only speak Portuguese: there's some loss in translation between the two settings. But James Joyce certainly helps; I can't understand him in any language.

Okay, so again, the large deviation result can be thought of as a kind of generalization, a more sophisticated version in some sense, of the Birkhoff ergodic theorem, and it also holds for the Kusuoka measure. This is what the statement would say, more or less: level-two large deviation results, involving measures, about which I will say nothing. So that's two kinds of generalizations, if you like, of the Birkhoff ergodic theorem: the central limit theorem and the large deviation result. But the third thing I mentioned was error terms for the Birkhoff ergodic theorem. So yes, again, for the third time, maybe even the fourth: here's a statement of the Birkhoff ergodic theorem. We always have this result by ergodicity: if we average some nice function (L1 is enough in this case) along the orbit, the averages converge to the integral. But you might ask: for typical points, can we say how fast it converges?
Well, if we assume that the function is more regular, and in shift spaces Lipschitz tends to be a good notion of regularity, then the answer is yes. In fact, you can prove the following: if your function is Lipschitz on the space, so it has some nice regularity property, then the difference between these two quantities tends to zero at some rate (and in this case I've got the N correct, it's uppercase N). It goes something like this, which has a large number of confusing logs on top, but if you ignore those, it basically says that the thing goes to zero at least as fast as one over the square root of N, up to those corrections. So it's an error term in the ergodic theorem, and of course, since it's an almost-everywhere result, the big O here depends on your point: there's an implied constant which depends on the x at which you're applying it, and the statement is true for almost all points x. In fact, the proof of this last result, the error term, is extremely easy: it's an immediate consequence of the exponential mixing (in fact, any reasonable rate of mixing will do) and a bit of spectral theory, stuff that doesn't appear in courses so much anymore, but it's kind of straightforward to prove.

So these are just three applications of the ideas around exponential mixing and how it percolates through to ergodic theory and ergodic averages. Because we have these results on exponential mixing, or at least some machinery that gives it, these three applications hold for the Kusuoka measure, which unfortunately is not a classical Gibbs measure, but at least has these sorts of properties.

So let me say something about the strategy of the proof. I always hate talking about proofs in talks; I have this phobia that someone will suddenly point out a mistake. That hasn't happened yet; it's happened to other people in talks I've been to, but not in mine. So what happens here is that we'd like to
define a potential. We'd like the measure to be Gibbs, but it isn't; still, we can define this candidate for the potential: we just take the ratio of these quantities and take the limit. It's basically a Jacobian, as it used to be called in the old days, and it's not as good as we'd want. If it were a nice measure, we'd know what to do: we could apply classical Ruelle–Perron–Frobenius techniques, sometimes now called transfer operator techniques. Basically, we take the shift map and average over preimages, weighted by this potential. Then, if God were kind to us, this operator would have a spectral gap, and the spectral gap would imply mixing. This is what everyone does in courses nowadays, and then, using the usual duality between the transfer operator and the shift map, you get the result. So that's what we would do if it were a Gibbs measure.

The game here is that we can't use Lipschitz functions or Hölder continuous functions on the space, but we'd like to follow the same strategy. So we'd like to introduce a different space of functions, one that works, and we'd also like to prove that the corresponding operator still has a spectral gap, because then we can do just what we did before.

Okay, so what is this space of functions, which I'm going to call B? I think B is meant to invoke "Banach space" for some reason, but that's okay. Just by way of notation, let A_n be the finite sigma-algebra corresponding to cylinders of length n; it's a nested collection of sets, and as n gets bigger the sets get smaller. Then I want the expectation operator associated to these finite sigma-algebras. The phrase "expectation operator" sounds extremely pretentious here; all it means is the linear operator which takes a function defined on the entire space, measurable with respect to the Borel sigma-algebra, and gives you a locally
constant function, where a locally constant function is one which only depends on the first n coordinates. How does that work? Well, you look at the first n coordinates of wherever you are, and you average the function over the corresponding cylinder, and that's what you get. So it's just an approximation to your given function, and classically it's a conditional expectation operator.

And the Banach space? Well, it's going to be something like Hölder continuous functions, except it has to accommodate these discontinuities. The way it works is that we take some function f and look at its successive approximations under the expectation operators. The idea is that the differences between f and these approximations are getting smaller, and we ask that they get smaller at some exponential rate. If I were going to say something vaguely pretentious, I'd say it's something like moving from smooth functions to a Sobolev space, where you use integration rather than suprema, or something like that; I'd use words like that, but I won't. I'll just say that this is some way of defining a space which accommodates these rather bizarre potentials. And the potential psi which I defined before, associated to the Kusuoka measure, the thing that would have been the potential if it were Hölder continuous, sits inside this space, which is kind of reassuring. The space has some sort of norm, which is good, and it's just saying something about regularity; let me not say anything else about it. This space, of course, contains the Lipschitz functions, in fact all Hölder continuous functions if you choose a suitable theta, just because it measures how things approximate in an L2 sense, and Lipschitz functions approximate faster than required. So it contains the Lipschitz functions, and we can apply the theory to them if we want to prove things. And it has the property that the operator preserves the space, which is good; otherwise it would be a bit
So we have our Kusuoka measure, and we associate to it this hopeful potential, which is not Hölder continuous but is relatively nice, because it lies in this space B. The analogue of the old familiar transfer operator preserves the space. It has to fix the constants, because of the way it's normalized, and if you quotient out by the constants, the spectral radius of what's left is smaller than one; or, if you like, the operator has a spectral gap. So the original operator, acting on this funny Banach space, has a maximal eigenvalue at one, if I set it up that way around, and the rest of the spectrum lies in a ball of strictly smaller radius, i.e. a spectral gap. And that is exactly what you use to prove all these results.

You might hope to follow a traditional path, using things like Lasota-Yorke inequalities, but that doesn't work too well either. In fact the method of proof is not to work directly with this Banach space, but instead to look at a space of matrix-valued functions, simply because the measures are defined using matrices, and it turns out to be easier that way. Philosophically, the results are as in the setting where you work with these more general functions defined using expectation operators; it's just that, at the moment, rather than proving things directly with them, you have to go through this more indirect route.

Now, I can't remember what's on the next slide. It says 18 out of 23: generalization. So let me say the following. I defined the Kusuoka measure using three matrices, called A1, A2 and A3, together with a further matrix called curly E. If there were only one such measure, perhaps it wouldn't be so exciting, but in fact it's part of a general class of measures defined just by specifying matrices. More generally, you do the following: we look at a full shift on K symbols, where before K was equal to three.
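(Before going on with the generalization, here is a toy finite-dimensional illustration, with all numbers my own and standing in for, not reproducing, the operator on B, of why a spectral gap forces exponential decay of correlations.)

```python
import numpy as np

# A 3x3 doubly stochastic circulant matrix standing in for a transfer
# operator with a spectral gap (a toy stand-in, not the operator on B).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.3, 0.2, 0.5]])
r2 = np.sort(np.abs(np.linalg.eigvals(P)))[-2]   # second eigenvalue modulus

pi = np.full(3, 1.0 / 3.0)         # stationary (uniform) distribution
f = np.array([1.0, -1.0, 0.0])     # mean-zero observables
g = np.array([1.0, 0.0, -1.0])

# Correlations C_n = sum_i pi_i f_i (P^n g)_i; the spectral gap forces
# |C_n| <= const * r2^n, i.e. exponential mixing.
Cs = []
Pg = g.copy()
for n in range(12):
    Cs.append(float(pi @ (f * Pg)))
    print(n, Cs[-1], r2 ** n)
    Pg = P @ Pg
```

The printout shows |C_n| dominated by r2^n, with r2 about 0.26 here: the correlation dies at exactly the rate dictated by the second eigenvalue, which is the content of "spectral gap implies exponential mixing".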
Now K is arbitrary, and the matrices, which before were two-by-two, can be D-by-D; and we also have a positive definite D-by-D matrix, epsilon. (I'm not sure the previous one was positive definite, but maybe it was. And "S matrix" on the slide is of course a typographical error; it should just say "matrix".) Given these objects, you require the following property: if you take each matrix, multiply it by its transpose, and sum, you end up with the identity; and if you do it the other way round, with epsilon sandwiched between the products of matrices, you get epsilon back. There's also an extra strong irreducibility condition, which says roughly that there should be no finite collection of hyperplanes preserved by the matrices. Let me just skip that.

We then define the measure in exactly the same way. Using this finite collection of matrices, you cook up a new, rather funny class of measures: we define the measure of a cylinder just by looking at its indices. For these indices we take the product of the corresponding matrices from the list, take the transpose, multiply by the matrix epsilon, and multiply by the matrices we first thought of. We end up with a well-defined and shift-invariant measure, and the two conditions, one way round and the other, give you precisely well-definedness and invariance; that's why we have to ask for them.

[Answering a question.] No, there is no relation between D and K; any D that works will do. If you can find matrices satisfying these conditions, you're in good shape. I think you can probably find a continuous family of examples, because there's no restriction on the entries, and if you count dimensions it probably works. And if you take the extremely tedious case D equal to one, for example, then you can actually write Bernoulli measures this way; but that's not so difficult, since those are only one-by-one matrices.
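(Here is a small sketch of this matrix construction. The cylinder formula is my reconstruction of the verbal description, mu([i_1 ... i_n]) = Tr(E M M^T) with M the product A_{i_1} ... A_{i_n}, up to ordering conventions; the diagonal 2-by-2 matrices, with K = 2, are my own toy choice. They satisfy the two identities, though not strong irreducibility, since they preserve the coordinate axes, so they serve only to check well-definedness and shift-invariance numerically.)

```python
import numpy as np

# Toy instance: mu([i_1 ... i_n]) = Tr(E M M^T), M = A_{i_1} ... A_{i_n}.
# Well-definedness needs   sum_i A_i A_i^T     = identity,
# shift-invariance needs   sum_i A_i^T E A_i   = E.
# Diagonal matrices chosen for simplicity (they violate strong
# irreducibility, but both identities hold: 0.7+0.3 = 0.4+0.6 = 1).
A = [np.diag([np.sqrt(0.7), np.sqrt(0.4)]),
     np.diag([np.sqrt(0.3), np.sqrt(0.6)])]
E = np.diag([1.0, 2.0])

def mu(word):
    M = np.eye(2)
    for i in word:
        M = M @ A[i]
    return np.trace(E @ M @ M.T) / np.trace(E)   # normalised to total mass 1

w = (0, 1, 1, 0)
# Consistency: refining the cylinder on the right preserves mass.
print(mu(w), sum(mu(w + (j,)) for j in range(2)))
# Shift invariance: mass of the preimage, sum_j mu([j] + w), equals mu([w]).
print(mu(w), sum(mu((j,) + w) for j in range(2)))
# And it is genuinely not a product (Bernoulli) measure:
print(mu((0, 1)), mu((0,)) * mu((1,)))
```

The last line shows mu((0,1)) differing from mu((0)) * mu((1)), so even this reducible toy example already leaves the Bernoulli class, while the two sum rules confirm the role of the two matrix conditions.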
I don't think that's necessary, unless somebody points out that you need it for these conditions to hold; as far as I know, I only require these conditions. Okay. So you can cook these things up, for whatever reason, and then exactly the same method of proof gives you a similar result: we have a shift-invariant measure on, in this case, a full shift on K symbols, and it is ergodic; moreover it is strong mixing, which is better than ergodic; and moreover it is exponentially mixing when you look at Lipschitz functions, for example (or Hölder continuous functions; it makes no difference). In this case the alpha is more mysterious: I don't know explicitly what it is. In many given cases the general argument doesn't tell you explicitly what alpha is in terms of the matrices, but in special cases you can actually compute it, and that is what we did.

And I do know what's on the next four slides. What I would like to do, in the interests of lunch, is to skip through the next three slides. This one is the original definition of the Kusuoka measure: you have to look at harmonic measures, you have to look at harmonic functions. Ignore all these slides; keep going. And let me thank you for coming to the talk.