So, by popular request, tomorrow I will not give a lecture. Instead we will do some problem exercises here: some I will assign, and for some you will tell me "I did not understand this part" and I can explain it. So that is what you have to do tomorrow: come prepared with whatever difficulties you have and ask about them; otherwise I will make up some problems and exercises and show you how to do them. Then on the last day, Thursday, I will give a different kind of lecture, based more on slides than on the blackboard, because some pictures are needed and it was necessary to use a PowerPoint presentation. Now, yesterday we discussed the distributed-processors model, which is also called abelian networks; there are some nice papers by Levine which you can read to learn about this material from a somewhat different perspective, with different questions. Anyway, we discussed the general problem: there are these computers sitting there, you send in a message, a computer processes it in some way and sends messages to other computers, which then process those in their own way, and the internal states of the computers keep changing according to some fixed rules, whatever the rules of the program are. You look at the overall activity, and once it dies down you send in another message, and we asked whether we can describe this in some general way. For a general network it appears that when you send in a message, sometimes there is very little activity and sometimes one message gives rise to a huge amount of activity. So it is like avalanches in sand piles, and we can try to understand what happens in these kinds of systems. Of course, the details of what happens depend on what kind of messages you are sending and what kind of processing is being done.
And that is left to particular applications; the general idea is that the overall structure will continue to work, so whatever general theory we develop will remain useful, and anything else you need you will have to put in specific to the particular application you have in mind. But we discussed some special lattice models of these networks. We discussed the Eulerian walkers model: there is a lattice, at each site there is an arrow pointing to one of the neighbours, and the rule is that when the walker comes to a site, he looks at where the arrow is pointing, rotates it by 90 degrees in the positive direction, say, and then takes one step in the direction where the arrow now points, arriving at a new site. There too there is an arrow; he rotates it by 90 degrees, steps, and so on. What happens to this walker? Eventually the walker will perhaps leave the lattice; then you add another walker, which walks until it leaves the lattice, and so on. So this is a simple model in which a walker modifies the medium through which it moves, and the medium then affects the subsequent motion of the same walker, or of another walker introduced later. It introduces long-range correlations in the directions of the arrows in the medium in the steady state, which you can try to understand. And this can be generalized to any graph: you have some graph, at each vertex there is an arrow pointing to one of the neighbours, and at each site there is a fixed ordered list of neighbours; the rule is that you rotate the arrow to the next neighbour in the list, so if it was at neighbour number 5 it goes to neighbour number 6, then number 7, and so on, cyclically.
Then, on any such graph, with any initial orientation of the arrows, you can show that if you start a walker and let it walk, eventually it goes into a cycle, and that cycle is quite short: its length is exactly 2 times the number of links in the graph, and in the cycle each link is traversed once in each direction. So what happens is that initially there is some disordered arrangement of arrows, and as the walker moves it keeps reorganizing the medium, so that eventually it settles into a closed circuit of length 2E, a generalized Eulerian circuit in which every bond is visited exactly once in each direction. Now, there is one result which we did not mention last time which I want to mention today. Suppose I take this Eulerian walker model on the square lattice, start the walker at the origin, with all the arrows initially set at random, and let it walk. As it walks around, it "wants" the arrows in a particular order, so that it can go into a cycle and not keep coming back to the same site again and again; but the arrows were set at random, so they are not arranged that way, and the walker is reorganizing the medium. So what it does is this (I will just draw it schematically): at any time t it has reached some sites and not others, and the sites it has reached it typically organizes into a local Euler circuit. If you keep following the walker inside that region, it will not return to a site until every bond in the region has been visited once, and then it does it again; but when it comes out to the edge, there are bonds on the outside which it encounters and may have to traverse, and then it encounters something new.
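The cycle theorem is easy to check numerically. Below is a small sketch (the function names and the graph construction are my own illustration, not code from the lecture) that runs the rotor dynamics on a 3 by 3 grid graph, with an arbitrary fixed cyclic ordering of the neighbours at each site, and measures the period of the eventual cycle of the full state, walker position plus all arrows:

```python
def rotor_walk_period(adj, steps=100000):
    """Run the Eulerian-walker (rotor-router) dynamics on a finite connected
    graph and return the period of the eventual cycle of the full state."""
    n = len(adj)
    rotor = [0] * n          # each site's arrow: an index into its neighbour list
    pos = 0
    seen = {}
    for t in range(steps):
        state = (pos, tuple(rotor))
        if state in seen:                 # first repeated state lies on the cycle
            return t - seen[state]
        seen[state] = t
        rotor[pos] = (rotor[pos] + 1) % len(adj[pos])  # rotate the arrow
        pos = adj[pos][rotor[pos]]                     # step where it now points
    return None

def grid_adj(L):
    """Adjacency lists of the L x L grid graph, neighbours in a fixed cyclic order."""
    idx = lambda i, j: i * L + j
    adj = [[] for _ in range(L * L)]
    for i in range(L):
        for j in range(L):
            for di, dj in ((0, 1), (1, 0), (0, -1), (-1, 0)):
                ni, nj = i + di, j + dj
                if 0 <= ni < L and 0 <= nj < L:
                    adj[idx(i, j)].append(idx(ni, nj))
    return adj

adj = grid_adj(3)
E = sum(len(a) for a in adj) // 2       # 3x3 grid: 12 undirected edges
print(rotor_walk_period(adj))           # theorem predicts 2*E = 24
```

The theorem predicts a period of exactly 2E, each directed edge traversed once per cycle, regardless of the starting site or the initial arrows; you can change the initial rotor settings and the answer stays 24 for this graph.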
So, the region it explores keeps increasing, and as it explores more, it organizes the new parts too, adding them into the local tour. If you watch a movie of this, at some time t it has visited some set of sites, say within a radius r. The area is pi r squared, and the number of bonds multiplied by two directions is about 4 pi r squared. So I ask: in the last 4 pi r squared steps, how many times did it visit any given site in this region? The answer is that mostly it is 4: every site inside is visited 4 times, with a little bit of fraying at the boundary. So it is locally organizing things into a local Eulerian kind of circuit, with some unvisited sites at the boundary, but the region keeps growing. In the next 4 pi r squared steps it visits all the interior sites 4 times, and in the process adds something at the boundary, because once it has visited every site 4 times it has to increase the radius by order 1. So let us write the equation dr/dt = 1/r^2. This equation is asymptotically exact, given the result that each time it has formed a circuit, in the next order r squared steps it visits everything 4 times and the radius increases by order 1. And dr/dt = 1/r^2 implies that r grows as t to the power one-third. So the radius keeps increasing. Now, I do not fully understand why this local organization happens, but it does, and for arbitrary graphs as well: the arrows get organized into a local Eulerian circuit. So here, inside, the arrows have been modified.
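The growth law follows by integrating the rate equation, with an order-one constant c absorbed:

```latex
\frac{dr}{dt} \sim \frac{c}{r^{2}}
\;\Longrightarrow\;
r^{2}\,dr \sim c\,dt
\;\Longrightarrow\;
\frac{r^{3}}{3} \sim c\,t
\;\Longrightarrow\;
r(t) \sim (3c\,t)^{1/3} \propto t^{1/3}.
```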
To see the kind of form it takes, let me try to explain it in one dimension; it takes some time, but let me still try. In 1d each arrow points either left or right, and "rotating" an arrow simply flips it. Say this arrow points right, this one left, and so on, and the walker starts here: it flips the arrow at its site and steps in the new direction. If the arrows ahead point back toward the walker, it keeps going in the same direction, flipping each one as it passes; but when it meets an arrow already pointing along its direction of motion, it turns around. After turning around it encounters all the arrows it has just flipped, which now point against it, so it flips them again and sweeps all the way back across, going a little further at the far end before it reverses again at the next bad arrow. So in between, all the arrows have been made parallel, and at the boundary it keeps adding new arrows to the parallel region, which keeps growing. The two-dimensional picture I drew is the analogue of this one-dimensional picture. (Question about the exponent.) No, all the one-thirds are not the same: the values are equal, but not for the same reason; there is no obvious connection. In fact, if you could find a connection, so that you could calculate, say, the turbulence exponent in some way and get one-third, that would be something.
But I would just say that a one-third here or there does not have to do with this particular model. (Question: you said that the nodes that have been visited have been visited four times?) Within a finite time, yes: if you give the system a window long enough that all these sites could have been visited, you will find that it does visit all of them equally, four times. (Twice?) No, no: I said that in the last 4 pi R squared steps it visits each site four times; before that, fewer. In fact, if you look at the total number of visits, it is maximum in the middle, and the path the walker takes is roughly: go through everything, increase the radius by a little, go through everything again, increase the radius by a little, and so on. The path is not as trivial as I have drawn it, it depends on the details of the circuit, but it shows the property of self-organization. You start with a very random arrangement of arrows, but it gets modified in a suitable way into a particular structure which is nontrivial; it is not identical every time you do it, but some properties remain the same. (Question: why is that equation valid?) Very good. I said that R(t + 4 pi R^2) is more or less equal to R(t) + order 1; that was the argument, stated in words. I did not actually write down an equation, but if you accept this and Taylor-expand, R(t) + (dR/dt) * 4 pi R^2 = R + 1, then you get dR/dt ~ 1/R^2. And where did the starting equation come from? From the long argument: the system organizes the region visited so far into an Euler circuit, so wherever you put the walker it tries to visit all the organized sites, which takes order R^2 time, and in that time the radius increases by order 1.
(So you are saying that to increase the radius by order 1, it has to visit all the sites once more?) Yes, that is what the system does, because that is the way we have defined it: these rules give you this motion, and with other rules you would perhaps get some other motion. (Is the cycle result proved?) Yes, there is a proof that it goes into a cycle, though not all the steps in the argument I gave here are rigorously established; one can give a stronger, nicer proof, which I will not try to do right now. This result can also be generalized to other graphs, higher dimensions and so on, which I will also not do here; it is given in the reference. Next: there is a large class of models where at each toppling, what happens is not fully fixed but probabilistic. Sometimes it does this, sometimes it does that, and you specify the probabilities with which things happen. This was originally the rice pile model: instead of grains of sand you take grains of rice, you drop them, and they sometimes stick and sometimes move, the probability of sticking being a little different from the probability of moving. So you make a model in which with some probability the grain sticks and with some probability it moves; if it sticks, the critical height becomes a little bigger, and so on, whatever details you choose. The basic ingredient is that the toppling is stochastic, not deterministic: different topplings are independent events, but what happens in each toppling is decided by pulling out a random number.
Now, all these stochastic sandpile models can be considered as special examples of abelian models, because you just have to say that at each site there is a stack of instructions, and each instruction gives you the toppling rule: at this site, next time, the rule is "if the height is more than 5, send two particles north and one particle east". Once this toppling occurs, that instruction is erased and the next one is read, which may be somewhat different; these instructions can be randomly generated. So we imagine that an infinite list of instructions is already given, fixed, at each site. Then the pile is actually evolving by deterministic dynamics, the old argument we used continues to work, and the process is abelian: different topplings commute, and everything goes through. This result is not immediately obvious if you do not set it up this way. For example, we said the basic point is: take a pile, add at site i and topple, add at site j and topple; doing them in the other order does not matter. But for a stochastic sandpile, if you add here and something random happens, then add there and something random happens, and then repeat in the other order, you cannot be sure you get the same result. With our version of the stochasticity there is a buried deterministic instruction list inside each site, so the result is the same. And now I step outside and say: I cannot see how the system generates this internal list, so for me the evolution is effectively Markovian and stochastic; hence even in the stochastic Markovian model you have the same abelian property. Now, the abelian property means that if you act with a_i on configuration C, you get configuration C' with some probability Prob(C' | C), depending on the point of addition i.
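To see the instruction-stack trick in action, here is a minimal sketch (the model size, site names and helper functions are my own illustration, not code from the lecture): a 1d Manna-type pile in which every site carries a pre-drawn stack of toppling instructions, each instruction saying where the two ejected grains go. With the stacks fixed in advance the dynamics is deterministic, so adding particles in two different orders must give identical results:

```python
import random

def make_stacks(n, depth, seed=7):
    """Pre-draw, for every site, a fixed stack of toppling instructions:
    for each future toppling, the offsets (-1 or +1) of the 2 ejected grains."""
    rng = random.Random(seed)
    return [[(rng.choice((-1, 1)), rng.choice((-1, 1))) for _ in range(depth)]
            for _ in range(n)]

def relax(h, stacks, ptr):
    """Topple every unstable site (height >= 2) until the pile is stable;
    grains pushed past either end of the chain are lost."""
    n = len(h)
    while True:
        unstable = [i for i in range(n) if h[i] >= 2]
        if not unstable:
            return
        for i in unstable:
            h[i] -= 2
            for d in stacks[i][ptr[i]]:   # read the next buried instruction
                if 0 <= i + d < n:
                    h[i + d] += 1
            ptr[i] += 1

def add_and_relax(h, stacks, ptr, sites):
    for s in sites:
        h[s] += 1
        relax(h, stacks, ptr)

n = 8
stacks = make_stacks(n, depth=500)
c0 = [1, 0, 1, 1, 0, 1, 1, 1]

h1, p1 = c0[:], [0] * n
add_and_relax(h1, stacks, p1, [2, 5, 2, 3])

h2, p2 = c0[:], [0] * n
add_and_relax(h2, stacks, p2, [3, 2, 2, 5])   # same additions, other order

print(h1 == h2 and p1 == p2)                   # abelian property: True
```

Because the stacks are fixed in advance, both runs read exactly the same instruction at each toppling of each site, which is what makes the topplings commute; with fresh random choices drawn each time, the two final configurations would in general differ.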
So, these operators are still matrices, but now they are not 0-1 matrices; they are some other matrices, and you can still write their evolution rules. What is the rule here? Let us take the Manna model. In the Manna model one says that the stable heights are 0 and 1, and if z >= 2, two particles leave, each in an independently chosen random direction. So suppose there is a pile in some configuration of 0s and 1s, I add here, and now I am going to throw out two particles: I pick the first particle and decide at random which way it goes, and for the second particle I also decide at random. With some probability both go north; with some probability one goes north and one goes west; and so on. (Sorry, I beg your pardon, I should have written i here: the configuration is C, I have added at i, and this is the probability that the final configuration is C'.) Now, for this model we again define operators a_i; they still commute with each other, and they again satisfy some algebra. What is the algebra now? With the four neighbours of i on the square lattice labelled i1, i2, i3, i4, the operator equation is a_i^2 = [(a_{i1} + a_{i2} + a_{i3} + a_{i4})/4]^2. And then we said: well, if these operators can be simultaneously diagonalized, I can write their eigenvalues, and the eigenvalues satisfy the same equation. So if a_i acting on the eigenvector gives the eigenvalue, also called a_i (I do not have to keep writing the eigenvector), then these numbers satisfy a_i^2 = [(a_{i1} + a_{i2} + a_{i3} + a_{i4})/4]^2. These are n equations, i = 1 to n: coupled quadratic equations in n variables.
So, I want to solve these coupled quadratic equations in n variables, and in general that is a hard problem: coupled polynomial equations are not trivial to solve. The usual rule in algebra textbooks is that you can eliminate variables: if I have P1(x, y) = 0 and P2(x, y) = 0, some polynomials in two variables, I can eliminate y and get another equation P3(x) = 0. This is a much higher-order equation than the originals, but you can always do it: convert the two equations into one equation in the x variable only. I will not prove this result; you know enough algebra to take some coupled polynomial equations and try eliminating one variable yourself. Yes, there was a question: right, the left-hand side here is a_i squared, and this is just one particular example of something more general; I could make a model in which with probability p1 the particle goes north and with probability p2 it goes east, p1 not equal to p2, and write similar equations for that. Now, for general coupled quadratic equations this is hard, but our equation is nice and easy, because I can take the square root: a_i = +/- (a_{i1} + a_{i2} + a_{i3} + a_{i4})/4. This new equation is linear, which is much better: I know how to solve linear equations. So you write a_i = eta_i (a_{i1} + a_{i2} + a_{i3} + a_{i4})/4 with eta_i = +/- 1. For each choice of the signs eta_i you get n coupled linear equations, which you can solve; that gives one solution, and the 2^n choices of the eta_i give 2^n distinct solutions.
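As an illustration of how much easier the linear problem is than the quadratic one, here is a sketch for a one-dimensional analogue (this discretization is my own: an open chain where a grain that leaves the pile contributes a factor 1; the boundary convention is an assumption for illustration, not something fixed in the lecture):

```python
import numpy as np
from itertools import product

def manna_eigenvalues(n, etas):
    """Solve a_i = eta_i * (a_{i-1} + a_{i+1}) / 2 on an open chain of n
    sites, where a neighbour outside the chain contributes the number 1
    (a grain leaving the pile corresponds to the identity operator)."""
    M = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        M[i, i] = 1.0                     # the a_i term
        for j in (i - 1, i + 1):
            if 0 <= j < n:
                M[i, j] = -etas[i] / 2.0  # -eta_i/2 * a_j
            else:
                b[i] += etas[i] / 2.0     # neighbour outside the chain: a = 1
    return np.linalg.solve(M, b)

n = 4
# the steady-state sector: all eta_i = +1 is a discrete Laplace equation
# with boundary value 1, so a_i = 1 at every site
print(manna_eigenvalues(n, (1, 1, 1, 1)))

# each of the 2^n sign choices gives one set of eigenvalues
sols = [manna_eigenvalues(n, e) for e in product((1, -1), repeat=n)]
print(len(sols))   # 16 = 2^4
```

The all-plus choice reproduces the steady-state eigenvalue 1 at every site, and each of the 2^n sign patterns yields its own set of eigenvalues, exactly as claimed above.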
And the size of my matrix was exactly 2^n by 2^n, so I have generated all the solutions of my algebraic equations. Yes, please. (Question: as operators acting on the state space, you cannot just take square roots of everything?) Right, but the trick is that here the a_i are just complex numbers, eigenvalues, and I am solving for the eigenvalues; I would have to go back and find the corresponding eigenvector separately. And no, the eigenvectors are not real states of the system in the standard sense; they are just eigenvectors of the evolution operator. There is one eigenvector which is the steady state, and there are 2^n - 1 others, which are all some crazy states. They are connected to the steady state by perturbation: if you take the steady state and perturb it in some way, you go to a non-steady state, which may decay back into the steady state. So, just side-tracking: suppose I start in some steady state, perturb it at one site i1 at time t1, relax the system, then perturb it at some site i2 at time t + 1, and so on, and then ask for some properties of the system. These are time-dependent correlation functions in the steady state, but time-dependent correlation functions depend on the full matrix structure of the a's, because they connect the steady state to other states. And my perturbations are not always just an a; they could be other perturbations.
For example, I may measure the height at some site; that corresponds to an operator z_i which, applied to a state, gives the height at site i. But if I apply z_i and then evolve, I do not stay in the steady-state sector; I connect to other states. So if I want to measure the height-height autocorrelation function in the steady state, I have to go outside the steady-state sector, and then I need all the eigenvectors and eigenvalues of the a's; the steady state alone is not enough. That is why this algebra is useful in general: even when you are studying a model where you cannot solve the algebra, it is good to realize that all these eigenvectors exist, and maybe you can approximate them or do something with them. The fact that there is a matrix structure for these operators, which can be handled in some cases, gives you a feeling for them that can be useful in more general settings. So that is what I would say. All right: the 2^n choices of the eta_i give me 2^n sets of eigenvalues (a_1, a_2, ..., a_n). That sounds fine, but of course it is very formal: I said "go and solve it", but who is actually going to solve even these coupled equations? Last time we had the equation where a_i equals just the average of the nearest neighbours, with no eta's; that was Laplace's equation, and even there it took a little hard work, it was not at all clear and easy to get the solution. Now you have put in all these eta's, plus in some places and minus in others, so it is going to be more complicated. So although these are linear equations, are they easy to solve?
It turns out that sometimes, if you cannot solve a problem, you can realize that the same problem was also studied by other people; then you can look up the literature and see what they know about it. It may not always help in solving the problem, but it helps you understand it. Now, this equation looks like del^2 a = eta a: the eigenvalue equation for a Schrodinger operator in a random potential, where the neighbour sum plays the role of del^2, the random signs play the role of a potential V taking values +/- 1, and the corresponding energy is 0. So this is the Anderson model, the Anderson localization problem. If you know something about the eigenvalue structure or the eigenvectors of the Anderson model, you can use that knowledge to understand this problem a little better, or vice versa. Or you can say: I wanted to understand the properties of the Anderson model near E = 0 in my units; Anderson, however, wanted to study it as a function of E. So this is, in effect, the solution of the Anderson model at the band edge, and the question is how much the solution of the Anderson model, the localization problem, tells you about the Manna model. If I can solve the localization problem, can I determine the properties of the Manna model? That still remains unsolved: at this level the connection is established, but at the next level, using the connection to get some nontrivial result about either the Anderson model or the sandpile model, remains nontrivial. There were some results we deduced about the sandpile model; for example, we said early on that, even without knowing the details of the Manna model, the expectation value of the avalanche size s grows as L^2: the mean number of topplings per added particle goes as L^2, because it is a diffusive process.
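In symbols, and ignoring boundary terms, the correspondence is just a rearrangement: since eta_i^2 = 1, multiplying the eigenvalue equation through by eta_i gives a tight-binding Schrodinger problem at zero energy, with a binary random potential (here 2d is the coordination number, 4 on the square lattice):

```latex
a_i=\frac{\eta_i}{2d}\sum_{j\,\sim\, i} a_j,\quad \eta_i=\pm 1
\;\Longleftrightarrow\;
-\sum_{j\,\sim\, i} a_j + V_i\,a_i = 0,\qquad V_i = 2d\,\eta_i=\pm 2d .
```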
So, what does it say about the Anderson model? There are no topplings in the Anderson model, but there must be some equivalent statement about the eigenvectors and eigenvalues there. What is it? Figuring out this correspondence is nontrivial; it has not been done in great detail, and I will leave it to you as an exercise. No, it has not been done, so it is a good exercise: you could write a PhD thesis based on that work. Now I just want to mention one result about recurrent states and forbidden sub-configurations in the Manna model. Take the Manna model we already defined: the maximum stable height is 1, and on toppling, each of the two particles goes in an independently chosen random direction. Then what is the set of recurrent states? That is all we ask. The answer turns out to be simple, but I will give it, because you can use this answer to develop other answers (you could determine it yourself; it might take you 10 or 15 minutes, maybe a day): all stable states are recurrent. Proof: start with any configuration of the Manna model, some sites 1 and the others 0. From this configuration, with finite, non-zero probability, you can go to the all-empty state. We will prove this; but once you have reached the empty state, you can reach any other configuration from there just by adding particles. So all states can be reached from all other states: that is the proof. How do I ensure that I can reach the empty state from any given configuration? Like this: say there is a 1 here but I want a 0. I add a particle here, and when the site topples, with finite probability both particles are thrown in the direction I choose, say both upward and outward; once that happens, I am in a state like this.
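The finite-probability argument can be illustrated in one dimension by simply committing, at every toppling, to the particular instruction "both grains go right" (which the stochastic model picks with probability 1/4 each time). This is my own sketch of the proof idea, not the lecturer's code: adding one grain at the rightmost occupied site triggers a cascade that flushes two grains off the right edge, removing one occupied site per addition until the pile is empty.

```python
def empty_out(h):
    """Drive a stable 1d Manna configuration to the all-empty state by
    adding grains and always choosing the 'both grains go right' instruction
    (one particular instruction sequence, realized with finite probability).
    Mutates h in place and returns the number of added grains."""
    n = len(h)
    additions = 0
    while any(h):
        i = max(k for k in range(n) if h[k])   # rightmost occupied site
        h[i] += 1                              # push it to height 2
        additions += 1
        while any(x >= 2 for x in h):
            j = next(k for k in range(n) if h[k] >= 2)
            h[j] -= 2
            if j + 1 < n:
                h[j + 1] += 2    # both grains sent right (probability 1/4 per toppling)
            # grains pushed past the right edge leave the system
    return additions

h = [1, 0, 1, 1, 0, 1]
print(empty_out(h), h)   # 4 [0, 0, 0, 0, 0, 0]
```

Each addition removes exactly one occupied site, so the number of additions equals the initial number of occupied sites; since every step of this particular history has non-zero probability, the empty state is reachable from any configuration.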
So I can empty out the bottom row like this, just throwing the particles upward and off the lattice, then I empty out the next layer, and the next, and eventually I empty out everything: end of proof. The proof is technically very straightforward, but you have to realize that you can do this: from any configuration, the all-empty state is reachable with some finite probability, and once it is reachable, all other states are reachable and all states are recurrent. But now they are not recurrent with equal probability; that is the difference. Yes, please: they are reachable, but no, there is no detailed balance. Yes, please: no, because some of these eigenvalues are actually zero. When you work it out, even for a 4 by 4 board or some such thing (and a 4 by 4 board is not so trivial: there are sixteen eigenvalues a_i), you will realize that some of them are zero; clearly a_i = 0 solves this equation. What does it mean? It turns out that if the eigenvalue is zero, it says something about the a's: a applied to some state gives zero, which means the state is a transient state. But it need not be an actual configuration; it may be some difference, a vector like C1 - C2. If you apply a to this you get zero. That state is not a physical state, some configurations occur with positive weight and some with negative weight, but there is an eigenvector like this. But then I realize that the matrices a are not always diagonalizable: they are only upper triangular; they have only the Jordan canonical form. So even though the eigenvalues are there, the eigenvectors need not be. Is this point well known to everybody? I am sure it is known, but is it appreciated by everybody?
I have a matrix: first row (0, 1), second row (0, 0). All the eigenvalues of this matrix are zero, but there is only one eigenvector, (1, 0); there are not two eigenvectors. So there is this notion of a generalized eigenvector. In physics one quite often works with symmetric or Hermitian matrices, where this problem does not arise: once such a matrix is written in its eigenvector basis it is upper triangular, but it is also diagonal. Here, however, you typically encounter matrices that are not fully diagonalizable: there are not enough eigenvectors in the first place; there are eigenvalues with no eigenvectors corresponding to them. Yes, you could do something like that: we know the space we started with, so I can take the span of these vectors, look at the orthogonal subspace, whatever it is; that is basis-independent, and then I can choose whichever basis I like to describe the set of states. No, that we have already shown, without solving those 2^n equations. No: the property of recurrence only says that the steady state, the eigenvector with all a_i = 1, exists; whether that eigenvector has all entries zero or non-zero is the question. "All states are recurrent" is a statement about that one eigenvector; all the other eigenvectors are a totally different question. Very good.
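This standard example is worth checking numerically; a quick sketch with NumPy confirms that the algebraic multiplicity of the eigenvalue 0 is two while the geometric multiplicity is only one:

```python
import numpy as np

# the nilpotent 2x2 Jordan block from the lecture
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])

evals, evecs = np.linalg.eig(A)
print(evals)                      # both eigenvalues are 0 (algebraic multiplicity 2)

# geometric multiplicity = dim ker(A) = 2 - rank(A) = 1, so there is
# only one linearly independent eigenvector, namely (1, 0)
print(np.linalg.matrix_rank(A))   # 1
```

So the matrix has a repeated eigenvalue but a one-dimensional eigenspace: it cannot be diagonalized, only brought to Jordan canonical form, which is exactly the situation encountered for the Manna operators.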
So in this case it was simple. It turns out that the probabilities of configurations are much more unequal in the Manna model than in the deterministic-toppling sandpile model: there, all recurrent configurations were equally likely; here, the ratios of the probabilities of different configurations can be 10 to the power 10 or whatever, some very big number, exponential in n or exponential in the square root of n or some such thing; it depends on the details of the model. Even in 1d one can try to solve this kind of thing, the 1d Manna model, and that is already moderately nontrivial, because even the steady state has not yet been worked out. Yes: the matrices are not diagonalizable, but they always have a characteristic polynomial, and I can look at its roots. So the eigenvalues always exist as roots of the characteristic polynomial; sometimes they correspond only to generalized eigenvectors rather than true eigenvectors. In the Jordan canonical form, the problem only occurs if you have repeated eigenvalues, and here the eigenvalue 0 is repeated quite a lot; but then one has to find out exactly how many times it is repeated, and what the space of null eigenvectors is, and that is still a nontrivial question, even for this "trivial" one-dimensional model. The 2d Anderson localization problem is hard; the 1d Anderson model is perhaps easier, but still not fully solved yet either.
So let us change the Manna model a little. The model is still on the square lattice for the moment, and the critical height is still 2, so the stable heights are 0 and 1 as we said; but now, on toppling at a site, the two outgoing particles go together, either both vertically (one up, one down) or both horizontally (one left, one right), each choice with probability one half; no other choices are allowed. Now I can ask the same question again: are all states recurrent in this model? The answer is no. Are there forbidden sub-configurations? The answer is yes. Let us see how that works; let me give the result first and then prove it. We call it an FSC, a forbidden sub-configuration: a 2 by 2 block of zeros can never be created. Proof: suppose the block is not all zeros yet, so there is at least one particle somewhere in it, and you want to get rid of it. If you topple somewhere else, you can only add stuff inside the block, which does not help. So you add a particle at that site and topple it, hoping the particles escape; but when that site topples, whichever of the two choices is made, one of the neighbours receiving a particle is inside the block, because each site of a 2 by 2 block has one horizontal and one vertical neighbour within the block. So you can never get rid of the last 1 in the block, and you can never get all four zeros: that is the proof. It is just an extension of the proof we used earlier, with one small bit added: because the model allows two possible topplings, you have to check that under neither choice can you get rid of this 1. Then I can ask: are there any other FSCs? The answer is yes: if you work harder, you will find that the 3 by 3 pattern with a single 1 in the middle and eight 0s around it is also forbidden. The proof is the same kind of extension of the previous one: if the centre were 0 it would already contain a forbidden pattern, which we know cannot occur, and you can show by induction that you cannot create this one either.
So, then I can ask: what is the next forbidden sub-configuration, and the next, and the next? That is what we did last time, and we produced a list; we can keep adding to this list: this is also an FSC, this is also an FSC, this is also an FSC. Then we will ask: is there a simple test that puts all these FSCs together, a burning test for this problem? It turns out that this has not been possible yet; one feels one should be able to do it, but it has not been done. So, finding all the possible recurrent states of the Manna model: the answer is not known; a stochastic extension of the burning algorithm is not yet known. That is an interesting unsolved problem. So, suppose I were to pose this problem to a high-school student. I would say: there is this 8 by 8 chess board, and you put particles on it, say you start with all ones, the simplest starting configuration; then you can add anywhere, and you can topple any way you like, either up-down or left-right. Can you produce a state in which there are no particles? If the person is smart he will be able to prove that no, it is not possible; we already gave the proof here, and a high-school student can come up with this proof. So, then I ask: what is the least number of particles which have to be present in order for a state to be recurrent, that is, reachable from this state? This is the extension of the old argument: earlier we said that the all-empty state is reachable from any state, and from that state you can go to any other state. Now we say that the all-ones state is reachable from any state, because you just fill in all the zeros; if from there you can reach any other state, then you are done. 
So, what is the minimum number of pieces which will be left on the board if you are allowed to keep on toppling, starting with the fully filled board? There is a 1000-rupee prize for being able to solve this problem. What is the minimum number of pieces left on the board if you are allowed to topple any way you like? I could make it 1000 euros; no, 1000 rupees, I do not want to pay that much money. Ok. So, that was about the stochastic sandpiles; what I want to do in the remaining time is the structure of finite abelian groups, which we did not do last time. Someone asks whether the total number should be minimum: no, the question is what the minimum number is; prove that the minimum number is 28. So, this material is moderately elementary for mathematicians. If you talk to a mathematician and say, I am studying the sandpile model, there is a group there, a very nice abelian group, they will say: abelian groups are trivial, what is there to learn about abelian groups; or rather, finite abelian groups are trivial. What they are referring to is a result like this: if you have a finite abelian group of order N, then the group is always of the form G = Z_{d1} x Z_{d2} x ... x Z_{dr}, where d1, d2, ..., dr are positive integers greater than 1, and each d_i is a multiple of d_{i+1}. That is the result, and I want to explain what this theorem says and how it is useful for us. So, firstly, what is Z_{d1}? This is very simple stuff: you take some operator a1, and my algebra is that a1^{d1} = 1. So, the group Z_{d1} is the set of operators a1^r, with r from 0 to d1 - 1, because a1^{d1} is the same as a1^0. 
So, this group is called Z_{d1}; that is the name mathematicians have given it, and I know what it is: you take an operator and raise it to all possible powers. Using this reduction rule, all the powers can be restricted to be less than d1; you multiply two such operators and you get another one in the same list, and it satisfies all the group properties, and there is nothing much else to it. That is my abelian group, that is Z_{d1}, easily understood. What is Z_r x Z_s? It is a group with two generators a and b, with a^r = b^s = 1. The general element is a^{r1} b^{s1}, with r1 between 0 and r - 1 and s1 between 0 and s - 1. You make all such products, multiply them out, and that forms a group; it is a product group, there is no big deal, and I am just making you realize that there is no big deal, so you do not have to be scared of the group theory here. So, Z_r x Z_s is a group in the sense that you take these operators, you can multiply them, you can always reduce the powers to less than r and s, and that set of elements forms a group under multiplication. Easy. But here is an interesting result: Z_3 x Z_5 is the same as Z_15, because I can take the single element ab and construct all powers of it, and it turns out that only the 15th power of ab is the identity, while all the previous 14 powers are not; so you get 15 different elements this way. So this group actually has only one generator, whose 15th power is the identity, and therefore it is Z_15. This can be pushed further: you can take two numbers like this and write the same group with a smaller number of generators. 
But there is also another result, which is also easy to see: Z_2 x Z_2 is not equal to Z_4. If you take a and b with a^2 = b^2 = 1, you get a group with four elements, 1, a, b, ab, and the square of every element is 1. But that group is not Z_4, because there is no element of order 4: in Z_4 there is an element whose fourth power, and no lower power, gives the identity, while here every element squared is already 1. So Z_2 x Z_2 is different from Z_4. Now, the theorem says that any finite abelian group can always be written in this product form, which by itself did not seem to me very surprising. But it says in addition that d1 can be chosen to be a multiple of d2, d2 a multiple of d3, d3 a multiple of d4, and that is an extension of this argument: if the two numbers are not coprime, you can take the lcm of the two numbers as the bigger factor and the gcd as the other one. Let me write that result down; I will not prove it, it is left as an exercise: Z_r x Z_s = Z_{lcm(r,s)} x Z_{gcd(r,s)}. That is a nice result, and it is an exercise you can prove yourself; do not read it anywhere, you will be able to prove it by yourself. It is an interesting, non-trivial result which would not strike you immediately; if I had not written this equation on the board you might not have realized it, but once it is written, the proof is almost automatic, and I will not go through it. So, this is the result we actually want to use in our case. How can we use it in my problem? We will need another quantity, which is the real reason for going through all this: the Smith normal form of an integer matrix. You give me any integer matrix Delta, and the Smith normal form theorem says that any such n x n integer matrix can be written as Delta = A D B, where A and B are n x n matrices; I think the word is unimodular. 
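The exercise stated above, Z_r x Z_s = Z_{lcm(r,s)} x Z_{gcd(r,s)} while Z_2 x Z_2 is not Z_4, can be checked by brute force. A small Python sketch (the function names are mine): two finite abelian groups are isomorphic exactly when they have the same multiset of element orders, and in Z_m x Z_n the order of the pair (a, b) is the lcm of the orders of a and b.

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def order_multiset(m, n):
    """Sorted list of element orders in Z_m x Z_n (written additively).

    The order of a in Z_m is m // gcd(a, m); the order of the pair
    (a, b) is the lcm of the two individual orders.
    """
    return sorted(lcm(m // gcd(a, m), n // gcd(b, n))
                  for a in range(m) for b in range(n))
```

Comparing these multisets confirms Z_3 x Z_5 = Z_15, distinguishes Z_2 x Z_2 from Z_4, and verifies the lcm/gcd identity for other small pairs.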
Unimodular means the determinant is plus or minus 1; then the inverse is again an integer matrix. If I am not allowed to work with fractions, inverses do not always exist, because if the determinant is some big number then I cannot do the inverse within the integers; but if the determinant is plus or minus 1 then the inverse is easy to find. That is the reason for the notation SL(n) or GL(n) over the integers: the unimodular matrices form a subgroup of the full set of integer matrices, and it is an interesting group. So, A and B are unimodular, and D is diagonal with entries d1, d2, d3, ..., 1, 1, 1, where d1 is a multiple of d2, d2 is a multiple of d3, and so on. It is a mouthful, the statement is quite long, but the proof is straightforward. Given any integer matrix, I define an equivalence relation between matrices: A is equivalent to B if A can be changed to B by row or column exchanges, or by adding an integer multiple of one row or column to another. This kind of notion is familiar to you; I am sure you have seen it before. You take a matrix and exchange two rows; that gives another matrix which is equivalent to the previous one by my definition, and likewise you can exchange two columns. And you can add one row to another, once or twice or three times, or subtract it, and that again gives an equivalent matrix. You are familiar with this in the context of evaluating determinants, because the determinant of matrix A is the same as the determinant of matrix B; but here we have a slightly more extended notion: not only are the determinants equal, the matrices themselves are equivalent in this generalized sense. Very good. So, this is clear. 
So, you give me the matrix Delta. I look at all the entries, find the smallest nonzero entry in the whole matrix, and bring it to the corner of the diagonal by row and column exchanges. A row exchange is implemented by multiplying the matrix by a matrix on the left: take the identity with two rows exchanged, with ones everywhere on the diagonal except in that two-by-two block; multiplying on the left gives you the matrix with those two rows exchanged, and applying the same matrix on the right exchanges two columns instead. So I keep track of what I have multiplied by, collecting these factors into A on the left and B on the right, and looking at the matrix in the middle. All matrices can be reduced to the form where there is some small entry in the corner; then I subtract multiples of its row and column from the others and bring the other entries in that row and column to 0. If they do not come out exactly 0, because I am only allowed to use integers and cannot use fractions: suppose the pivot is 3 and another entry is 7; subtracting twice leaves a remainder 1, so I exchange to bring the 1 down to the corner, and then I can change the 7 to 0 and the 3 to 0. Is the proof clear? Not everything was written down, but the argument is rather obvious. This is what you do in solving linear equations: you convert the system into a form in which there is a diagonal and everything else is 0, and we are saying you can do the same thing here. Yes, the signs are kept track of up to plus or minus 1: an exchange matrix has a negative determinant, and that bookkeeping sits in the unimodular factors. So the matrix Delta is of the form A D B, where A and B are unimodular matrices which record whatever exchange and addition operations we have done. 
So, my matrix finally looks like this diagonal matrix, but I obtained it by subtracting rows and columns; so I write it multiplied by the inverse operations, lower-triangular matrices with ones on the diagonal, on the left and right, and so on. Very good. The hardest part is just this: everything can be reduced to the diagonal, and it so happens that at each stage all the remaining matrix elements are multiples of the current pivot; if some entry were not a multiple, subtracting would leave a smaller nonzero remainder, contradicting the pivot being the smallest. So first I produce this corner entry and a smaller matrix remains; then I produce the next corner and a still smaller matrix, in which all the elements are now multiples of the pivot; and it goes on like that, and that is how you get d1, d2, d3 with each diagonal entry a multiple of the next. It is a powerful result. It is not taught in class, because in physics people just deal with diagonalizable matrices; here we are working with integers, I am not allowed to use non-integer matrices, so I cannot always use the inverse of Delta. Working only with integer matrices, this is the best you can do, and it is good enough for our purpose. So why is it good enough? Last time we had some Delta matrix for this problem, and I worked out the determinant and it turned out to be 192. Now, 192 can be written as 16 times 12, which is 2^6 times 3. So what are the corresponding d1 and d2 which are possible? They should be such that their product is this number. The factor 3 should occur only in d1: if the 3 occurred in d2 but not in d1, then d1 would not be a multiple of d2. So the 3 occurs in d1, and the power 2^6 is divided between them, with the power of 2 in d2 no larger than the power in d1. 
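The reduction just described can be coded directly. The sketch below is my own small implementation, only for small nonsingular integer matrices: it repeatedly brings the smallest nonzero entry of the remaining block to the corner, subtracts integer multiples of its row and column, and finally enforces the divisibility chain by replacing pairs of diagonal entries with their lcm and gcd, which is exactly the exercise result Z_r x Z_s = Z_{lcm} x Z_{gcd}. Running it on the 2 x 2 lattice toppling matrix, whose determinant is the 192 mentioned above, is a good check.

```python
from math import gcd

def smith_diagonal(M):
    """Diagonal of the Smith normal form of a nonsingular integer matrix,
    listed in the lecture's convention: each entry is a multiple of the
    next one (d1, d2, ..., with d_{i+1} dividing d_i)."""
    A = [row[:] for row in M]
    n = len(A)
    for t in range(n):
        while True:
            # bring the smallest nonzero entry of the remaining block to (t, t)
            entries = [(abs(A[i][j]), i, j)
                       for i in range(t, n) for j in range(t, n) if A[i][j]]
            if not entries:
                break
            _, pi, pj = min(entries)
            A[t], A[pi] = A[pi], A[t]            # row exchange
            for row in A:                        # column exchange
                row[t], row[pj] = row[pj], row[t]
            # subtract integer multiples of row t and of column t
            for i in range(t + 1, n):
                q = A[i][t] // A[t][t]
                for j in range(t, n):
                    A[i][j] -= q * A[t][j]
            for j in range(t + 1, n):
                q = A[t][j] // A[t][t]
                for i in range(t, n):
                    A[i][j] -= q * A[i][t]
            # done with this corner once its row and column are clean
            if all(A[i][t] == 0 for i in range(t + 1, n)) and \
               all(A[t][j] == 0 for j in range(t + 1, n)):
                break
    d = [abs(A[t][t]) for t in range(n)]
    # enforce the divisibility chain: replace each pair by (lcm, gcd)
    for _ in range(n):
        for i in range(n):
            for j in range(i + 1, n):
                g = gcd(d[i], d[j])
                if g:
                    d[i], d[j] = d[i] * d[j] // g, g
    return sorted(d, reverse=True)
```

The returned diagonal for the 2 x 2 lattice matrix multiplies out to 192, with each entry a multiple of the next, as the theorem demands.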
So, the possibilities are 3 times 2^6, or (3 times 2^5) cross 2, or (3 times 2^4) cross 2^2, or (3 times 2^4) cross 2 cross 2, or (3 times 2^3) cross 2^3, and so on; there are not too many choices. So, the fact that you are given the determinant, an integer with a definite factorization, and that the factorization can be written in this divisibility form in only a small number of ways, means that d1, d2, d3 are more or less fully determined by the order of the group; you do not have much choice left. Of course, the fact that the product of the d's must equal the order already constrains the numbers, but the requirement that each be a multiple of the next puts much more constraint. So, we are doing well. Now, given Delta, I can determine the matrices A and B; A and B are not unique for a given Delta, but D is unique. So, let me consider an operator e_alpha, defined in terms of the particle addition operators a_j as e_alpha = product over j of a_j^{B_{alpha j}}. Then clearly e_alpha^{d_alpha} = identity. What did I do? I raised it to the power d_alpha: e_alpha^{d_alpha} = product over j of a_j^{d_alpha B_{alpha j}}, and d_alpha B_{alpha j} = (D B)_{alpha j}, because D is a diagonal matrix by construction. But D B = A^{-1} Delta, so this is product over j of a_j raised to (A^{-1} Delta)_{alpha j}; and product over j of a_j^{Delta_{i j}} = 1, that was my toppling equation, so the whole product is the identity. So this e_alpha has been constructed explicitly in terms of B, an integer matrix. 
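The counting of allowed factorizations of 192 can also be automated. A small recursive sketch (mine, not from the lecture) enumerates all ways of writing an integer N as d1 * d2 * ... with every factor greater than 1 and each later factor dividing the earlier one, which is exactly the constraint the structure theorem imposes:

```python
def divisor_chains(n, prev=None):
    """All factorizations n = d1 * d2 * ... with each d_i > 1 and each
    d_{i+1} dividing d_i; prev carries the previous factor downward."""
    if n == 1:
        return [[]]
    chains = []
    for d in range(2, n + 1):
        if n % d == 0 and (prev is None or prev % d == 0):
            for tail in divisor_chains(n // d, d):
                chains.append([d] + tail)
    return chains
```

For 192 = 2^6 * 3 the factor 3 is forced into d1, as argued above, so the chains correspond to the partitions of the exponent 6; there are 11 of them, and indeed only a handful have few enough factors to fit a small lattice.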
So, I know these numbers, and these e's are actually the generators of my group: e_alpha raised to the power d_alpha is the identity, and different alphas give you different, independent generators. So we have generated the group, and we have explicit generators for it. What good is all this? The point of all this is to get a nice characterization of the set of recurrent configurations. We said there is this set of recurrent configurations, and it was kind of messy, because there were some forbidden sub-configurations and there were no easy coordinates for this set. Now we can define coordinates along a torus. Each recurrent configuration C is labeled by, what shall we call them, phi_1, phi_2? No, phi is a bad symbol; r_1, r_2, r_3 is good for me. Here r_1 takes values between 0 and d_1 - 1, r_2 takes values between 0 and d_2 - 1, and so on, all possible values, with no constraints. All products e_1^{r_1} e_2^{r_2} e_3^{r_3} give you all possible group elements, and for each of these there is a unique recurrent configuration. So I have a unique label for each recurrent configuration, and sums over recurrent configurations can now be done easily, because the r's just run uniformly from 0 to their maximum. Yes, sorry, I beg your pardon: r_2 takes d_2 values, and you can choose them to be 0 to d_2 - 1 or 1 to d_2, whichever way you like. So, very good. This gives me the characterization, and I think I only need to do one more thing and then we are through for today. There is a very nice notion called a toppling invariant. We had this matrix Delta. Yes, so there is a torus with extent d_1 in one direction, d_2 in another, d_3 in another. 
So, r_1 is the coordinate along the first direction, r_2 along the second, r_3 along the third, and so on up to r_r, the last one. And r_1 takes values 0 to d_1 - 1, r_2 takes its values similarly; these are just torus coordinates, and they are very easy to write down. In principle, if you want to sum over the recurrent configurations, you can just do the summation in terms of the coordinates r, and they are easier than the other description. Now, there is the matrix Delta, and using its inverse you can define I_i = sum over j of (Delta^{-1})_{ij} z_j. You give me any configuration z of the sandpile, and I define this number I_i for that configuration; it is a weighted linear combination of all the z's in the problem. For each configuration you define I_1, then I_2, and so on; there are n of these numbers. Then I do the following: I take the configuration z, topple it at any site I like to get a new configuration z', and ask what happens to I_i when the configuration changes. How much does I_i change? That is not a hard exercise; all of you can do it in your head or on the piece of paper in front of you. So, what is delta I_i = I_i(z') - I_i(z)? I know how much the z's change: on toppling at site k, z'_j = z_j - Delta_{kj}. So delta I_i = sum over j of (Delta^{-1})_{ij} times delta z_j, and delta z_j is minus a row of the matrix Delta itself; so the change is an entry of -(Delta^{-1} Delta), which for our symmetric Delta is just minus delta_{ik}, and in any case it is always an integer. 
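The little exercise, that toppling changes each I_i by an integer, can be verified exactly with rational arithmetic. A Python sketch (the helper functions are my own, and I use the symmetric 2 x 2 lattice toppling matrix with determinant 192 from earlier as the example):

```python
from fractions import Fraction

def inverse(M):
    """Exact inverse of a nonsingular square integer matrix,
    by Gauss-Jordan elimination over the rationals."""
    n = len(M)
    A = [[Fraction(M[i][j]) for j in range(n)]
         + [Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for c in range(n):
        p = next(r for r in range(c, n) if A[r][c] != 0)
        A[c], A[p] = A[p], A[c]
        piv = A[c][c]
        A[c] = [x / piv for x in A[c]]
        for r in range(n):
            if r != c and A[r][c]:
                f = A[r][c]
                A[r] = [x - f * y for x, y in zip(A[r], A[c])]
    return [row[n:] for row in A]

def invariants_mod1(delta, z):
    """Fractional parts of I_i = sum_j (Delta^-1)_{ij} z_j."""
    dinv = inverse(delta)
    n = len(z)
    return [sum(dinv[i][j] * z[j] for j in range(n)) % 1 for i in range(n)]
```

Toppling at a site subtracts a row of Delta from z, so all the fractional parts come out unchanged.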
So, the individual terms in I_i were actually fractions, because Delta inverse has fractional entries, not integer ones; but when I topple and recompute the sum, the change is always an integer. So the result, which is straightforwardly derived, is this: I_i(z), defined as sum over j of (Delta^{-1})_{ij} z_j and taken mod 1, is a constant. If I topple at a site, this quantity does not change mod 1: the full number may change, but its fractional part does not change at all. So that is an invariant under toppling, and I have a large number of these invariants, for an arbitrary matrix Delta. These toppling invariants are very useful in understanding the structure of the problem. So then I have to ask: I have got these n toppling invariants, what do I do with them? Firstly, n is a very large number in many of these problems, so you have a lot of invariants; then you should ask whether they are all independent, that is, how many independent toppling invariants you can construct. Suppose I have a quantity I_1 which is invariant mod 3, and I_2 which is invariant mod 5. Then 5 I_1 + 3 I_2 is invariant mod 15: because I_1 is invariant mod 3, 5 I_1 is invariant mod 15; and because I_2 is invariant mod 5, 3 I_2 is invariant mod 15; so this combination is invariant mod 15. It is a single number, but it is actually good enough, because it takes 15 different values, and those values can be used to separate the different equivalence classes. So this is a single invariant which is equivalent to the two invariants together. 
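The collapsing step can be checked in a couple of lines; this tiny sketch (names mine) confirms that the combination 5*I_1 + 3*I_2 mod 15 takes all 15 values, so it separates every pair of classes that the two original invariants separated:

```python
def combined(i1, i2):
    """Collapse an invariant i1 (defined mod 3) and i2 (defined mod 5)
    into the single mod-15 invariant 5*i1 + 3*i2 discussed above."""
    return (5 * i1 + 3 * i2) % 15
```

Distinct pairs (i1 mod 3, i2 mod 5) always map to distinct values mod 15, which is the content of the argument on the board.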
So, these two invariants are in some sense independent, but they can be collapsed into a single invariant, this mod-15 invariant, and you can do the same for the sandpile in general: all those n invariants which we constructed can be reduced to just r invariants, each corresponding to one of the integers d_1, ..., d_r. So, I will stop here. As you can see, there is some interesting group structure which emerges, but it turns out that the utility of this group structure in solving the kinds of problems we were interested in has not been so large. One can be a pure mathematician and go off on a tangent and study these problems in great depth. I actually had a friend, Professor D. N. Verma, whose name was mentioned by Shahin in his lectures in connection with the Verma modules; he was a mathematician, and he used to get very excited: oh, there is a group here, and this and that. So, different people have different interests, but I think the problems are interesting looked at from different perspectives, and I am only pointing out that the group structure is nice and interesting, and some of this material which perhaps you did not learn before, like the Smith normal form, is useful for you to know and maybe will be useful later; that is an application of the Shankar theorem. How many of you remember the Shankar theorem? Did nobody come for yesterday's video? Some came, ok. So, something which does not appear to be useful today may be useful tomorrow; that is the general idea. Ok, let me stop here.