So, let us start. The first thing we want to discuss is the format of the exam. After some initial consultation with a small sample of you, I came upon this mode: the exam should be closed notes rather than open notes. It is not fixed, I am making a suggestion; the exam can be either closed notes, where you are not allowed to consult anything, or open notes, where you are allowed to consult your notes. Of course, the exam questions will be different for the different choices. So, which one is preferred? Closed notes is simpler; open notes is a much tougher exam. [General agreement on closed notes.] Very good. Then the duration: is 2 hours okay, or is 3 hours desirable? [Students: 2 hours is good.] So, 2 hours, closed notes. I think 2 hours is right because it is going to be a tough exam; if you only make it 1 hour it will not work. Not that it will be unreasonably tough, but you should allow yourself a little bit of time. So, 2 hours, closed notes. Then there was a question: should it be 5 questions out of 6, or 5 out of 5, or 4 out of 5, or 3 out of 3? Or I can assign you 6 questions and you do as many as you like, with the credit proportional to how many you do. So, whether you say 4 out of 5 or 5 out of 5 does not make so much difference. Which one do people prefer? [Students: 5 out of 6.] Okay. It gives you a little bit of choice, but it also gives me a measure of how much you have learnt; I could also make it 1 out of 2, and that is not discriminating enough. So, it will be 5 out of 6 questions, written, closed notes, 2 hours. Settled. Next, some previous difficulties and previous comments: let me re-explain one thing which seems to have bothered people.
One question which seems to have bothered people is this: is the group over the set of recurrent states, or over the set of operators, and what is the relationship between the two? For finite groups there is no real problem. The group we defined was actually over the set of operators: these were the operators a_i; you can multiply them and what you get is another operator, and the set of operators is finite because some higher powers of the operators reduce to smaller ones. But suppose you have any set of operators G_i which form a closed set under multiplication, and I apply G_i to some vector psi, obtaining psi_i. Now, if two different operators are applied to the same vector, you must get different answers, because the inverses exist. So, it is clear that the number of vectors is the same as the number of operators: psi_i is not equal to psi_j if i is not equal to j, and then there is no problem with the counting. So, to recap: we discussed that the number of recurrent states is equal to the number of operators. Then, by the matrix tree theorem, this number of states is equal to the determinant of Delta. But the determinant of Delta is also equal to the number of spanning trees on the graph, where the graph is understood as the graph of the lattice plus a sink site, to which all the extra particles which are lost (if there are any) are added; you construct spanning trees on this graph, and that gives you the number of recurrent states.
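As a small illustration (not from the lecture itself), the matrix tree theorem can be checked directly on tiny graphs: the number of spanning trees equals the determinant of the graph Laplacian with the row and column of one vertex (playing the role of the sink) deleted.

```python
import numpy as np

def spanning_tree_count(adj, sink=0):
    """Matrix tree theorem: the number of spanning trees of the graph with
    adjacency matrix `adj` equals any cofactor of its Laplacian, i.e. the
    determinant of the Laplacian with the sink row and column deleted."""
    adj = np.asarray(adj, dtype=float)
    lap = np.diag(adj.sum(axis=1)) - adj          # Laplacian: degrees minus adjacency
    keep = [i for i in range(len(adj)) if i != sink]
    return round(np.linalg.det(lap[np.ix_(keep, keep)]))

# 4-cycle: removing any one of its 4 edges leaves a spanning tree, so the count is 4.
c4 = [[0,1,0,1],[1,0,1,0],[0,1,0,1],[1,0,1,0]]
# complete graph K4: Cayley's formula gives 4^{4-2} = 16 spanning trees.
k4 = [[0,1,1,1],[1,0,1,1],[1,1,0,1],[1,1,1,0]]
print(spanning_tree_count(c4), spanning_tree_count(k4))  # -> 4 16
```

The same cofactor computation, applied to the lattice-plus-sink graph, is what counts the recurrent sandpile configurations.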
And then we pointed out that this spanning tree problem is also related to the resistor network problem. I did not give a full proof of the Kirchhoff theorem, but stated it: the effective resistance is equal to the number of spanning trees on one graph divided by the number of spanning trees on a different graph, where the second graph is obtained by taking the old graph and collapsing two sites into one. [Question from the audience.] There are a lot of states here; these are my states; one operator takes me to this one, another takes me to that one, and so on. So, for any starting reference state you can construct one operator for each of the other states. This works for the infinite case too, except you have to be careful in counting; the infinite-dimensional case is a little bit of a problem because you must be sure you have not missed something, whereas in the finite case the point about missing something is obvious. So, now what we want to do is make a more precise, one-to-one correspondence between the spanning trees and the recurrent configurations of a sandpile. I take some lattice — we are always working with this sort of traditional example, and since that is too big, I take a small lattice. Now, I add these extra links and the sink site outside, and I draw a spanning tree on this bigger structure; separately, I draw a sandpile configuration. The number of possible sandpile configurations is equal to the number of spanning trees, so it is possible to make a one-to-one correspondence between these equal-size sets, but we would like to make the correspondence in a way which seems natural. Is there a nice way to assign a tree to a configuration, such that you give me the tree and I will give you the corresponding sandpile configuration, and vice versa? How do you do that? The answer comes using the burning algorithm.
So, what we do is start with this lattice, and let us say it has heights 3, 3, 1, 2, 3, 0, 3. I apply the burning procedure in parallel, step by step: in the first step I scan the whole lattice and burn all the sites which can be burnt; then I go back and do it again, and again. The rule is that a site can be burnt if its height is greater than or equal to its number of unburnt neighbours, and I imagine that the sink site is always burnt from the start — it is not really part of my problem. So, in the first step this site can burn, so I burn it; the fire from the sink hits this one and burns it; this one also gets burnt, the fire reaches it from here; this one does not burn right now, but this one can be burnt, and this one cannot. In the next step this site can be burnt, so the fire reaches there; I draw a link showing where the fire goes. So, there is a burning path to each site, in the end all the sites get burnt, and what you get is a spanning tree. That is the spanning tree I attach to the configuration. For example, this site has two unburnt neighbours, and its height, 3, is bigger than or equal to the number of unburnt neighbours, so it will be burnt, and so on. The basic rule, then: draw the burning paths; the burning paths form a spanning tree; and different configurations give rise to different burning paths, hence different spanning trees. But there is a little bit of a problem. This site will burn whether its height is 2 or 3, since the rule is only that the height should be greater than or equal to the number of unburnt neighbours. So, it is not clear that the mapping is one to one: different height configurations can lead to the same burning path. That is a problem, but I have a way out: the point is that this site has two input directions from which the fire can come.
So, one of those directions I will attach to height 2 and the other to height 3. I adopt a preference rule, say N > E > S > W (north, east, south, west), which says: if the height at my site is the highest possible, burn it from the first direction among the available burning paths — you cannot burn from everywhere, but among the directions that work, use the first in the ordering. If the height is lower than that, go to the next direction; and the number of choices you have at each stage is always equal to the number of possible heights. Now I have a unique one-to-one correspondence between burning paths, i.e. spanning trees, and recurrent configurations: if the height is 3 I burn using this direction, because north comes first, but if the height is 2 I burn like this. Is this clear? The preference rule itself you can choose however you like — you can even make it different at different sites; the details do not come into play, though it is nice to make it uniform. In any case, once you have fixed it, the one-to-one correspondence between spanning trees and burning paths is there, and for each spanning tree I have a unique assignment of heights. Let us go through the next one: I come to this site in the next iteration; it now has only two unburnt neighbours, so it will burn whether the height is 2 or 3, and it can burn along two possible paths — like this or like this. Which arrow do I assign? I assign the upper arrow when the height is as large as possible: the first allowed direction goes with the maximum height.
So, I will not pick this path, I will pick that path; if the height were 2 here, then I would choose the other one. And you can generalize: we choose a site which can be burnt, and choose a direction along which it burns; that direction has one, two or three possible choices, and I pick the first choice if the height is the maximum possible, the second if it is the next possible height, and so on. If there is only one choice, I just burn — nothing to decide. [Question from the audience.] Once again: here I will not use this direction, because this site is burning at the second time step, not the first; if the height had been 3, it could already have burnt in the first time step. Now the height is 2, it has two unburnt neighbours, and it can be burnt because the height is greater than or equal to the number of unburnt neighbours. So, the possible heights at the moment of burning are 2 or 3, and the fire can come from here or from here: if the height is 3 you burn from the preferred direction, which is this one; if it is 2, I burn from the other. I take the first direction for the highest possible height, the next for the next, and so on. Clearly the rules could have been slightly different, but it does not matter: there is a one-to-one correspondence. [Question.] Yes, you can do the reverse, but note that my fire is going inward and I am drawing the direction of burning, so the arrows point in. [Question about counting unburnt neighbours.] Yes, everything is consistent. And the last question: yes, finding a burning path this way is the same as finding a spanning tree.
Okay, and vice versa: if you take a spanning tree and a site burnt from here, I know what height it must have had in order for this edge to have been selected for burning. So, it is perfect — there is no problem; it is a one-to-one rule for assigning heights to spanning trees. The interesting point about this rule is that it is very non-local. If you give me a tree — I can even draw something like this — I can assign heights to it, but the process of assigning heights to a tree depends on what happened far away in different places, because I have to see how much has burnt so far from the left, from the right, and so on. So, the rule is very non-local, and local properties like heights of the sandpile become non-local properties of the spanning tree, and vice versa. [Question.] It does not depend on the place where you start, but the time ordering has to be there; it is sequential in time: at time one you burn as many sites as possible, then in the next time step as many as possible, and so on. You can make other rules, but this is the one we use. [Question.] Whether a site can be burnt is determined; only the direction from which it burns has to be decided, and you choose the direction based on the height and the allowed rules, and then you are done. The final site, whatever its height — 0, 1, anything — burns once everything else is burnt, using whichever direction my rules give. Very good. So, that gives me the one-to-one correspondence between spanning trees and height configurations. What is the use of this? Well, the use is not so large, as I said, because the correspondence makes some local properties of trees very non-local in the sandpile and vice versa.
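The parallel burning procedure described above can be sketched in a few lines. This is a minimal illustration, not the lecturer's code; it only performs the burning test (does every site burn?), without the direction-preference bookkeeping that extracts the spanning tree.

```python
def burns_completely(z):
    """Burning test sketch: repeatedly burn every site whose height is >=
    its number of unburnt lattice neighbours; the sink (everything outside
    the n x n grid) counts as burnt from the start.  A stable configuration
    `z` is recurrent iff every site eventually burns."""
    n = len(z)
    unburnt = {(i, j) for i in range(n) for j in range(n)}
    changed = True
    while changed:
        changed = False
        for (i, j) in sorted(unburnt):           # one parallel sweep
            nbrs = [(i + di, j + dj) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))]
            k = sum(1 for p in nbrs if p in unburnt)
            if z[i][j] >= k:
                unburnt.discard((i, j))
                changed = True
    return not unburnt

# the all-3 configuration is always recurrent ...
print(burns_completely([[3, 3], [3, 3]]))   # True
# ... while two adjacent 0s form a forbidden sub-configuration.
print(burns_completely([[0, 0], [3, 3]]))   # False
```

Because burning is abelian, the order within a sweep does not affect which sites end up burnt.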
However, some properties can still be determined. In particular, suppose there is a big sandpile and I ask: what is the probability that the height at a given site is 0? What does that mean in the burning picture? It means everything else burns and this site burns last: height 0 implies all neighbours burn earlier. The probability of height 0 is the number of recurrent configurations of this type divided by the total number of recurrent configurations, which is det Delta, the size of the recurrent set. But I can also count configurations of this type in the spanning-tree language: my selected site must be a leaf site of the tree, because it has to burn last. I guess the notation is clear — a leaf site of a tree is an end-most site. In a tree like this, this site is a leaf site, this site is a leaf site, this site is not. All right. So, how do I find the probability that my site is a leaf site? Very easily: erase the site — remove it from the graph — find the spanning trees on the rest, and then attach one extra bond; the new site will then be a leaf site. So, the probability equals the number of trees with the site removed, divided by the total number of trees. By my definition the site can only burn when its number of unburnt neighbours is 0, so all the other sites must have burnt before; they all have burning paths, and then I add one edge to attach this site, and it is a leaf site. So, now: what is the number of trees with one site removed? [Question from the audience.]
The algorithm is: you come to a site and burn it; at the time of burning, more than one bond for reaching that site may be available, but the number of available in-directions is equal to the number of heights at which the site could burn — I may burn a site when the height is 2 or 3, for instance. So, the number of choices of in-arrows equals the number of allowed heights, and I make the correspondence one to one by my rule, N > E > S > W, or some rule or the other. Now, the number of trees with one site removed is not a very hard problem: I take the graph, remove the site, find all possible spanning trees on the remaining set, and once I have a tree on the remaining set I just add one incoming edge to the removed site. So, the probability becomes det Delta' / det Delta, where Delta is our original matrix and Delta' is the new matrix obtained by modifying the row and column corresponding to the site. This calculation is of course non-trivial, but let us see how it is done. I write Delta' = Delta + delta, where the small matrix delta has most entries 0 and a few entries +1: the 4 bonds into the site are converted from -1 to 0, and the diagonal entry of the central site you can adjust as needed — you can leave it as 1 if you like. So, the new matrix is obtained from the old one by modifying a finite number of rows and columns: delta is mostly 0, and Delta is the old matrix.
So, this ratio is equal to det(1 + Delta^{-1} delta). Now, delta is a small matrix in the sense that, although it has the same n x n dimension as Delta, its non-zero entries are confined to a finite region; so when I multiply Delta^{-1} by delta, the non-trivial part of the product is confined to a small block. If I write out 1 + Delta^{-1} delta as a matrix, it is 1s down the diagonal except for a small block which is non-trivial; delta is exactly the correction I have to add to Delta to make Delta'. [Question.] Yes, the matrices have the same dimension — that is how you multiply them — but Delta^{-1} delta has very many 0s and a small number of non-zero entries. So, the matrix 1 + Delta^{-1} delta is still n x n, but most of the diagonal is 1 and there is a small block which is not; the determinant of the whole matrix then equals the determinant of the small block, because the rest is just the identity. So, the calculation reduces to computing the determinant of a 5 x 5 matrix: the central site and its 4 neighbours are the only ones where the entries change, and the entries of the block are the matrix elements of Delta^{-1}_{ij} delta_{jk}. So, you need the matrix elements of Delta^{-1}, but they are easy to compute — or we have already done it in some way: we calculated Delta^{-1} because we wrote down all the eigenvectors and eigenvalues of Delta.
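The determinant identity being used — det(Delta + delta)/det(Delta) = det(1 + Delta^{-1} delta), which collapses to the determinant of the small perturbed block — is easy to verify numerically. The sketch below uses a generic well-conditioned matrix as a stand-in for the toppling matrix; the 3 x 3 block size is arbitrary (the lecture's case is 5 x 5).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 12
Delta = rng.normal(size=(n, n)) + n * np.eye(n)   # stand-in for the toppling matrix
delta = np.zeros((n, n))
delta[:3, :3] = rng.normal(size=(3, 3))           # perturbation on a finite block

# ratio of the two big determinants ...
lhs = np.linalg.det(Delta + delta) / np.linalg.det(Delta)
# ... equals det(1 + Delta^{-1} delta), which is the identity outside the
# perturbed columns, so only the small block's determinant survives.
M = np.eye(n) + np.linalg.inv(Delta) @ delta
rhs_small = np.linalg.det(M[:3, :3])
print(abs(lhs - rhs_small) < 1e-9)
```

Columns of M outside the block are standard basis vectors, so Laplace expansion along them reduces det(M) to the block determinant, exactly as argued in the lecture.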
So, in the end, for this calculation the matrix elements of Delta^{-1} are non-trivial in general, but they have been worked out; this is called the lattice Green's function. I put in that stuff and do all the calculation, and I get: the probability of height 0 equals the determinant of a 5 x 5 matrix, which works out to (2/pi^2)(1 - 2/pi). [Question.] No — you can write delta on the left or on the right, and in each case you get the same answer. [Question: I did not understand the very first step, when we wrote det Delta'.] Delta' is an n x n matrix: you can put a 1 in the deleted site's diagonal entry and keep it n x n; delta is then the difference of the two, which is 0 in most places and non-zero only in a few elements, so this can be calculated, and it turns out to be this very nice-looking number. This is the value away from the boundary, deep inside, where all the matrix elements of Delta^{-1} are taken at their bulk values; L is large here, otherwise these numbers depend on the size. So, this holds in the large-L limit, away from the boundary. [Question.] The small matrix delta is fixed — there is no choice about it; its entries are 1 and 0, because it is the difference between the original matrix and the matrix with those 4 edges into the removed site deleted.
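As a rough numerical cross-check (my own sketch, not from the lecture), one can simulate the standard abelian sandpile — stable heights 0 to 3, toppling at height 4 sends one grain to each neighbour, boundary grains are lost — and compare the observed fraction of height-0 sites at the centre with the bulk value (2/pi^2)(1 - 2/pi) ≈ 0.0736. The lattice size and sample counts here are arbitrary, so the match is only statistical.

```python
import math, random

def relax(z, n):
    """Topple until stable: a site with >= 4 grains sends one grain to each
    of its 4 neighbours; grains that fall off the edge go to the sink."""
    stack = [(i, j) for i in range(n) for j in range(n) if z[i][j] >= 4]
    while stack:
        i, j = stack.pop()
        while z[i][j] >= 4:
            z[i][j] -= 4
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                a, b = i + di, j + dj
                if 0 <= a < n and 0 <= b < n:
                    z[a][b] += 1
                    if z[a][b] >= 4:
                        stack.append((a, b))

random.seed(1)
n = 11
z = [[0] * n for _ in range(n)]
hits = total = 0
for step in range(30000):
    z[random.randrange(n)][random.randrange(n)] += 1   # drop a grain at random
    relax(z, n)
    if step > 5000:                                    # discard the transient
        hits += (z[n // 2][n // 2] == 0)
        total += 1

exact = (2 / math.pi**2) * (1 - 2 / math.pi)           # bulk P(height = 0)
print(round(exact, 4), round(hits / total, 3))
```

On a small lattice the centre value is only approximately the bulk value, so expect agreement to a couple of percent, not exact equality.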
So, I have no choice about the small delta — it is a fixed matrix, and I just calculate; except in the diagonal, where you have to put the appropriate entry. The matrix elements of Delta^{-1} involve 1/pi; actually it is a very nice non-trivial result which you should look up. The homework exercise I give you is to calculate things like Delta^{-1}((0,0),(1,1)) and Delta^{-1}((0,0),(0,1)) — the matrix elements of Delta^{-1} between nearest neighbours — but you can do the same problem with second neighbours, third neighbours, fourth neighbours. It turns out they are always of the form a + b/pi, where a and b are simple fractions, rational numbers, and as you go further out the numbers become bigger; the same thing happens on the triangular lattice: you always get something plus 1/pi times something else. Why is there a pi there? I do not know; it just happens. And note: you cannot have two adjacent 0s — those are forbidden configurations — but two 0s are allowed if they are far apart. So, the question asked was: what happens if you have two 0s at a distance? A 0 here and a 0 here: what is the probability that z(r1) = z(r2) = 0? The method still works: all I have to say is that both sites — both points, nodes — should be leaf sites. Then I again construct the difference matrix delta and calculate, but now the matrix is 10 x 10 instead of 5 x 5; I can work it out, and the answer turns out to be, when r1 and r2 are far apart, essentially the square of this number: Prob[z(r1) = 0, z(r2) = 0] ≈ Prob[z(r1) = 0] Prob[z(r2) = 0], with a correction term which can be worked out from the properties of these determinants.
Now it is a 10 x 10 determinant, so I have to work harder — but you do not have to work out the full determinant: the correction goes as 1/r^{2d} times a number. The relevant matrix elements decay with r, there are many of them, and you have to take the appropriate powers; you can check that in 2d the correction goes as 1/r^4, and in general dimension as 1/r^{2d}. This calculation I will not do on the board; you are referred to the paper, which has a lot of calculation. Very good. So, now what we want to do is study a different problem. Directed sandpiles are a particular version of these kinds of sandpile models, and they are particularly simple — simpler than the one we just discussed — so the answers can be obtained in more detail, more easily, with less work; that is what we would like to discuss now. [There was a question before: why are the determinants positive?] Firstly, I can just calculate them and they turn out to be positive. But more fundamentally, these were well-defined problems: I started with a well-defined problem whose answer I know is positive, and I have not made a mistake so far in the formulation, so the determinant is positive. You are asking why I should have known beforehand that the answer is positive: because it is the probability of something — a number divided by another number — the argument works without any absolute value.
So, if I have calculated correctly, they must be positive. [Question.] Yes — it is the number of spanning trees, so it is positive. It is not true for all matrices Delta; but all the matrices Delta which correspond to the adjacency structure of a graph have a spanning-tree interpretation and therefore give positive numbers. By definition our Delta is not an arbitrary matrix — it comes from a graph — and then the result is positive. So, directed sandpiles. The origin of these, for us, was that I listened to a talk by Per Bak, where he described the sandpile model and said that it also explains how the river Nile undergoes these very huge fluctuations, and so on and so forth — and we did not believe him. So, we thought we would try to disprove what he said: we tried to construct a model where his argument does not work, and it turned out that the model was much simpler than the original sandpile model, and we could actually solve it exactly without too much work; it will be done within the remaining hour. That is what I want to describe to you, but let me first pose the problem as it was originally posed. There is a child with a lot of wooden blocks, sitting at the top of a staircase, and he throws the blocks onto the staircase. The staircase looks like this, and the blocks sit like this: a block here, a block here, and now he throws another block. The blocks may sit on top of each other, but we say that 2 blocks on top of each other are unstable: they will not stay; they topple down to the rung below, going to the adjacent sites below, and sit here; and if he throws more, even these will go down. And there is the mother at the bottom of the staircase, playing with the child.
She keeps removing all the blocks which come to the bottom rung and gives them back to the top. So, this is the model, and it is actually a better model of sand piles, because it is well known that in a real sand pile there is a layer which is essentially rigid, with just a top surface that fluctuates a little; the rigid layer below we replace by the staircase, and the fluctuating top layer we replace by these wooden blocks of height 1 or 0. So, it has a random fluctuating height and a spatial extent in this direction. The formal model is defined like this: there is a lattice — the square lattice, but tilted — and z_i = 0 or 1 are the stable heights. If z_i is bigger than 1, then 2 blocks drop down to the 2 sites below: from here, 1 goes here and 1 goes here; from there, 1 goes here and 1 goes here; and so on. We take periodic boundary conditions in the horizontal direction; the width is called M and the depth L: M is the number of distinct sites in one layer, and there are L such layers — this is the 1st layer, the 2nd, the 3rd, the 4th, each with exactly M sites. Is the model clear? [Question.] Yes, except at the bottom boundary: there the blocks leave — the mother puts them in a basket at the top, so that the child can throw them whenever it wants. Any other questions? Now, if you go back and check, all the arguments about the abelian property and so on still work, so the number of recurrent configurations is equal to the determinant of Delta for this problem too. But now I have a nice matrix Delta: schematically, it has 2, 2, 2, ... down the diagonal, -1s above, and 0s below. It is an upper triangular matrix, because everything is connected to things below, and things below have no arrows coming back up.
If a site has a neighbour, it is downward, and that one's neighbours are further down; so the matrix Delta is upper triangular. Upper triangular means a matrix all of whose entries below the diagonal are 0: a site below is not connected to a site above. It is a directed sandpile, and the matrix is unsymmetric now — but that makes things very easy, because the determinant of an upper triangular matrix is just the product of the diagonal entries. So, det Delta = 2^{LM}, and that did not require any work. But 2^{LM} is also the total number of stable configurations; so all 2^{LM} stable configurations occur with equal probability in the steady state. Very good. This problem is therefore much simpler than the previous one, because the steady-state measure is much easier to handle. For example: I take this pile, keep it running, pick some site in the middle, and ask, what is the probability that the height there is 0? It is 1/2, because all configurations occur with equal probability, and the measure is a product measure — is that point clear? What is the joint probability that 2 given sites both have height 0? 1/4. That this site has height 0 and that site height 1? Also 1/4. So, it is very easy to calculate all these marginal and joint probability distributions, which we could not do so easily in the other sandpile problem — there, I said, there is that 10 x 10 determinant; do not ask me to do it here on the board. All right.
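The determinant argument can be checked concretely by building the directed toppling matrix. The sketch below uses my own indexing convention (each site sends its two grains to the site directly below and to its right-down neighbour, periodic horizontally); the specific L, M are arbitrary.

```python
import numpy as np

L, M = 4, 3                      # L layers, M sites per layer, periodic horizontally
N = L * M
idx = lambda l, m: l * M + (m % M)

Delta = np.zeros((N, N))
for l in range(L):
    for m in range(M):
        Delta[idx(l, m), idx(l, m)] = 2            # each toppling drops 2 grains ...
        if l + 1 < L:                              # ... onto the two sites below;
            Delta[idx(l, m), idx(l + 1, m)] -= 1   # grains leaving the bottom layer
            Delta[idx(l, m), idx(l + 1, m + 1)] -= 1  # go to the sink

# layer-ordered, so Delta is (block) upper triangular and its determinant is
# the product of the diagonal entries: 2^(L*M), the number of stable
# configurations -- hence all of them are recurrent and equally likely.
print(round(np.linalg.det(Delta)) == 2 ** (L * M))
```

The same construction works for any L and M; only the bottom-layer rows differ (no off-diagonal entries), which is exactly where grains are lost to the sink.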
So, now I can ask questions about avalanches. I add a grain: what is the probability that it just sits there and does not cause any avalanche? Adding a particle triggers a relaxation process which runs until the pile reaches a stable state; there is some number of topplings, which we call s, and the duration of the avalanche is the time it takes. When the number of topplings is 0, we say there is no avalanche. So, what is the probability that an added grain just becomes stable, with no toppling? One half, because the original height must be 0, and that occurs with probability 1/2. So, Prob(s = 0) = 1/2. What is the probability that s = 1? There should be a toppling, but only one. The height at the site of addition must be 1, so that it topples on addition; it then sends 2 particles down, and both of the sites below must have height 0, otherwise they would cause further topplings. So, this site must be 1, this must topple, and these must not topple, which occurs with probability (1/2)(1/2)(1/2) = 1/8. That was not a lot of work. What is the probability that s = 2? Now, there are different possible avalanches. One is: this site topples, throws out 2 particles, and this site below topples, but nothing further topples. The other possible avalanche of size 2 is: this site topples, the other site below topples, and nothing else. That is all that can happen.
What is the probability of reaching such a configuration from a random starting point? This must be 1, this must be 1, and these three must be 0, giving 1/2^5; and there are 2 such configurations, so Prob(s = 2) = 2/2^5. Then I can go on to Prob(s = 3). The key point is that in each case there is a finite number of graphs to be counted, and you just count them all and you are done. What are the possibilities for s = 3? You could have a chain: this topples, this topples, this topples, and then it stops, with nothing else triggered; or a graph which looks like this, or like this, or like that. Anything else? In principle you might imagine something like this: two sites topple, but the site below both of them does not. That is not possible, because if those two topple, then the site below — whether its height was 0 or 1 — receives two grains and will also topple. So that case is not allowed, and there are only these 4 possible graphs, giving Prob(s = 3) = 4/2^7. And so on. So, I am able to calculate the probability of exactly s topplings for any finite s by a straightforward finite procedure; I did not do it, but I can do it up to 5 just sitting down with a piece of paper, and you can write a computer program which counts all the graphs for a somewhat bigger value of s. So, this particular question — what is the distribution of sizes of events — we have been able to address directly and simply in this problem. But this is finite s; I would like to know the behaviour of this function for large s. So, let us look at that problem now.
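Since the steady state is the iid uniform product measure, the small-s probabilities above can also be estimated by direct Monte Carlo: draw heights 0/1 uniformly, add a grain at the top, relax, and count topplings. This is a sketch with arbitrary lattice dimensions; they only need to be deep enough that avalanches of size at most 3 never reach the bottom.

```python
import random

def avalanche_size(depth, width, rng):
    """Directed sandpile on a tilted lattice, periodic horizontally.
    Heights are iid uniform on {0,1} (the exact steady state); one grain is
    added at the top and the number of topplings is returned.  Sweeping the
    layers top-to-bottom once suffices, since grains only move downward and
    no site can exceed height 3 (so each topples at most once)."""
    z = [[rng.randrange(2) for _ in range(width)] for _ in range(depth)]
    z[0][0] += 1
    s = 0
    for l in range(depth):
        for m in range(width):
            if z[l][m] >= 2:
                z[l][m] -= 2
                s += 1
                if l + 1 < depth:              # bottom-layer grains leave the pile
                    z[l + 1][m] += 1
                    z[l + 1][(m + 1) % width] += 1
    return s

rng = random.Random(2)
counts = [0] * 4
trials = 30000
for _ in range(trials):
    s = avalanche_size(10, 8, rng)
    if s < 4:
        counts[s] += 1
est = [c / trials for c in counts]
print([round(p, 3) for p in est])   # theory: 1/2, 1/8, 1/16, 1/32
```

With 30000 trials the estimates match the exact values 1/2, 1/8, 2/2^5 and 4/2^7 to within about a percent.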
So, I start with this site here, and then it topples, and let us say these two sites topple, and it goes down and this one does not topple. We already noticed that if a site has two upward neighbours which topple, then it also has to topple, whatever happens. So the allowed avalanche clusters are of this type: they have a boundary, they are a set of toppled sites, each toppled site topples exactly once, and the cluster has no holes. So the cluster is fully specified by its two boundaries, the left boundary and the right boundary. What does the left boundary do? When the toppling reaches here, a particle is thrown onto this site, whose height may be 0 or 1. If it is 0, the site is not part of the cluster; if it is 1, it is part of the cluster. So each boundary does an unbiased random walk: with probability half it goes left, with probability half it goes right, as it goes down. And when the left and right boundaries meet, the cluster is finished and the avalanche stops. So the problem becomes the problem of two unbiased random walkers on a line: when they meet, the cluster stops. That problem has been studied a lot already; I did not have to solve it, because the answer is given in Feller. So: the probability that t = 0, where t is the duration of the avalanche, is still half; for s = 1 it is the same 1/8. But in the new formulation the question is: there is a left boundary and a right boundary, and what is the first time the two walkers come together at the same site? T is the duration of the avalanche, how many time steps are required for the avalanche to be over. If s is 0 then t is also 0; when s is 1, t is 1; when s is 2, t is 2; but s and t do not have the same statistics, they are different quantities.
So, later on s can be much bigger than t. Now I am asking: what is the distribution of durations t? And the answer is that Prob(t) equals the probability of first reunion of unbiased random walkers on a line, which is already known. The probability is (I noted this down so that I do not make any mistakes in the algebra) P(t) = [1/(2t+1)] C(2t+2, t+1) 4^{-(t+1)}; refer to Feller for the proof. Is the problem statement clear? We said that the avalanche forms clusters, the clusters have no holes, the boundaries of the clusters do random walks, and you ask for the probability that the two random walkers first meet at time t. Yes, let me write the last part bigger: it is 1/(2t+1), times the binomial coefficient (2t+2 choose t+1), times 4^{-(t+1)}. Check: when I put t = 0, the coefficient is 2, the prefactor is 1, and 4^{-1} gives 1/2; at least at t = 0 the equation works, and you can check that it works for other values of t as well. Exactly. So s is the number of topplings in the avalanche and t is its duration; duration means how many time steps before it is over, where on each time step all the sites on the same layer are toppled together. The point about holes is this: if a cluster had a hole, look at the topmost site of the hole; the two sites above it must both topple, so it must also topple, end of proof. So it is very simple and straightforward, and it says that the probability that the duration of an avalanche is greater than t goes like 1/t^{1/2}: this follows from random-walk results, or from that equation by putting in Stirling's formula.
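Reading the formula in this passage as P(t) = [1/(2t+1)] C(2t+2, t+1) 4^{-(t+1)}, here is a sketch (my own check, not from the lecture) that cross-validates it against the cluster-width process described earlier: a cluster of width w >= 1 goes to width w-1, w, w+1 on the next layer with probabilities 1/4, 1/2, 1/4, since the two edge sites topple independently with probability 1/2 each, and the avalanche ends when the width hits 0.

```python
from fractions import Fraction
from math import comb

# Closed-form first-meeting distribution quoted in the lecture (see Feller):
def p_formula(t):
    return Fraction(comb(2 * t + 2, t + 1), (2 * t + 1) * 4 ** (t + 1))

# Exact dynamic programming over the width of the avalanche cluster.
def p_width_chain(tmax):
    half, quarter = Fraction(1, 2), Fraction(1, 4)
    probs = {0: half}                  # the added grain causes no toppling at all
    g = {1: Fraction(1)}               # width distribution, given the apex toppled
    for t in range(1, tmax + 1):
        # duration exactly t: alive with width 1 at layer t, then width drops to 0
        probs[t] = half * g.get(1, Fraction(0)) * quarter
        g2 = {}
        for w, p in g.items():
            for dw, q in ((-1, quarter), (0, half), (1, quarter)):
                if w + dw >= 1:        # width 0 means the avalanche has died
                    g2[w + dw] = g2.get(w + dw, Fraction(0)) + p * q
        g = g2
    return probs

probs = p_width_chain(12)
print(all(probs[t] == p_formula(t) for t in range(13)))   # True
```

The first few values come out as P(0) = 1/2, P(1) = 1/8, P(2) = 1/16, matching the direct counting done earlier.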
Now, if you take a big avalanche of duration t, the width of the cluster will be of order t^{1/2}, and so s goes like t^{3/2}. We said the cluster is formed by two random walks which meet at time t; on average the width of the cluster goes as t^{1/2}, and the total number of topplings goes as width times height, so it goes as t^{3/2}. So s ~ t^{3/2}, and the probability that the number of topplings is greater than s goes as 1/s^{1/3}; this relation is also exact. So we have exactly determined the distribution of event sizes for this directed sandpile model. Now, in the remaining time, what should I do? Yes, please... I do not know about that; I am sidestepping the question, we do not want to discuss it now. This model is a special case of compact directed percolation; "compact" refers precisely to the fact that there are no holes. If you take directed percolation in which there are no holes, it is the same model. So now what we would like to do is to describe other models which are equivalent to this one. DASM is the directed abelian sandpile model: "directed" is just the fact that the delta matrix is upper triangular and the bonds are directed. Firstly, you can do this in higher dimensions; there is no problem, the extension of the results to higher dimensions is straightforward, not trivial but straightforward, so I will not do it here. Then there is a model in the literature called the Scheidegger river network; it is a very nice and interesting model, proposed by a hydrologist to describe the shapes of river networks. So let us backtrack a little.
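The chain of exponents in this paragraph can be written out as a short calculation:

```latex
\begin{aligned}
P(T > t) &\sim t^{-1/2} \quad\text{(Stirling's formula applied to } P(t)\text{),}\\
\text{width} &\sim t^{1/2}, \qquad s \sim t \cdot t^{1/2} = t^{3/2},\\
P(S > s) &= P\!\left(T > s^{2/3}\right) \sim \left(s^{2/3}\right)^{-1/2} = s^{-1/3}.
\end{aligned}
```

So the avalanche-size tail exponent 1/3 follows directly from the duration exponent 1/2 and the compact-cluster geometry.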
So, the general idea of river networks is that there is some landscape, some rain falls per year, and of the water that comes down some evaporates and some goes underground, but the rest flows off into the sea. How does it get to the sea? It flows downhill and forms tiny streams, which merge to form bigger streams, which merge to form still bigger streams, until you have big rivers, and finally the river reaches the sea. These networks are called drainage networks. You imagine that this region has water coming down, all of it has to drain into the sea, and you ask what path the water takes. The answer is that the path is usually a tree graph. Once in a while there are rivers which split: in Paris, I think, there is an island in the middle of the river, and so on. Such structures are actually quite common in Bangladesh, where the river has many channels which cut and recombine; in delta regions generally you get a different structure where different streams can rejoin. But in most other regions the streams join together and do not break up: one river does not become two rivers; two rivers become one river, most of the time. This model only describes the first process, in which small rivers join up to form big rivers, which is roughly true in mountainous areas. So I am only pointing out that the model is not perfect, but it is an approximate, interesting, good model. What Scheidegger said was: imagine a landscape with a uniform slope down, so we do not worry about the fact that some regions have higher slope than others. All the water which comes to a point has to go down, but there is also rain on each area.
So some more water joins it and then it goes down; this excess rain per unit area is assumed to be uniform: there is uniform rainfall over the whole land area. Now we represent the land by a grid, as big as you like. At each node one unit of water arrives per year, and this water flows down. Where does it go? It can flow down-left or down-right with equal probability; the river channel formed somehow by history, and the water just flows along the channel. So the model assumes that at each node there is a particular downward direction, chosen at random, along which the water goes: here it goes this way, here that way, here that way, and so on; you assign a random downward direction to each node and then look at the structure of the full graph. Down here is the sea. The claim is that what you get is a spanning tree graph, because from each node there is a unique path to the sea. So this describes a river network, and it was proposed as a fair model of river networks in areas with a reasonable slope. Now, what can I say about this network? Well, I can compute the flow out of each node. This node has flow 1, this node has flow 1, this one has two streams coming in plus its own rain: here the flow is one unit, here three units, here one unit; this one will be 3 plus 1 = 4, then plus 1 = 5, and so on.
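The construction just described can be sketched in a few lines (a toy version of my own; the grid sizes and the periodic horizontal boundary are my choices, not from the lecture): every node drains to one of its two lower neighbours at random, every node contributes one unit of rain, and one can verify that every unit reaches the sea, which is the spanning-tree property.

```python
import random

random.seed(1)
L, T = 50, 50                  # grid width (periodic) and depth; illustrative sizes

# Each node independently drains to its lower-left (0) or lower-right (1) neighbour.
drain = [[random.choice((0, 1)) for _ in range(L)] for _ in range(T)]

# Flow out of a node = its own unit of rain + everything flowing in from above.
flow = [[1] * L for _ in range(T)]
for t in range(T - 1):
    for j in range(L):
        flow[t + 1][(j + drain[t][j]) % L] += flow[t][j]

into_sea = sum(flow[T - 1])    # the bottom row empties into the sea
print(into_sea == L * T)       # all the rain reaches the sea: the network spans
```

Because each node has exactly one outgoing edge and all edges point downward, the union of the drainage edges is a forest rooted at the sea, and the conservation check above confirms it.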
You can calculate the total flow down the river at any node, and at any node there is a region called the catchment basin of that node: all the water which falls in this region goes down the river through this point. So, can we say something about the statistics of this network? There are some standard notions in hydrology, like the Strahler rank of a river. The Strahler rank is defined like this: in a network of rivers, all the branches which end somewhere, the leaf branches, are called rivers of rank 1. When two rivers of rank 1 join, they form a river of rank 2. When a river of rank 2 is joined by a river of rank 1, it is still a river of rank 2; it does not change its name. But when two rivers of rank 2 join, they form a river of rank 3, and so on. So you start with tiny rivulets, which form rivers of rank 1, rank 2, rank 3, and so on, and given a big network you can ask: how many rivers are there of rank 1, of rank 2, of rank 3? There is a very interesting law, called Horton's law, which says that the number of rivers of rank r divided by the number of rivers of rank r + 1 is a number like 4.3, independent of r, whichever network you pick. It has been tested a lot against real observations on real maps, and it is interesting to understand where this law comes from. Now, suppose I look at the catchment basin of the river leaving a particular point. The claim is that the statistics of this catchment basin is exactly the same as the statistics of the avalanche clusters of that shape. The proof we will give is by construction.
So: what is the probability that I pick this site and nothing is coming into it? It is the probability that this upper site has an arrow going that way and this other upper site has an arrow going the other way. What is that probability? 1/4. So the probability that a site has no river coming into it is 1/4, and the flow out of that point is then 1. So, what is the probability that a node picked at random has outflow 1? In the network, sometimes a lot of river water is flowing down, sometimes less. The probability that the water flowing down is exactly 1, nothing coming in and only the local unit going out, is 1/4, as we just calculated. Very good. Then what is the probability that the flow out equals 2? The other possibility is that there is exactly one stream coming in, carrying one unit, and nothing else; then the flow out is 1 plus 1, which is 2. For a site picked at random to have two units going out, it must have a catchment basin of two units, because all that water has to flow down. What is the probability that the catchment basin is two units? It can be like this, or like this. And what is the probability of each? This contributing site should have nothing coming into it, so the arrow here should be this way, the arrow here should be this way, this arrow should be this way, and that is all. So I can calculate these explicitly, and you will see that you encounter exactly the same graphs as before: if I want the probability that the cluster is of size 3, I had this graph and that graph and that graph; you count them, and they are exactly the same.
So the statistics of catchment basins in the Scheidegger network is exactly the same as the statistics of the avalanches in the 2D directed abelian sandpile model. So we have solved the problem in some sense: the probability that the flow out is bigger than s goes as s^{-1/3}, because it is the same as the probability that the number of topplings caused by an avalanche is bigger than s. So it is a non-trivial result about a model which was already known. The Scheidegger model is from around 1950; the lattice model was made by Scheidegger himself, but hydrologists were mostly working with field data and did not analyze it this way; now you can actually show that it has this distribution of sizes. Okay, I still have time, so I can do something more. This Strahler ranking of rivers is also valid for other networks which are not directed, and one can study them in more detail, which I will not do just now; instead let us do the Takayasu aggregation model, which was already defined. Yes, sorry, no: if you have a river like this, the rank along here is always 1, 1, 1, and here the rank becomes 2, 2, 2. So rank is a different statistic: the flow out changes along the river, but the rank does not. In the Scheidegger network there are always some nodes which do not get anything from above; such a node is a leaf node, it has no input and only sends something out, and the density of such nodes is 1/4: with probability 1/4 a site has no upward neighbour draining into it. So a lot of sites have just one tiny stream, with one flow unit coming out; the rank of these nodes is 1 by definition. Then you have to look at the statistics of how they merge.
So a river may stay rank 1 because nothing joins it; if something joins it, it becomes rank 2. So you have to look at the topography of the full graph before you can assign ranks: rank is a non-local statistic, it cannot be decided on the basis of a local measurement. Rank is like a division or a class, a broad classification, like first class, second class, third class. So let me draw a network. This is rank 1, 1, 1, 1; this is 2, and this river all the way down is rank 2; this is 1, 1, 1; this joins, and it is still rank 2; this is 1, 1, 1, still rank 2; this is 2, 2, 2; this is 1, 1; this is 2; this is 1; this is 2; but this is 3, and it continues as 3, and this is 3, like that. So the rank is not attached to the nodes at all. Let me give you some logic behind this. In India there is a very famous river, the Ganga. In the beginning it has small streams: there is a stream called Alaknanda and a stream called Bhagirathi; they have two names, but when they join up, the river is called Ganga; the river has a different name after that, and neither of the two small streams qualifies for the name. So there is a point to this: if a big river is joined by a narrow stream, a nala, just the drainage from one tiny house, we do not change the name of the river; it keeps the same name. But if two big rivers join, then you perhaps want to change the name. It makes sense: the rank is like the name of a river, which is conserved under tiny perturbations but not under equal ones, because when two rivers of the same rank join, which one would keep its name? So we just give the new river a new name.
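The ranking rule described here is easy to state recursively, and a minimal sketch makes the non-local character explicit (the tuple representation of the river tree is my own choice):

```python
# Strahler rank of a binary river tree, as defined above: a leaf stream has
# rank 1; when two tributaries of equal rank r merge, the result has rank r + 1;
# otherwise the larger rank survives (the big river keeps its name).
def strahler(tree):
    if not isinstance(tree, tuple):      # a leaf: a source stream
        return 1
    left, right = (strahler(t) for t in tree)
    return left + 1 if left == right else max(left, right)

# Two rank-1 streams merge to rank 2; a rank-1 joining it leaves it rank 2;
# two rank-2 rivers merge to rank 3.
r2 = ("a", "b")
print(strahler(r2), strahler((r2, "c")), strahler((r2, ("d", "e"))))  # 2 2 3
```

Note that the rank of the root can only be computed after the whole tree is known, which is exactly the non-locality pointed out in the lecture.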
So that is the logic behind the rank. Yes, catchment basin: suppose there is a river network and you look at a site; all the water which flows out of this site came in as rain somewhere. The whole area whose water flows down through this point is called the catchment basin of the river at that point. So now we want to go to the Takayasu model, and there was a question. The Takayasu model we defined already: there is a lattice, at every site you add one particle at each time step, and then the particles jump to neighbours at random and merge. So initially, say I start with everything empty; then you add one particle at each site, and this particle jumps here, this one jumps there, this one here, this one there; now there are two particles here, but you add one more, so it becomes three; then these jump, this one goes here, this one goes there, this one did not get anything, so it has only one particle coming in; and so on. So this is the Takayasu aggregation model, describing the dynamics of aggregating masses on a one-dimensional line. But the way I described it, it is the same as the river network model: you just have to identify the time direction with the downward flow of the river. If you go from the top level to the next level to the next, it is exactly the same dynamics. So now I can ask: in the Takayasu model the mean mass keeps increasing with time, but if I look at a particular time, what is the probability that a particular site has mass exactly one at the end of this process? Say the process is: in each time unit you add, then you jump, then you coalesce, and I want to measure what happens now.
So I have a long line, and I ask how many of the sites on this line have mass 0, no particle there. The answer is 1/4, because for a site to stay empty, whatever was on the left neighbour must have chosen to jump away, and likewise for the right neighbour; nothing jumps in, so the mass is 0 now, and the probability of that is 1/4. So the mean mass may be very big, but a fraction one fourth of the sites have mass 0. Then what fraction of sites have mass exactly 1? The same calculation which we did before, and which I will not repeat, shows that a finite fraction of sites have mass exactly 1, a finite fraction have mass 2, and so on. So this Takayasu aggregation model is the same as the directed sandpile model. As a model of SOC, what it says is that the probability distribution of the mass m at a site, as a function of m, has a power-law tail, which we can now say is m^{-4/3}: the cumulative probability falls off with the one-third power, so the mass distribution goes as m^{-4/3}, and the mean mass is infinite. If you work at finite time, the distribution looks like this: the maximum mass increases with time, and the first moment increases linearly with time. In the steady state, which is not fully defined, this distribution tends to a finite limit, and if you define that as the steady state, then that is the distribution. What are the big events here? I sit at a place and ask: if some big mass arrives there, that is a big event; at the next time step it leaves, and it may come back after 2 years.
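The empty-site fraction of 1/4 is easy to confirm in a direct simulation of the Takayasu dynamics (a sketch of my own; ring size, number of steps and seed are arbitrary; the slow-converging m^{-4/3} tail is not checked here):

```python
import random

random.seed(3)
L, steps = 20000, 50
mass = [0] * L
for _ in range(steps):
    mass = [m + 1 for m in mass]       # inject one particle at every site
    new = [0] * L
    for j in range(L):                 # every pile hops left or right at random;
        new[(j + random.choice((-1, 1))) % L] += mass[j]   # piles landing together merge
    mass = new

empty = sum(m == 0 for m in mass) / L
print(empty)                           # close to 1/4
```

After injection every site is occupied, so a site is empty after the hop exactly when both its neighbours hop away from it, which happens with probability 1/4, independent of how large the masses have grown.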
So that is the time interval between big events, and you can ask for the probability that at any given time there is a big mass somewhere, and so on. So this is an interesting model, it tells us something about SOC, and it is the same as a simple model we have already solved. The same thing happens for the Takayasu model in 2D, or the Scheidegger model in 2+1 dimensions. What is a 2+1-dimensional river network model? You have a volume through which the rivers have to flow. This is actually well known: it is the blood circulation network. You have tissues in the human body, made of cells; each cell needs nutrients, and the nutrients are provided by blood. So there is a network of capillaries which provides each cell with some blood. Here is my mass of tissue, some muscle, and here is my nutrient supply, which has to reach all the cells; it forms a network of capillaries reaching every point. This network is a spanning tree, now in 3D. So the statistics of such networks is interesting for understanding the capillary networks of veins and arteries in animals. Let me just point out that in this setting we distinguish between supply networks and drainage networks. Supply networks are the ones which supply something, and once it is delivered, you do not worry about it; it is like the postal service: there is a lot of post, and the postman has to come and deliver something to each house. But there is a second kind, drainage, which we already introduced, which deals not with supplies but with refuse: you have a lot of waste product which has to be collected. Think of garbage collection networks: all the garbage has to be collected by trucks which take it somewhere else, and so on.
So we are distinguishing between supply networks and collection networks: there are veins and there are arteries, and I want to distinguish between the two. If you look at just the veins, they form a network like this; if you look at just the arteries, they also form a network like this. But if you put them together, you get confused: it may be that a biologist cannot always tell whether a particular capillary is inflow or outflow. In this simple model we do distinguish; how to deal with that confusion in practice we will not discuss. Very good; now I have only seven minutes left. Yes, please... Not exactly. The main difference is that here there is a constraint: spanning tree means every site should be covered by the network, while in electrical discharge there is no such requirement. The electric spark goes from there to there; all the other places have no spark, and that does not bother them. In our problem we have insisted, because of various physical constraints, that every site should have a part of the tree reaching it; it has to be a spanning tree, and if you do not have a spanning tree, you do not necessarily have this structure. People have argued that river networks are like electric breakdown: some water goes down, it can go further left or right, and just moves like that. I am not particularly fond of those models; they have not worked very well in the past. It is one thing to propose a model because you like it, and another to make sure that it is actually realistic. So I do not honestly think the electric breakdown models are good models of river networks; a spanning tree model is philosophically definitely better, because we say that every site receives some rain and must have a drainage out.
So what people say is: if I take a map of some country and look at the river network on the map, I do not see a spanning tree; the rivers do not span the whole map. No, because it would be too cluttered. In real maps, people only draw rivers of rank bigger than some cutoff; they do not draw the tiny rivulets, the tiny drainage from a city where each house has one drain out. In a typical map you will only draw rivers of rank bigger than 5; if you want more detail, you draw ranks bigger than 4. So if you take a river network and delete the rivers that are too small, say with flow less than so many cusecs of water, then you do not get a spanning tree, but what you have is the original spanning tree with some edges deleted and others kept. With that in mind, you can study the statistics of these pruned graphs as well, which is a little more complicated, but it is the same kind of question: how many sites have an outflow bigger than 100 units? That is the question we have already addressed. Yes and no. On the whole, if you look at actual geographical maps of rivers, you can check that they do not seem to fill all space. The reason is twofold: sometimes the data is sparse, in very mountainous regions there are tiny rivulets for which I have no data, the river is not always flowing, and all those kinds of problems.
So sparseness of data is one reason, but, as we said, if the rivers are small you do not draw them, and you can easily check that this is not the same thing as coarse graining: coarse graining alone will not turn a spanning tree into a non-spanning tree; you have to put in the extra constraint of flow, or rank, bigger than some threshold, and then you get a non-spanning tree graph. Okay, last item: fragmentation patterns formed by propagating, branching and merging cracks. We imagine that there is some 2-dimensional sheet which, under stress of various sorts, develops cracks; these cracks propagate, and as they propagate they can branch into two, the branches can propagate, and if a branch meets another crack, they merge; if two branches come together, they merge. This keeps going, and you get some pattern of fragments of the original material. You can look at crockery, see the crack patterns on it, and ask for the statistics of these fragments; that is a reasonably interesting question. There are other fragmentation patterns: you look at land, and when the water dries the mud cracks, and you get crack patterns in mud; I guess all of you are familiar with those, so I am not showing pictures. Very good; we want to understand the statistics of these cracks. It turns out that there is one particular system of real interest: glaciers which terminate in the sea.
So, in the north, near the poles, there are a lot of glaciers full of ice, and there is the sea. What happens is that this ice keeps developing cracks, pieces break off into the sea, drift away and melt somewhere, and the water joins the ocean. And there is a concern that the ice in the polar regions may be melting too much; if it melts very fast, sea levels will rise and a lot of coastal cities will be submerged, and so on. So it is important in climate-change models to understand the sea level rise expected in the next few years, and for that to figure out how much ice melt goes into the sea. So there is a model of glaciers breaking off into the sea, and these fronts are called calving termini. "Calving" comes from "calf": there is a mother cow, the baby is the calf, and the small part which breaks off the glacier is called a calf; this breaking process is called calving, and it occurs at the termini of glaciers. That is what we are trying to model. The model, which was already proposed by other people, is the following: here is my glacier and here is my sea. A crack forms somewhere near the boundary; let us not worry about where. Then it propagates roughly along the front, parallel to the sea, because this side is warm and that side is cold. But it diffuses a little bit: it can shift a little to the left or right, it does a random walk of sorts. It can also branch: with some finite probability a crack becomes two cracks, which both propagate, and then this can become three cracks, and so on; and if two cracks come together, they can merge. That is all we want to model.
So I take a finite system; my time is up, let me take three minutes. Here is my system, here is my time axis: there is a crack which propagates, branches, rejoins, branches again, rejoins, and so on. Once the crack front has passed, you are left behind with a fragmentation pattern: the glacier behind it is fragmented fully into small bits. The sizes of these bits are what is important, because each fragment falls into the sea as one piece, and the rate at which these fragments form is what we are trying to study. So the model is: cracks branch at some rate, diffuse randomly, and join up when they come together. Now, what is the steady state of this process? The point is that even if you start with only one crack, as time evolves the system reaches a steady state with a finite density of cracks, because if there are few cracks they branch, and if there are too many cracks they merge. That steady-state density is a function of the branching rate lambda: if the branching is small, there will be few cracks. In this steady state the pattern keeps evolving, and the fragmentation pattern is independent of the initial condition; you can then show that it has some distribution of fragment sizes, and that distribution is what we want to determine. The answer: if lambda is very large, the pattern is not easy to study; but if lambda tends to 0, that is, if the branching rate is small, then the statistics of these fragments is like the statistics of the avalanche clusters in the sandpile, because the boundaries do random walks until they merge. There is only a small correction I have to add, due to the fact that branching biases the boundary motion.
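The balance between branching and merging can be seen in a toy simulation of the crack fronts (entirely my own sketch: the ring lattice, discrete-time update, branching probability and sizes are all assumptions, not from the lecture). Starting from a single crack, the density of crack tips settles at a finite, lambda-dependent value.

```python
import random

random.seed(4)
L, steps, lam = 1000, 2000, 0.1     # ring size, propagation layers, branching rate

cracks = {0}                         # start from a single crack tip
for _ in range(steps):
    new = set()
    for x in cracks:
        y = (x + random.choice((-1, 1))) % L      # each tip diffuses one step
        new.add(y)                                 # tips landing together merge
        if random.random() < lam:                  # branching: a second tip appears
            new.add((y + random.choice((-1, 1))) % L)
    cracks = new

density = len(cracks) / L
print(density)    # a finite steady-state density of cracks survives
```

Merging can never empty the system and branching replenishes it, so the density neither dies out nor fills the ring, which is the mechanism described in the lecture.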
So here the left boundary is more likely to go right, because if it branches, one of the parts it creates will lie to the right. With this minor modification, if you study the model, the distribution of fragment sizes can be interpreted in terms of the previous model we have studied. So we will leave it at that. Actually, it turns out that you can simulate this on a computer and compare with published pictures of cracks, and the pictures seem to match quite well. So it is not an awful model. We will leave it here.