which is beginning to be of more interest, especially in the neuroscience community: the connection between the whole process of synchronization and physiological processes could be of real significance and interest. I don't know a lot about the neuronal avalanches business, mostly because I don't work in the area, but I thought this is an important type of collective behavior, because thinking is a collective behavior on the part of the various parts of our brain, at least at some ideological level, without really knowing much about the mechanisms that go on. As a precursor to that, let me just briefly set the stage. In a sense, this idea of self-organized criticality, of looking at avalanches and so on, has its origin in the following statement of Mandelbrot, where he points out the obvious: clouds are not spheres, mountains are not cones, coastlines are not circles, the bark of a tree, like the one you see over there, is not smooth, nor does lightning travel in straight lines. What he was alluding to, and this is an early statement, written in his 1982 book, is the unfortunate habit of physicists of idealizing shapes, particularly if you want to take an integral over a surface. Say you want to integrate the heat coming out of a cow; you start with the approximation that the cow is a sphere, all right? The spherical cow approximation is familiar to all of us. We've seen this being done: to first approximation, the universe is uniform, homogeneous and infinite in all directions, and only then do you start worrying about the actualities. So what Mandelbrot was pointing to is that the fact that a cloud is not a sphere tells you that the distribution of sunlight reflected from a cloud, the albedo, is not going to be the same as what you calculate for a spherical cloud. 
So you have to worry about the fact that the cloud has features on all scales. So he introduced the idea of fractal geometry. And there's another statement up there: there was important work by Pietronero in the early 90s, where they calculated the dimension of the observable universe and found that it was approximately two. As opposed to a three-dimensional universe: if you just look at the light sources, that is the stars, if you just look at the star catalog and ask what the dimension of that set is, it turns out to be closer to two than to three. All right, now this business of having structure on all scales is shared by many natural objects, and that's a picture of a broccoli. In a very influential paper in 1967, Mandelbrot again drove home the point that the length of an object, not of every object but of some objects, can depend on the scale at which you measure it. So he had this rather nice title: how long is the coast of Britain? If you take a map of Britain and you use a ruler of length 250 kilometers, then you'll find that the length of the coastline, going all the way around it, is only about 2,500 kilometers. As you reduce the ruler, the length actually increases, and by the time you go down to a 25-kilometer ruler, the length has almost doubled. You can imagine that if you go to a one-kilometer ruler, it'll get longer and longer. But then Mandelbrot turned the question around and asked: what is the scaling behavior of this length with the scale? So take simple geometric objects like the line, the square, or the cube. I know many of you will have seen this umpteen times, but just for the few who may not have: the length of a line does not depend on the scale used to measure it, because if you measure it with a unit of one, the length is one; if you measure it with a unit of a half, there are two halves, so that's again one. 
With one third, there'll be three of them, so it's still one, and so on. So the number of units just goes as the scale to the power minus one, right? If I take the square: if I use a unit of one, the area is one; if I use a unit of a half, the area is four of those; if I use a unit of a third, the area is nine of those. So there the scaling is r to the minus two. And for the cube, the same formula gives a scaling of r to the minus three. So the dimensions are just one, two and three for these familiar objects. Okay, so we have these integer numbers one, two and three, and there are different ways of defining dimension, but I'm going to use this one, because it extends to other objects. Take what is known as the Koch snowflake. For the Koch curve, if I take a unit of length one, my length is one; if I take a unit of one third, my length is four thirds, because I need four of those units. If I take a unit which is one ninth, I need 16 of those units, and so on and so forth. I just continue this process indefinitely, so the length unit at stage n is going to be one over three to the n, yeah? The construction is this: you take the initiator, remove the middle third and put a hat on it. So if I take this as length one, this piece is now one third, and the whole thing is four times one third, okay? This r over here is just telling you by what scale you are reducing the unit, all right? Now if I take this line over here, remove the middle third and put a hat over there, then I get this, and I do that for every line segment. See, this entire thing is here and this entire thing is like that, all right? So over here, the unit length is one ninth and there are 16 of them, so the total length from here to here is 16 over nine. The initial length was one. 
The second time the length is four thirds, the next is 16 over nine, and so on and so forth. If I go to the nth stage, the total length is going to be four thirds to the power n. So the length of this curve, at the infinite stage, is infinite; it diverges, right? But the way in which it diverges is governed by this ratio of four over three. So when I put this into the formula, the dimension that I get is the log of four divided by the log of three, and this is not an integer. And what we notice is that this curve basically looks like that curve, just scaled down, right? So you can compute the dimension of the Koch snowflake. Yeah, this is just the definition that we applied to the line, the square, the cube, et cetera; that was the way in which we counted the number of boxes required. Okay, so these are all perfectly mathematically self-similar. What we mean by self-similar is that this part basically looks like the whole, shrunk by a particular scale. And this part over here, multiplied by three, gives you this curve; multiply that by three, you get this curve, again and again and again, yeah? This is the feature of having structure on all scales. See, the thing about a line or a square is that once you keep expanding it, you don't get anything particularly new; it is self-similar, but trivially so. Whereas curves like this, if you stretch them out, they look like themselves, and stretch them out again, they look like themselves, and so on and so forth, okay? As for the question: this R is not the scale at which you're measuring; the scale is one over R. If I take the whole thing to be one, then when I say R is equal to three, I mean that the scale is one over three. 
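The scaling argument for the Koch curve can be checked numerically. Here is a minimal sketch (illustrative code added for clarity, not part of the lecture): it tabulates the measured length at each stage and recovers the dimension from the counting formula.

```python
import math

# At stage n of the Koch construction the ruler is (1/3)**n and you need
# 4**n segments, so the measured length is (4/3)**n and diverges with n.
def koch_length(n):
    return (4 / 3) ** n

# Dimension from the counting formula D = log N(eps) / log(1/eps),
# with N = 4**n pieces of size eps = 3**-n.
def koch_dimension(n):
    N = 4 ** n
    eps = 3.0 ** -n
    return math.log(N) / math.log(1 / eps)

print(koch_length(1))      # 4/3, the length after one subdivision
print(koch_dimension(10))  # log 4 / log 3 ≈ 1.2619
```

At every stage the same value log 4 / log 3 comes out, since the construction is exactly self-similar.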
When I say R is equal to nine, I mean that the scale is one over nine, and so on, all right? By the time you take logarithms, you'll work it out so that the dimension is a positive quantity, and the formulas will all work out, okay? So D is not an integer but a fraction of some kind. Mandelbrot called these objects fractals, and the value of this dimension over here is something like 1.26. So if you want to think about it: a line has dimension one, so this is thicker than a line and thinner than a plane; it's somewhere between a line and a plane. Now, fractals are not exotic objects, even though they have a dimension which is not an integer, and a number of such objects have been discovered and studied over the years. This is something called the Sierpinski gasket; you can define it in one, two, three dimensions, et cetera. And like I said, the universe itself has a fractal structure. The fractal dimension doesn't necessarily have to be non-integer, although in most cases it is; you can have objects of this kind whose dimension is actually an integer. Now, those were all objects that were constructed recursively. What's shown over here are river basins. I'm sorry it hasn't shown well, but if you just take a map of the earth and you look at the flow of rivers, this kind of scale invariance is evident in some sense. These are not perfectly scale-invariant curves like the Koch snowflake or the Koch curve or the others that I will show; they are statistically self-similar. So if I take this blue river basin and I expand it, it will look pretty much like the rest of the curve. And here are some other examples from physics. This is what's called diffusion-limited aggregation, DLA. 
These are objects that are formed by particles aggregating, like soot; a lot of particles aggregate and grow by a simple rule. You have a particle at some position, call it the origin; put another particle at infinity and let it come diffusing in. If it comes to the neighborhood of this original particle, it sticks. Then you release another particle, and it'll stick somewhere or other, and as they get bigger and bigger, this is how they tend to grow. So this is one which is not really large, but large enough for you to see that one part of it looks a lot like a smaller part of it, except perhaps for that point of symmetry where the initial particle was. Statistically speaking, these look at any scale like each other. So this is a naturally occurring object that has a fractal dimension, and it happens to be 1.7 or something like that. It's not two, even though it sits on a plane: the DLA does not have a dimension of two, but a dimension of something like 1.7, all right? And this is important, because for a cluster with a fractal dimension like 1.7, if you imagined it made out of a metal, what is the conductivity going to be, all right? The answer is not obvious; it does not have to be the same as a wire. A one-dimensional wire or a two-dimensional plane will have a very different conductivity from a DLA, all right? Or ask: what is the area that is irrigated by this river? For a normal river, supposing you just imagine a river of length L; the Nile, say, is almost linear. What is the area irrigated by the Nile? You imagine some distance from the Nile that you can irrigate, and it goes all the way from one end to the other, so the total area will be L times some number, all right? When the curve is winding all the way around, it turns out that the actual effective area increases a lot, because you've got this fractal structure. So the basin of a river of length L actually goes as L to the three halves, okay? 
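The growth rule just described is easy to simulate in a minimal form. The sketch below is my own illustration, not from the lecture: the lattice size, the release of walkers from the edge of a box rather than from true infinity, and the reflecting boundary are all simplifications.

```python
import random

def dla(n_particles, size=41, seed=1):
    """Minimal on-lattice DLA: a fixed seed at the centre; walkers are
    released from the edge of the box and stick on touching the cluster."""
    random.seed(seed)
    c = size // 2
    cluster = {(c, c)}                      # the fixed initial particle
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    while len(cluster) <= n_particles:
        side = random.randrange(4)          # release a walker "from infinity"
        p = random.randrange(size)
        x, y = [(0, p), (size - 1, p), (p, 0), (p, size - 1)][side]
        while True:
            if any((x + dx, y + dy) in cluster for dx, dy in steps):
                cluster.add((x, y))         # it touched the cluster: stick
                break
            dx, dy = random.choice(steps)   # otherwise keep diffusing
            x = min(max(x + dx, 0), size - 1)
            y = min(max(y + dy, 0), size - 1)
    return cluster

cluster = dla(50)
print(len(cluster))   # 51: the seed plus 50 stuck walkers
```

Even with only a few hundred particles, plotting the cluster shows the branched, tip-growing shape described above, because a diffusing walker is far more likely to hit an outer tip than to wander into a fjord.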
So the fractality of objects is important. And for other reasons too: we have a lot of fractals in our body, partly because of the way things have grown. The lung is one good example, and probably the surface of the brain. See, if you take a piece of paper, this is obviously two-dimensional. But if I crumple it up, I can ask what the dimension of the surface is, right? I'll leave it as an exercise; this is actually a fun experiment to do. You see, because now the surface is going in and out and in and out and so on, but the volume is very little. For some of us it's very little, for others it might be a lot. But the surface of this crumpled piece of paper is actually a very good example of a fractal, all right? Apart from making a nice demonstration. To the question: no, you leave one particle fixed, all right? Now I've got another particle that's moving in from infinity; it comes over here and it gets stuck. Then I have a third particle coming from somewhere, and it will also come and get stuck; now I've got a cluster of three particles, okay? Now I start with the fourth particle, that comes here, and this also gets stuck. So each time a particle comes over there and touches, it gets stuck, and then you forget it and release another particle, okay? This is one of the early studies of growth: there's diffusion, and then you have aggregation. So we are looking at how particles aggregate, but they are limited by the diffusion, which means that we don't know exactly which part the walker is going to hit; but as the cluster grows, when a particle comes diffusing in, the chances are that it's going to hit a particle on the edge, so this is how they grow. People have used these kinds of models for everything, stalactites included. 
If you've gone to the Grotta Gigante, you'll notice that the bigger ones grow higher, because the drip comes and hits there, okay? That's not the same model, but I'm just saying that growth models were for a while very interesting and important. You know, why should the lung be a fractal? Because it maximizes the area within a small volume, okay? Our blood vessels, the veins of our body, they are also fractal, et cetera, et cetera. Okay, so how does one actually calculate the dimension of irregular objects? For regular objects, we saw that it was a simple formula. For irregular objects, there is a mindless procedure, if you like. You cover the object with boxes of size L, and say you need N of L of these boxes. So if that is your object, you just count the number of boxes, reduce the size of the box and measure N again, and so on. So N of L goes as one over L to the power of the dimension; that's the formula over there, all right? Let me just write epsilon for the box size: N of epsilon goes as one over epsilon to the power D, or, taking logarithms, log N of epsilon equals D times log of one over epsilon. So the dimension D is log N of epsilon divided by log of one over epsilon, and then we take the limit of epsilon going to zero. Okay, so this is your formula to calculate the fractal dimension of any object. Yeah, when you have a limited number of points, there are finite-size corrections. This is a game that has been played for the last 40 years, so there are ways of dealing with it. Typically, you may know that an object is fractal, but you're limited by the amount of data that you have, so there are finite-size corrections; I'll show you some examples. Also, things are not fractal down to scale zero, unlike the mathematical examples. Regardless of what I might think is the dimension of this piece of chalk, at some scale, as epsilon goes to zero, I've got nuclei. 
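This box-counting recipe is only a few lines of code. The sketch below is my own illustration (the function names and the least-squares fit are mine): it counts occupied boxes at several scales and fits the slope of log N against log(1/ε). As a sanity check it is applied to an ordinary straight line, where the finite-size corrections just mentioned keep the estimate slightly below the true value of 1.

```python
import math

def box_count(points, eps):
    """Number of boxes of side eps needed to cover a set of 2-D points."""
    return len({(math.floor(x / eps), math.floor(y / eps)) for x, y in points})

def box_dimension(points, scales):
    """Least-squares slope of log N(eps) against log(1/eps)."""
    xs = [math.log(1 / e) for e in scales]
    ys = [math.log(box_count(points, e)) for e in scales]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# sanity check on an ordinary curve: a straight line should give D close to 1
line = [(t / 10000, t / 10000) for t in range(10001)]
d = box_dimension(line, [0.1, 0.05, 0.02, 0.01])
print(d)   # a bit below 1, due to the finite-size corrections
```

The same two functions, fed the pixels of a coastline map or a DLA cluster, give the log-log plot described below.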
This is an example I usually give when talking about fractals, so let me just very quickly do it. If I ask you to look at this from over there, does this object look one, two, or three-dimensional? It looks one-dimensional from there, right? Now if I come a little closer, so you can see that this is a nice solid object over here, what would you say the dimension is? Three, right? Okay, now I come a little closer and bring it really close; I'm not going to touch you, don't worry, or hit you. But if you look at it from there, all you really see is the surface, so it is two-dimensional. Okay, I'm giving you the answer that I want. And now imagine I shrink you all down: you've become really small and you're going into this object. When you go inside, you've got this calcium carbonate all around you, and this is again three-dimensional. Now you shrink even more: it goes down to maybe one-dimensional, because you've just got bonds going off in various directions. Go even smaller: all you've got is atoms, points, so there you are, down to zero dimensions. So one, three, two, three, one, zero, et cetera: this dimension that we imagine can really depend on the scale at which you are measuring things. So this is the unifying formula. It just says that as epsilon goes to zero, this is the thing, but I may not be able to take epsilon to zero; I may have to stop at a finite cutoff. So you have power laws, right? This is an example of a power law, because it says that N of epsilon goes as epsilon to the minus D, right? So the idea is you take a map, say if you want to find the coastline of Britain: you count the number of boxes, reduce the box size, count the number of boxes again, and so on and so forth, and then you'll be able to tell what the dimension of that object is. All right? 
You leave out any box that doesn't include a part of the coastline, yeah? In the mathematically more careful formulations, you cover the set with balls of arbitrary size and then you reduce the size of all the balls, so the definitions are slightly different from the procedural one that I'm giving you, okay? And once you've got this data, N of L for a variety of Ls, you plot it on log-log paper, you get some number, and you say that's the dimension, 1.76 or whatever. Now, it turns out that fractal geometry is very useful in the analysis of nonlinear dynamical systems. The other day, when we saw the Lorenz attractor: it has a fractal dimension of 2-point-something. The DLA over here is 1.7, right? You look at basins of attraction, you look at the Mandelbrot set; wherever you look, you find fractals, okay? There's even a book called Fractals Everywhere, right? Ferns, this, that, and the other. Now, here is the last example, I hope, that I'm going to show you: the Cantor middle-third set. You start with a line, remove the middle third, then remove the middle third of each remaining piece, and so on. As you go on down to infinity, you don't end up with nothing; you get a set of points. And this set of points, using the same procedure that we did, has a dimension of log 2 over log 3, which is less than 1. Part of the reason for bringing this up is that it is similar to the first example that I talked about, the circle map. For the circle map at k equals 1, if you look at the set of points left over by the rational lockings, they also form a set like this. There, the dimension is 0.87; here, I think the dimension is something like 0.6. It doesn't matter; the point is that these fractals appear everywhere. I'm not going to discuss this further. 
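For the middle-third set the counting can be done exactly: after n removal steps there are 2^n intervals of length 3^-n, and the formula gives log 2 / log 3 directly. A small sketch (added for illustration; the helper function is mine):

```python
import math

def cantor_intervals(n):
    """Endpoints of the 2**n intervals left after n middle-third removals."""
    intervals = [(0.0, 1.0)]
    for _ in range(n):
        new = []
        for a, b in intervals:
            third = (b - a) / 3
            new.append((a, a + third))      # keep the left third
            new.append((b - third, b))      # keep the right third
        intervals = new
    return intervals

n = 8
N = len(cantor_intervals(n))     # 2**n boxes of size 3**-n cover the set
eps = 3.0 ** -n
print(math.log(N) / math.log(1 / eps))   # log 2 / log 3 ≈ 0.6309
```

Unlike the numerical box-counting estimates, this comes out exactly at every stage, because the set is perfectly self-similar.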
So the basic idea is that fractals naturally have this kind of scaling that you saw over here: the number of boxes goes as epsilon to the minus d. And a number of physically interesting properties also show this kind of scaling. What is interesting about this scale invariance is the following. Suppose I take measurements of some property y at two scales, where x is the scale at which I'm measuring; y could be the conductivity, it could be, as we will see, the number of neurons firing in an avalanche, cortical activity and so on, that's what people talk about. If y depends on x as a power law, y goes as x to the theta, then it has a unique property: the ratio of the two measurements depends only on the ratio of the scales. So for example, if x1 is 2 times x2, then y1 over y2 is just equal to 2 to the power theta. If x1 is 200 and x2 is 100, or x1 is 20 and x2 is 10, or any pair with the same ratio, that ratio of measurements will always be the same, regardless of which absolute scale you're measuring at. Because the absolute scale falls out, this is an interesting feature of such systems. Now, where does self-organized criticality come into all this? It's a natural question. Wherever I look, I'm supposed to find fractals. That tree outside the window, that's fractal. The river that flows down to Monfalcone through the mountains, that's fractal. All the rocks around you, the mountains, if you look at the slopes, those are fractal. Clouds are fractal. If everything is fractal, what is making them fractal? That is, what is the physics of fractals? It's one thing to say, well, it's not a circle, it's a fractal of some kind; but what is the process that gives rise to fractality? In trying to answer this question of where these power laws come from, I'm not going to worry about fractals so much; let me just say that things go as power laws. 
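The ratio property is easy to see numerically. In the sketch below (added for illustration) theta is a made-up exponent; any value shows the same behavior:

```python
theta = -1.5   # hypothetical exponent, purely for illustration

def y(x):
    return x ** theta        # a pure power law, y = x**theta

# the ratio of two measurements depends only on the ratio of the scales
r1 = y(2) / y(1)
r2 = y(200) / y(100)
r3 = y(20) / y(10)
print(r1, r2, r3)            # all equal to 2**theta ≈ 0.3536
```

Had y been, say, an exponential instead of a power law, y(200)/y(100) and y(20)/y(10) would differ: the absolute scale would not fall out, and there would be a characteristic scale in the problem.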
Where do power laws come from? This is really the same question. And here is an iconic picture from Bak, Tang and Wiesenfeld. In the mid-80s, Per Bak, Chao Tang and Kurt Wiesenfeld said, well, the idea of these power laws can be captured by a sand pile. Now, of course, you notice that there are many fractal objects over here. There is your mountain: it's not a cone, but a fractal. The coastline is a fractal. And I think in the original picture there's a cloud, which is also a fractal. So Bak, Tang and Wiesenfeld invented a simple model that would give rise to power laws. There's a basic idea over here: if these power laws in nature are there, they cannot be too special. They cannot come about because something is tuning the system to a critical point, because one place where physicists know that there are lots of power laws is in second-order phase transitions. They know that at these continuous phase transitions you have power laws; there are structures at all scales and so on. So maybe the universe is at a critical point, but this critical point is achieved without special tuning. Are there systems that will automatically reach a situation where you have a power law, with nothing special done to them other than some very simple rules? The model that they came up with is that of a sand pile. I'm not going to bring actual sand, right? So that was the only demonstration. Now, if I have a flat surface and I just keep adding sand, at some point it will reach this shape, and the more sand I add, the shape doesn't change, right? Here is your flat surface; I just keep on adding, and once I reach this particular shape, if I add more sand, the structure becomes unstable and sand just flows away. 
If you've played with sand on a beach, you've noticed that unless you pack it in tightly, you cannot make a sand pile steeper than a certain shape. It has some natural shape. That natural shape will depend on the kind of sand, on the humidity, on various other features, but there is a natural shape that it will take, and the angle that the sand pile forms is called the angle of repose, right? This is a very important thing to study, because the angle of repose tells you something about the stability of that sand pile. Okay. So what Bak, Tang and Wiesenfeld did was to make an automaton model, and the model was the following. Space is one, two, three, whatever dimensions, but it is a lattice, right? So you've got a lattice; space is discrete, with the integer i labeling your sites, right? And at each location you've got h sub i, the number of sand particles, also an integer, yeah? I'm going to eventually get to a neuronal model, so a little patience, please. Okay. Now, start out with the hypothesis that at any site there can be any number of grains of sand, all right? But we know that sand piles are intrinsically unstable beyond a point. Imagine that you put an infinite number of sand particles on one site, each of mass one; that site is infinitely heavy, and everything just breaks, all right? So I can come up with an instability criterion in terms of the slope, where the slope is just the difference in height between two neighboring sites, all right? Here, for example, both of them are the same height, so the slope z sub i is equal to zero. 
I'll just take the slope in one direction. From here to here, the difference in height is one, so z is equal to one. From here to here, the difference is again one, so the slope is equal to one. Over here, it is five minus one, so the slope is four, okay? So I just take these slopes. Now I can come up with a rule, which is what they did. They said that at each site the slope has to be a finite quantity, it cannot be any old thing, and if the slope at any particular site i exceeds some critical slope, then sand particles fall. Where do they fall? There are some conservation laws and all that, but let's just say that if the slope is too large, boom, a grain comes down, and that reduces the slope, okay? So something like what is depicted over here: if this is your stable configuration, and somehow you wipe out parts of it over here, then you can keep on adding until you reach the stable configuration again; but if you exceed it at a certain point, then it will just slide away, yeah? All right, so these sandpile models are like dominoes; you've seen these YouTube videos with a million dominoes, where everything falls one after the other. That is basically the idea over here: one domino falling can make the next one fall, and the next, and the next, and so on, yeah? A sequence of such dominoes falling, or sand falling, or whatever, is called an avalanche. We are interested in many things, but in this context, two in particular. One: once one domino or one sand grain falls, how many will fall? What is the size of the avalanche, yeah? So the size of the avalanche is one characteristic. 
The other thing is, as you notice over here, the avalanche goes topple, topple, topple, and at this point also topple, topple, topple; so the number of sites that topple is different from how long the avalanche lasts. We'll see more examples, but the duration of an avalanche and the size of an avalanche are not the same. This is going to be important, at least I hope, when you all try out neuronal models. The avalanche there is just a sequence of neurons that fire because one has fired, all right? And the duration is how long the whole thing lasts. That tells you something about the connectivity of unstable sites, or sites that are verging on instability. We'll come back to that if it's not completely clear. So here is a one-dimensional sand pile. In this one-dimensional sand pile, space is discrete, i goes 1, 2, 3, etc. At each site i, the height of the sand is h sub i. If I just randomly throw in a grain and it lands on site i, the height increases by 1, yeah? And I'm looking at the slope in one direction only: instead of the height variable, I can look at the slope variable, where the slope z sub i is the height at site i minus the height at site i plus 1. The process of adding sand at site i is to add one more to that height, and that means adding sand is equivalent to the rule that the slope z sub i goes to z sub i plus 1; and because the slope at site i minus 1 also depends on the height at i, z sub i minus 1 goes to z sub i minus 1, minus 1. I've added a grain here, so the slope on one side increases and the slope on the other side decreases. Now, my rule is that if the slope at any point exceeds a critical value, then a toppling occurs. So what happens at a toppling? 
One particle has to move to its neighbor. So z i, just work it out, goes to z i minus 2, and z i plus or minus 1 goes to z i plus or minus 1, plus 1. That's demonstrated over here: once this grain falls, this slope increases by 1, this slope increases by 1, and the slope over here goes down by 2. And you notice that there is a conservation here, because the slope at one site decreases by 2 and the slopes at the two neighboring sites each increase by 1. So there is a funny kind of conservation over here. Don't take all this business of slopes et cetera too literally, because these models are just designed to illustrate a particular point, and you can ask a variety of questions; we will ask some of them later. So basically I want to describe a one-dimensional automaton model for a sand pile, okay? Now, in this one-dimensional sand pile, you can imagine what's going to happen as I just keep on adding sand, all right? Here I've drawn it in terms of heights, but I can draw it in terms of slopes also. When will nothing happen? When will I reach a stable state? I'll reach a stable state if every slope is at my critical slope, because if every site has got z critical, nothing's going to happen; no site is going to topple. It's only when I add a particle and increase the slope at a given point that things are going to happen, all right? So with a little algebra, you can convince yourself that the stable sand pile in one dimension will have slope z sub c everywhere. I add one particle over here, that increases the slope at this point; that's going to topple, and then the next, and the next, and so on, until it goes off the edge. So these models have open boundary conditions, at least at the edge, and the one-dimensional sand pile is almost trivial. It's not quite trivial, but I'm not going to deal with it anymore. 
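The one-dimensional rules above fit in a few lines. The sketch below is my own illustration (written in height variables rather than the slope variables, with an open right edge): a grain added at the top of the minimally stable pile topples all the way down and off.

```python
def relax(h, zc=1):
    """Relax a 1-D pile of heights h with an open right edge.
    Toppling rule: if the slope h[i] - h[i+1] exceeds zc, one grain moves
    from site i to site i+1 (or off the edge at the last site).
    Returns (size, duration): total topplings and number of update sweeps."""
    L = len(h)
    size = duration = 0
    while True:
        unstable = [i for i in range(L)
                    if h[i] - (h[i + 1] if i + 1 < L else 0) > zc]
        if not unstable:
            return size, duration
        duration += 1
        for i in unstable:            # parallel update of all unstable sites
            h[i] -= 1
            if i + 1 < L:
                h[i + 1] += 1
            size += 1

# the minimally stable (critical) state: every slope equal to zc
h = [5, 4, 3, 2, 1]
h[0] += 1                             # drive: add one grain at the top
print(relax(h))                       # (5, 5): five topplings, five time steps
print(h)                              # [5, 4, 3, 2, 1]: back to the critical state
```

This is the one-dimensional triviality in action: every grain added to the critical state produces an avalanche that runs the full length of the system, so there is no interesting distribution of avalanche sizes here.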
I just want to show this as an illustrative point: there is one critical state, the state in which all the slopes are equal to the critical slope, and once I add something, it moves on and goes off the edge, returning me to the critical state, right? So the critical state is, quote-unquote, an attractor, and over here it has a very simple form: at every location the slope equals the critical slope. This is your minimally stable state, or the critical state, right? In two dimensions, life gets a little more interesting, okay? Because now, on a two-dimensional square lattice, you've got four neighbors, and one can come up with a variety of rules. What Bak, Tang and Wiesenfeld said was: don't ask me whether z is a slope or a height or whatever; z is some variable. Every location is now indexed by two integers, i, j, right? Adding sand is equivalent to raising z to z plus 1. And now, if I have a critical value z sub c, I can say that when z i, j exceeds z c, then z i, j goes to z i, j minus 4, the number of neighbors that it has; democratically, I just drop one grain to each of the four sides, right? So toppling means that z reduces by 4: even when it was above z c, it falls down to z i, j minus 4. The neighbors i plus or minus 1, j and i, j plus or minus 1 each pick up one, all right? And if a grain comes to the edge, it just falls off. Some of you have surely seen this Bak-Tang-Wiesenfeld model, yeah? Okay, but this is not so difficult to understand, right? At every location I've got some variable. 
And you can see that if I don't want things to go negative, the minimum value for z c should be four. You can work out, I mean, these are all very simple things: z, this height or variable at any given site, should be less than or equal to four, and when it is four, it will fall. So if I'm now just thinking in terms of configurations, let's say this is a configuration where nothing is going to happen. Why? Because every site is under the critical value, and I've said that z c is equal to four. So suppose I add one and make that three; still nothing is going to happen, because this is what we'd call a stable state. When I add one here, I get four. Suppose randomly I now throw in a particle and this site reaches four: the four is going to wipe out and give me zero, and this neighbor goes to three, this one goes to three, and this one goes to three. One grain fell on each of the four sides, right? Now, at the next instant, let me say I add one over here. So that goes to zero, this goes to four, this goes to two, and this goes to one. But this is not stable, because this is four: that will go to zero, that will go to four, this will go to one. Yeah? Yeah, yeah, they've fallen off the edge; I will eventually go to an infinite lattice, et cetera. Okay, but now what I want you to see is the avalanche. When I added here, this became unstable; when this toppled, this became unstable; that goes to zero, that goes to two, that goes to two, this goes to one. And again we are stable. Okay, so two dimensions is certainly more interesting than one, in the sense that you can ask: what is the critical state? Is it the lattice with three everywhere? And you can see that the critical state is not as simple as everything equal to z c minus one, you know, because I put the equality sign there, right? Yeah?
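For concreteness, the two-dimensional rule just walked through can be sketched on a small grid. This is a hedged illustration; the function name and the 2-by-2 example are my own choices, mirroring the configuration discussed above:

```python
# Sketch of the 2-D Bak-Tang-Wiesenfeld rule on a small square grid with
# open edges: when a site reaches 4 it loses 4 grains and each of its four
# neighbours gains one; grains pushed past the edge are lost.

def relax(grid, z_c=4):
    """Topple until every site holds fewer than z_c grains.
    Returns (stable grid, number of topplings); the toppling count
    is the avalanche size."""
    n = len(grid)
    size = 0
    while True:
        unstable = [(i, j) for i in range(n) for j in range(n)
                    if grid[i][j] >= z_c]
        if not unstable:
            return grid, size
        for i, j in unstable:
            grid[i][j] -= 4
            size += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < n and 0 <= nj < n:
                    grid[ni][nj] += 1

g = [[3, 3], [3, 3]]           # everything at z_c - 1
g[0][0] += 1                   # drive one site to 4
stable, avalanche = relax(g)
print(stable, avalanche)       # → [[2, 1], [1, 1]] 4
```

Note how starting from "three everywhere" and adding one grain sets off a chain of four topplings, with eight grains lost over the edges, which is exactly the avalanche picture of the lecture.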
Okay, just say that it's a minimally stable state. That is, the moment I perturb that minimally stable state, I'm going to have avalanches of all sizes. You'll see, just bear with me. You'll see that there are some configurations that are simply not possible in this situation in the long term. Suppose I start with, let's just take a two-by-two case. Suppose I start empty everywhere and keep adding, and let's say finally I add here up to four; then this is going to topple, and I'm going to get one, one, zero. Okay, now this is stable. Whether it is part of the critical state, I don't know; we'll see. Okay, let me define a stable state as a state where nothing happens. And there is not a single critical state in two dimensions: the set of all configurations that appear in the critical state can be enumerated, and it turns out to be less than the number of possible stable configurations. This is one configuration; there are others. You saw us go through a variety of configurations. So what are the configurations that will appear in the critical state, the minimally stable state? No, no, no. The point about all these attractors is this: imagine the critical state to be the following kind of state, which is not itself unstable, but any small perturbation will induce activity. That is what I mean. Yeah? Right. But what is crucial about these models is that there is driving, namely the addition of sand particles. It's also important that there is dissipation: stuff just goes off to infinity. But I'm describing work of 1987, and we have moved a long way from that. There are articles that I've given you for reading.
Now, see, the point is that this minimally stable state is not just one configuration; it turns out to be a lot of configurations. And once you start with this two-dimensional lattice, it turns out that these sand piles have avalanches which are interesting in many ways. We already saw a small avalanche: we started here, then this site toppled, then this site toppled, and so on. So we saw a small avalanche of size three, which also had duration three because it took three time steps, but this need not be the case. Over here, there was an avalanche of size one, and so on. So the idea is that you start with a flat lattice, keep adding sand, and keep evolving. First of all, we assume that there is an attractor. This attractor is this minimally stable state, which can be many, many configurations; in fact, I'll show you an example where it is an exponential number of configurations. And this minimally stable state is itself unstable with respect to small fluctuations, and in two dimensions the uniform version of it cannot be the attractor of the dynamics; you can't somehow magically reach that from any other state. Some of this requires a lot more care; this is not a course on SOC. Okay, so this is a picture taken from Bak, Tang, and Wiesenfeld, and here you have an image of many avalanches. As you can see, there are some which involve four sites; here is a single site, where one site toppled and everything stopped; here, three of them toppled, one after the other, and everything stopped; maybe 15 toppled, maybe 100, whatever. Yeah? They're there? Yes. No. This is just a site in the middle of this avalanche that didn't topple.
In one dimension, avalanches need to be connected, and in the example I'm going to show you next they will be, but not in this model, which is why this one is not solvable while the example I'll show you is. So avalanches don't have to be compact and connected and whatever; what I want you to see is that there is another empty site in the middle. What I basically would like you to see is that you have avalanches of all sizes. And if you do these simulations, you can ask: what is the distribution of avalanche sizes if I just keep on adding more and more? Okay. Now, this is from a 1996 paper by Arena et al. What they did was to look at lattices of 50 by 50, 100 by 100, and 200 by 200, and they asked: what is the probability P of S of having an avalanche of size S in a system of size L, where L cross L is the number of sites? Notice that a site can topple more than once inside an avalanche, because you're getting matter from all four sides; it can build up again, and so a given site can topple more than once. Sometimes it's interesting to ask that question as well: how often do sites topple? So you find that this probability decreases with size S as a power law, and the power is some 2.7 or something like that; the precise power is not important over here. When Bak, Tang, and Wiesenfeld did their work, the assertion was that if you have a sandpile, you will naturally have power laws, and this is, in a sense, a very natural way in which power laws come about. Namely, you have a system of connected elements, all lattice points; one of them goes unstable, and that instability is transmitted to the next, and the next, and the next, until it stops. Okay?
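If you want to play with this measurement yourself, a small simulation in the spirit of those papers might look like the sketch below. The lattice size, seed, and grain count are arbitrary choices of mine, far too small for clean exponents, but enough to see the broad distribution of avalanche sizes:

```python
import random
from collections import Counter

# Drive a small BTW lattice with randomly placed grains and histogram the
# avalanche sizes (number of topplings triggered by each added grain).

L, Z_C, N_GRAINS = 10, 4, 2000
random.seed(1)
grid = [[0] * L for _ in range(L)]

def add_grain():
    """Drop one grain at a random site, relax, return the avalanche size."""
    grid[random.randrange(L)][random.randrange(L)] += 1
    size = 0
    while True:
        unstable = [(i, j) for i in range(L) for j in range(L)
                    if grid[i][j] >= Z_C]
        if not unstable:
            return size
        for i, j in unstable:
            grid[i][j] -= Z_C
            size += 1
            for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                if 0 <= ni < L and 0 <= nj < L:
                    grid[ni][nj] += 1

sizes = [add_grain() for _ in range(N_GRAINS)]
hist = Counter(sizes)
print({s: hist[s] for s in sorted(hist)[:5]})   # small-size end of P(s)
```

On a log-log plot of this histogram (with a much larger lattice and many more grains) you would look for the straight-line region whose slope is the avalanche exponent.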
So the paradigm was more important than the precise example. Why are we thinking of events of a certain size? Basically because of instability. And why do you have instability? Because a given site cannot hold more than so much: we said that there is a critical height, or a critical slope, or a critical something. All right? Okay. So the assertion is that this minimally stable state is an attractor of this dynamics. There are continuous models also. I want you to get the idea: you keep evolving the system till you are in the attractor; once you are in the attractor, you perturb, and you ask how long that perturbation lasts, and so on and so forth. Now, a lot of this got clarified, at least conceptually, with the directed version of the sandpile. It's again a two-dimensional sandpile, but there is a preferred direction: sand falls down, it doesn't fall up. So when you do that, just imagine that you have a cylinder, and you have these lattice sites. At each point, z i,j is either 0 or 1, and z critical is equal to 1. If z i,j is greater than z critical, then z i,j goes to z i,j minus 2, and z of each of the two forward neighbors goes up by 1. So it's best to think in terms of lattices like this. At each of these locations, let's just say I've got 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, et cetera. This is a stable state, because nothing is going to happen over here; everything is just poised at the critical value. And the difference in this rule is: if I add a particle of sand over here, this becomes 2. The 2 will automatically go to 0, this will become 2, this will become 1. Yeah? Now this one is critical, so this will become 2, and this will become 2, and this will become 0, this will become 0, this will become 0.
This will, whatever it is, add 1 or add 2 over here, and so on. Yeah? Really it has to do with geometry: here you have got only two sites onto which you can fall, so the critical slope that is relevant over here is just the number of neighbors onto which you can fall. That model has also been done; I'm now describing work up to 1989, and I'm not going to go much beyond that. But the beauty of this particular model is that it's exactly solvable, and you can actually see why. You see, if I look at the connected sites that have toppled: this was one site, this was one site, these were two sites over here, and this one will also have to topple; whatever this number was, even if it goes to 3, it will have to topple. So unlike the other case, Tommaso, there can be no holes: if this site has toppled and that site has toppled, the site between them has also got to topple. And that actually solves the model. The reason it solves the model is this: if this connected region is the size of an avalanche, then the duration of the avalanche is just the number of rows, because as time moves forward by one step you're only going in one direction; you're not going backwards. So the duration of an avalanche is just that length. And this side over here is a random walk, this side over here is a random walk, and this is the point where the two random walks meet; that tells you the duration of that particular avalanche. So this is the problem of annihilating random walks, and not surprisingly, you will find that there's some number like a half in there. The number of avalanches of mass m goes as m to the minus 4/3, and the number of avalanches of duration tau goes as tau to the minus 3/2. So this actually solves the entire problem. I mean, there's a lot more to the mathematics.
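A sketch of this directed rule is below; the lattice size, seed, and function name are my own illustrative choices, with the transverse direction made periodic to mimic the cylinder mentioned above:

```python
import random

# Directed sandpile sketch: each site holds z = 0 or 1; when a site
# reaches z >= 2 it topples, z -= 2, sending one grain to each of its
# two forward neighbours in the row below (periodic in the transverse
# direction, like a cylinder; grains leaving the bottom row are lost).

random.seed(4)
ROWS, COLS = 40, 20
z = [[random.randint(0, 1) for _ in range(COLS)] for _ in range(ROWS)]

def drive(z, col):
    """Drop a grain at (row 0, col); return the avalanche mass
    (total number of topplings)."""
    z[0][col] += 1
    mass = 0
    for r in range(len(z)):
        toppling = [c for c in range(COLS) if z[r][c] >= 2]
        if not toppling:
            break                          # the avalanche has died out
        for c in toppling:
            z[r][c] -= 2
            mass += 1
            if r + 1 < len(z):
                for fc in (c, (c + 1) % COLS):
                    z[r + 1][fc] += 1      # the two forward neighbours
    return mass

print(drive(z, 5))
```

Because grains only move downward, each row can be relaxed in a single pass before moving to the next, which is exactly the quasi-one-dimensional structure that makes the model tractable.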
By putting in a direction, you've actually taken this problem in two dimensions and made it quasi-one-dimensional. Yeah? You see, the thing is that because the heights are just zeros and ones, the minimally stable state is one where each site is zero or one with equal probability; one can prove that. So you've got just random zeros and ones. And what you can see over here is: suppose this site has become two, and this is one and that is one. Then this topples, giving me zero, but this becomes part of the avalanche; this becomes two. And let's assume that this is one; then this will topple, giving me zero, and I have to add one here, so this becomes two. This topples because that has become two, and this is also part of the avalanche, so this will go to zero, and this will now become three. So this is also going to topple regardless of what it was: if it was zero, it becomes two and will topple; if it was one, it becomes three and will also topple. This boundary site is going to topple only if it is one, and it is one randomly, with probability half. So this boundary that you find over here is a random walk; the boundary of the avalanche is a random walk, because whether the boundary site topples depends on whether it randomly holds a one or a zero. This model was actually invented, as I found out much later, after we did this work, by a geologist called Scheidegger, who wanted to find out how rivers flow. He was trying to model the flow of a river through valleys, whether it gets stopped or moves on, and the model that he made was a Monte Carlo model of zeros and ones. So this is also sometimes called the Scheidegger river network. Basically I'm just saying that the left boundary is a random walk and the right boundary is a random walk.
And how long will it be before two random walks that start from the same point go and meet? The probability that they survive to time t goes as t to the minus half. I haven't proved it over here, but if you go and look at the theory of random walks, annihilating random walks survive as t to the minus half. So that gives you that exponent, and a little more mathematics gives you the 4/3. And the critical state over here is the state where every lattice point is either zero or one with equal probability. So if you just think of this as L cross L, the total number of critical states is two to the L squared. This was the first of the solvable models made for self-organized criticality, and the point of bringing it up over here is that this is not a numerical artifact. If you were just simulating lattices with sand piles and this and that, you would get some number; but there is at least one model for which we know the power laws are exact. No, there's no phase transition as such. What you do have is that if you start with an empty lattice, you will eventually go to some state in which activity takes place, part of that zero-one-with-equal-probability ensemble, and then you just find these exponents. And this picture is really that kind of a sand pile: as it topples, the sand is falling in one direction only, not in four directions, so as you topple, you just topple onto the forward neighbors and so on. Okay. So Bak has in many places said that SOC is really the physics of fractals.
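The random-walk statement can be checked numerically. In this rough sketch (the seed and trial counts are my own choices), the difference of the two boundary walks is simulated directly, and the fraction of pairs still unmerged at time T should scale roughly like T to the minus half:

```python
import random

# The two avalanche boundaries are independent random walks, so their
# difference is itself a random walk; the avalanche survives as long as
# the difference has not returned to zero. Rough Monte Carlo check that
# the survival probability decays like a power law, ~ t**(-1/2).

random.seed(7)
TRIALS, T_MAX = 20000, 256

def survives(t_max):
    """True if the difference walk has not returned to 0 by t_max."""
    d = 0
    for _ in range(t_max):
        d += random.choice((-2, 0, 0, 2))   # both walkers step by +-1
        if d == 0:
            return False                    # walkers met: annihilation
    return True

frac = sum(survives(T_MAX) for _ in range(TRIALS)) / TRIALS
print(frac)   # a small number, of the order of T_MAX ** -0.5
```

Doubling T_MAX should shrink the surviving fraction by roughly a factor of the square root of two, which is the t to the minus half law quoted above.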
The reason you get so many fractals in nature is that in many natural systems, over a long period of time, there is slow driving of the system. The slow time scale is really the time scale of adding new grains of sand onto the system, because nothing tells you how long it is between additions. This slow driving, together with the dissipation that goes on (although you find SOC in conservative systems also), basically gives you these kinds of power laws, and this occurs without any agency other than the slow driving. And since this model was invented in 1986-87, the idea has been applied to mass extinctions in biology, to the stock market, to earthquakes, to solar flares, to all sorts of areas, and now in particular to neuroscience. In earthquakes, the idea is that you have two crusts moving in opposition to one another. They get stuck for different lengths of time, and when they release, there is an earthquake of some size, and then they get stuck again for a while. All of us come from earthquake-prone countries, so we know that once you have an earthquake, it's very unlikely to have another immediately thereafter; I'm not saying it's impossible, but it doesn't usually happen. So there is a power-law distribution in the sizes of earthquakes as well, which is why the Gutenberg-Richter scale is logarithmic: that is evidence of a power law. Small earthquakes are much more likely than large ones. All those numbers are rescaled so that each unit of magnitude corresponds to a fixed factor more released energy: a magnitude 10 would wipe us all out, while a 6 or a 7 is survivable. In these models, the energy released could be the number of blocks that have moved.
I mean, this itself is a well-studied subject, and I've given you some references where you might find this. Now, another reason for introducing this model in this set of lectures is that it's actually solvable in all dimensions; there's an upper critical dimension, which is 3. The stable states always come down to having sites occupied with values z c, z c minus 1, and so on, just because of the mathematics. If I take a triangular lattice, also directed, here z c will be equal to 2, and the site values will be 0, 1, 2. If I take a partially directed lattice, that is, the square lattice where I only allow things to fall downwards, never upwards, the model is the same and the exponents are the same. So the upper critical dimension for this model turns out to be 3. As for the mass over here: the definition of time is just the row index, this is row 0 and this is row n, and the time is just n. On each row, more than one site can topple; add them all up, and that's the mass. The number of connected sites that have toppled is one avalanche. Okay, yeah, that's right; I'm just going to come to that in a moment. So when you apply this to different physical situations, you've got to imagine what the critical variable is. You've seen these tricks that bartenders do, where they put wine glass upon wine glass all the way to the top, and then pour wine at the top; it fills the top glass, overflows down to the next one, fills that up, et cetera. And when you add more after everything is full, it just flows all the way down. So the critical variable over there would be the capacity.
In solar flares, for example, which are formed by vortices in the sun, there is a sudden burst of energy at a particular point, and then that dissipates and triggers more flares, and so on and so forth. So each of these systems has been studied, and the brain system is the reason why I went through all this. Since I'm going to give you these notes, you can read them at your leisure: self-organized criticality has been proposed as a universal mechanism for scale-free dynamics in many complex systems, and possibly in the brain. So there are network models, et cetera, and I'm going to describe one of them to show you the similarity to, and the difference from, sandpile models. Of course, there are significant differences between classical SOC, which is what I've talked about, and the brain. In SOC models, a conservation law fixes the interaction between sites: one site topples and, in two dimensions, four neighbors pick up a grain; that is what fixes z sub c, whether it is two or four, et cetera. In neuroscience, connection strengths are ever-changing, as you all know a little better than I do, and therefore incorporating biologically plausible interactions is one of the biggest challenges in making SOC models; but this is an area which people find interesting. The same language is used: neuronal avalanches in neocortical circuits. Again, there are some experiments that suggest power laws when you look at these avalanches. Okay, now, the model that I mentioned is by Kinouchi and Copelli. The idea is that optimal information processing is to be found near phase transitions, that is, on the edge: when you're at the threshold of a phase transition, interesting things can happen.
So, I mean, there are papers with titles like Computation at the Edge of Chaos, and so on and so forth. The model that they study is a network of excitable elements, and I know that people have also studied networks of FitzHugh-Nagumo units and so on, which are likewise models of excitable systems; but this network of excitable elements has a particular form which is very similar to the sand pile, and it is this. In the Kinouchi-Copelli model, the network could be Erdős–Rényi, random networks, or whatever; that's not so much the issue. The model itself is an automaton model, but unlike the sandpile automata we saw, which had states zero, one, et cetera, this is an m-state automaton, and it works like this. Zero is the resting state, and one is the active state. Once a neuron reaches the active state, it fires, and when it fires, it goes zooming down to minus (m minus 2); again, these are all just integers. So zero goes to one with some probability, there's excitation with some probability, but once you're in one, you immediately go to minus (m minus 2). That state then slowly increases by one at each step until it reaches zero, and then the process can start again. Now, just read the rules. External driving, noise, or stimulus can cause a silent node, that is a zero, to become active; this zero-to-one transition occurs with some probability, where r is the stimulus rate. At each time step, a silent node is also excited if it receives a stimulus from one of its active neighbors, and this occurs with some probability A i j, which is the weight of the connection.
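The rules just read out can be sketched as a small simulation. Everything numerical below (network size, wiring probability, rates, and weights) is an illustrative assumption of mine, not a value from the Kinouchi-Copelli paper, and I follow the lecturer's state convention of negative refractory states climbing back to zero:

```python
import random

# Kinouchi-Copelli-style excitable automaton on a random graph.
# States: 0 = resting, 1 = firing; after firing a unit drops to
# -(m - 2) and climbs back to 0 one step at a time (refractory period).

random.seed(2)
N, M = 200, 5                    # number of units, number of states
P_EDGE, R, A = 0.05, 0.02, 0.3   # wiring prob., external rate, coupling

nbrs = [[j for j in range(N) if j != i and random.random() < P_EDGE]
        for i in range(N)]
state = [0] * N

def step(state):
    new = []
    for i, s in enumerate(state):
        if s == 1:
            new.append(-(M - 2))        # fired: enter refractory tail
        elif s < 0:
            new.append(s + 1)           # climb back toward resting
        else:  # s == 0: excited externally, or by an active neighbour
            fire = random.random() < R or any(
                state[j] == 1 and random.random() < A for j in nbrs[i])
            new.append(1 if fire else 0)
    return new

for _ in range(100):
    state = step(state)
print(sum(1 for s in state if s == 1), "units firing")
```

Sweeping the coupling A while measuring avalanche sizes or the response to the stimulus rate r is how one would look for the critical regime the lecture describes.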
So I've got this network of Kinouchi-Copelli neurons, which are just automata, and spontaneous transitions can be of two kinds; in particular, an excited state becomes refractory. This process of going from minus (m minus 2) all the way up to zero is like the refractory period of a given neuron: a neuron becomes excited, it becomes refractory, and it takes a certain amount of time to come back, and some of these processes happen with various probabilities. I don't know whether I have any more, oh yes, I have something more. What they showed was that if you simulate a big network of such neurons, then you have critical behavior. I'm going to let you read the paper if it is of interest, but this is one area where we could potentially have a lot of interesting applications of ideas of criticality and self-organized criticality to neuronal systems. The important part, as far as dynamics is concerned, is that there is nonlinearity: this entire process of going from there to here, et cetera. There is an attractor: you come to a critical state, and that critical state is the dynamics shown in red. At that point, the avalanches and the response curves have power laws, and many of these things have been measured experimentally. This depends on a lot of things, because the brain is not like the simple lattice models of self-organized criticality. As I said, and will say again, the whole point of this is that it is only collectivity that gives you these power laws. A single neuron just does whatever it does; but it influences its neighbor, and that one the next, and the next, and you get power laws. Okay, so let's stop for now, and there will be a repeat performance of this show in the afternoon, in the matinee.