So we're going to talk about the dynamics of the one-dimensional Ising model. As I wrote this morning, the flip rate of the one-dimensional Ising model is w_i = (1/2)[1 − (γ/2) σ_i (σ_{i−1} + σ_{i+1})]. This is the correct form for one dimension. In general dimension there is no γ; the rate is w_i = (1/2)[1 − σ_i tanh(β h_i)], where h_i is the local field, and this form holds for d ≥ 2. The presence of the hyperbolic tangent is what makes analytic studies of the dynamics of the Ising model in higher than one dimension so difficult. But in one dimension, this simple form allows us to make a lot of analytical progress. So let me just remind you of a few things that we derived this morning. First of all, there's the equation of motion for the mean spin, ṡ_i, where s_i is nothing more than the thermal average ⟨σ_i⟩. So how do we write down the rate equation for the average spin? When a spin flips, it changes by minus twice its value, so the change in the spin value is −2σ_i, and the rate at which it flips is w_i, which carries the inverse time. So we have the simple dynamical equation ṡ_i = −2⟨σ_i w_i⟩. And when we work this out using this form of the flip rate, we end up with a very simple equation. Let me just go through it again. In −2σ_i w_i, the one-half cancels the two, so let me not bother with that. Then I have −σ_i from the first term. In the second term, σ_i times σ_i gives me a one, and so I'm going to get +γ/2 times the thermal average of σ_{i−1} + σ_{i+1}.
And so what we end up with is ṡ_i = −s_i + (γ/2)(s_{i−1} + s_{i+1}) — note the plus sign. This kind of looks like the diffusion equation, and it kind of is a diffusion equation in disguise. And we were able to write down the solution already in one dimension: for the initial condition s_i(t = 0) = δ_{i,0}, the solution is s_i(t) = I_i(γt) e^{−t}, where I_i is the modified Bessel function. This has asymptotic behavior s_i(t) ∼ e^{−(1−γ)t}/√(2πγt): for positive temperature the mean spin decays as e^{−(1−γ)t}, and for temperature equal to zero, where γ = 1, it decays as 1/√(2πt). So as I mentioned this morning, it's like you get nothing: if you start with one spin plus and everybody else random, so that the average spin is essentially zero, then this average spin just relaxes away. And this is shown even more dramatically by looking at the behavior of the magnetization. So let's look at ṁ, where m is the summation over all sites i of s_i, divided by N. If we sum the equation of motion over all sites, the first term gives −m, and each of the two neighbor terms gives +γm/2. So ṁ = −(1 − γ)m, because I have a minus one from the first term and a γ/2 from each neighbor term. So the magnetization just decays away to zero: m(t) ∼ e^{−(1−γ)t} for positive temperature, and it is constant — maybe put an exclamation point here — for T = 0. We see that when T = 0, γ is equal to one, so we get ṁ = 0.
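The magnetization decay law is easy to check numerically. Here is a minimal sketch of my own (not from the lecture; the ring size, the value of γ, and the time step are arbitrary choices) that Euler-integrates ṡ_i = −s_i + (γ/2)(s_{i−1} + s_{i+1}) from a delta-function initial condition and compares the total magnetization with e^{−(1−γ)t}:

```python
import math

def evolve_mean_spin(s, gamma, dt, steps):
    """Euler-integrate ds_i/dt = -s_i + (gamma/2)(s_{i-1} + s_{i+1}) on a ring."""
    n = len(s)
    for _ in range(steps):
        s = [s[i] + dt * (-s[i] + 0.5 * gamma * (s[i - 1] + s[(i + 1) % n]))
             for i in range(n)]
    return s

n, gamma, dt = 101, 0.8, 0.001
s = [0.0] * n
s[0] = 1.0                       # single up spin; everyone else averages to zero
s = evolve_mean_spin(s, gamma, dt, steps=2000)   # evolve to t = 2
m = sum(s)                       # total magnetization
print(m, math.exp(-(1 - gamma) * 2.0))           # both close to 0.670
```

Summing the update over the ring shows why this works: the neighbor terms telescope, leaving exactly ṁ = −(1 − γ)m regardless of the spatial profile.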
Magnetization is conserved. And so in some sense we're learning almost nothing about the behavior of the system by focusing on the average spin or the average magnetization. It turns out that if we want a more detailed and deeper understanding of the dynamics of the system, it's necessary to look at the two-spin correlation function. The point is that the two-spin correlation function is the natural way of probing the microscopic behavior of the system. So let us now focus on it. The way it's normally defined is the following: I define g_{ij}, equal to the thermal average ⟨σ_i σ_j⟩. This is the natural way to characterize the system because it tells you: if I have two spins that are some distance apart, are they correlated or are they not? If they are correlated in some way, then we expect some amount of ferromagnetic order has propagated through the system. But if this thermal average is close to zero — if I'm a spin up, and the probability that somebody at the back of the room, or someone on the fourth floor, is also spin up is uncorrelated with me — then somehow ferromagnetic order hasn't been transmitted through the system. So what we want to focus on is the behavior of the two-spin correlation function. And since we're dealing with a one-dimensional Ising model, another way I can write this is g_{i,i+k}: I look at two spins which are a distance k apart and take the thermal average ⟨σ_i σ_{i+k}⟩. So in some sense there's only one index here, k.
The thing is that when I average over all spins, the position of the starting spin is irrelevant: you just look at two spins that are a distance k apart, look at their correlation, and sum over all such pairs in the system; that's what this correlation function is. Now it turns out there's one very important correlation function, which is the near-neighbor correlation function. So let's look at g_{i,i+1} = ⟨σ_i σ_{i+1}⟩. And because the index i is irrelevant, I'm going to write this, for everything that follows, as G1, the near-neighbor correlation function. G1 has a very simple physical interpretation, because what is G1? In the thermal average there are four possibilities. Either both spins are up or both spins are down, in which case the product gives +1; so G1 picks up the probability that the spins are aligned, times +1. Then I have plus the probability that the spins are misaligned — up, down or down, up — times −1. And now here comes a geometric interpretation of this correlation function. Wherever there is a pair of misaligned spins, let me define a sort of fictitious particle that lies on the bond between them; similarly, a fictitious particle between any other misaligned pair. So if I think of these fictitious particles as living on the lattice, then I can ask: what is the density of these particles on the lattice?
The thing is that misaligned spins correspond to a particle being there, and that happens with density ρ; the term with the plus one in front, where there is no particle between the two spins, happens with probability 1 − ρ. So G1 is related to the density of domain walls by the very simple equation G1 = (1 − ρ) − ρ = 1 − 2ρ. If there are no domain walls, the correlation function is one, which means all the spins are perfectly aligned; whereas if there are domain walls everywhere, ρ = 1, the correlation function is −1, which means every single spin is perfectly anticorrelated — oppositely oriented to its nearest neighbor. So we have this connection between the correlation function and the domain wall density ρ; another way I can write it is ρ = (1 − G1)/2. These two equations turn out to be very helpful because they provide a simple geometrical way of thinking about what's happening with correlations. Instead of thinking about the spins, one looks at the elementary excitations, which are the domain walls. And in all kinds of problems in condensed matter physics, it always behooves one to try to identify the elementary excitations, because the description of the system is much simpler that way. The classic example is superconductivity: you have your Fermi sea of non-interacting electrons, there's an effective electron-phonon interaction that gives rise to a weak attraction between electrons, which makes Cooper pairs, and the Cooper pairs are the elementary excitations. If you try to deal with the many-body wave function directly, there's nothing you can do with it. It's too complicated.
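Here is a tiny illustration of my own (the random configuration is arbitrary) that the identity ρ = (1 − G1)/2 holds configuration by configuration, since every aligned bond contributes +1 to G1 and every misaligned bond — a domain wall — contributes −1:

```python
import random

def near_neighbor_corr(spins):
    """G1: the average of s_i * s_{i+1} over the bonds of a ring."""
    n = len(spins)
    return sum(spins[i] * spins[(i + 1) % n] for i in range(n)) / n

def domain_wall_density(spins):
    """rho: the fraction of bonds whose two spins are misaligned."""
    n = len(spins)
    return sum(spins[i] != spins[(i + 1) % n] for i in range(n)) / n

random.seed(1)
spins = [random.choice([-1, 1]) for _ in range(1000)]
g1, rho = near_neighbor_corr(spins), domain_wall_density(spins)
print(rho, (1 - g1) / 2)    # identical: rho = (1 - G1)/2 exactly
```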
But once you deal with these Cooper pairs, then you can start writing equations of motion for their dynamics and understand superconductivity in a nice way. Similarly here, the domain walls provide a nice geometric way of characterizing the dynamics of the one-dimensional Ising model, and I'm going to return to that momentarily. But now, having introduced the correlation function, I want to pursue the same program as previously: let's write down the equation of motion for the two-spin correlation function. And what we're going to see — just to preview what's going to happen — is that we get almost the same equation as for the mean spin, but with one little twist that makes the dynamics actually non-trivial. So let's now look at ġ_{ij}. How does this change as a function of time? Again, this is nothing more than the time derivative of ⟨σ_i σ_j⟩. Now how can things change? Well, if either of these spins flips, then the product changes by minus twice its value, just the same way a single spin changes by minus twice its value when it flips. And the rate at which it can change: either spin i flips or spin j flips, so the rate is w_i + w_j. So ġ_{ij} = −2⟨σ_i σ_j (w_i + w_j)⟩. That's what we have to compute. So let's do this — it's a rather pleasant calculation once you're used to playing around with the algebra. We're going to have σ_i σ_j times, first, w_i = (1/2)[1 − (γ/2) σ_i (σ_{i−1} + σ_{i+1})], the flip rate at site i.
And then I have — we're not closing the bracket yet — plus w_j = (1/2)[1 − (γ/2) σ_j (σ_{j−1} + σ_{j+1})], the flip rate at site j. Now we close the square bracket and take the average. Okay, so let's play with this. Notice that there's a term which involves the one: we have one-half plus one-half, which is one, times −2σ_iσ_j, which is just −2g_{ij}. So there's one term which is −2g_{ij}. Now for the other terms. Here I take σ_iσ_j against the w_i piece: the σ_i times σ_i gives me one, and then I have σ_j with σ_{i−1} + σ_{i+1}. The one-half cancels part of the two, leaving +γ/2, and since σ_i² = 1, I get g_{i−1,j} + g_{i+1,j}. From the second term it's more or less the same game: now the σ_j is annihilated, and I have σ_i times σ_{j−1} + σ_{j+1}, so I write g_{i,j−1} + g_{i,j+1}, again with a factor of γ/2. The final step is to assume translational invariance. Namely, I expect that the correlation function depends only on the distance between the two spins and not on the actual location in the system. Translational invariance means that g_{ij} is really just a function of |i − j|, and I'll define this to be g_n, where n is the distance between site i and site j. So the equation for ġ_n starts with −2g_n. And now look at the neighbor terms: the distance between j and i − 1 is one bigger than before, n + 1, while the distance between j and i + 1 is n − 1.
Likewise j − 1 and j + 1 give distances n − 1 and n + 1. So I get two identical pairs of terms, and the equation becomes ġ_n = −2g_n + γ(g_{n+1} + g_{n−1}). It's exactly the same equation as for the mean spin except for an overall factor of two. So you might say, well, I've gained nothing from this — but in fact we have, because this has to be supplemented by a boundary condition. The boundary condition is that g_0, the self-correlation, is necessarily one at all times. And that boundary condition is crucial: it is why the dynamics of the two-spin correlation function is different from that of the average spin, where there was no boundary condition on the equation. The only other thing we need is g_n at t = 0. This is user-supplied, and the conventional case is to imagine an initially uncorrelated system and ask how correlations develop as a function of time. Okay, so it turns out that solving this recursion formula with this boundary condition and some random initial condition is actually a rather hard problem, and I'm not going to solve it. In fact, let me tell a story here — I wish there were more students here to tell it to. There's a very famous paper by Roy Glauber, from 1963, on the dynamics of the one-dimensional Ising model. For people who work in non-equilibrium spin systems, everybody talks of Glauber model this, Glauber model that — Glauber, Glauber, Glauber. And so he was my hero, and the paper is so beautiful. Our physics department at Boston University hosted a yearly colloquium for Nobel Prize winners, always with a fancy dinner afterwards with famous people arriving. And so we had one of those dinners, and I'm dressed up in a suit, looking like a person rather than like a slob.
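Even though the exact discrete solution is hard, the recursion itself is trivial to integrate numerically. As a sketch of my own (the grid size, time step, and observation time are arbitrary choices), here is a forward-Euler integration of the T = 0 (γ = 1) equation with g_0 = 1 pinned, starting from the uncorrelated state g_n(0) = 0, compared against the continuum profile 1 − erf(n/√(4t)) that comes out of the continuum analysis later in the lecture:

```python
import math

def evolve_corr(g, dt, steps):
    """Euler-integrate g_n' = -2 g_n + g_{n+1} + g_{n-1} (the gamma = 1 case),
    keeping the self-correlation g_0 = 1 pinned as the boundary condition."""
    n = len(g)
    for _ in range(steps):
        new = g[:]
        for i in range(1, n - 1):
            new[i] = g[i] + dt * (-2 * g[i] + g[i + 1] + g[i - 1])
        new[0] = 1.0             # boundary condition: g_0(t) = 1 at all times
        g = new
    return g

t = 9.0
g = [1.0] + [0.0] * 99           # uncorrelated initial state, with g_0 = 1
g = evolve_corr(g, dt=0.001, steps=9000)
print(g[1], 1 - math.erf(1 / math.sqrt(4 * t)))  # discrete vs continuum, ~0.81
```

The array is long enough (100 sites) that the far boundary never matters on this time scale, since the correlated region only reaches out to a distance of order √(4t) = 6.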
And so I sit down at the table and this elderly guy sits down beside me, and so I say, well, hi, my name's Sid Redner — I guess I was the chairman of the department at the time, so I said, I'm the chairman of the department. And he says, hi, my name's Roy Glauber. And I go, Roy Glauber, you're my hero! Oh my God — can I get your autograph? He says, what's the big deal? Your work on the one-dimensional Ising model, I say. Oh, that little thing? It was just some side project for him. Anyways, that was kind of an amusing story. He was a cool guy, and I really enjoyed meeting him. I don't know what he thought of me, but I was like a teenager meeting a hero — like if I met Beyoncé or something like that. Anyways, the discrete solution is difficult, so what I want to do now is solve this in the continuum limit. So let's talk about the continuum solution. Part of the reason for doing the continuum solution, as I'm going to try to impress upon you, is that, number one, it is much simpler than the discrete solution; number two, it contains all the same physics. So why work so hard to get the recursion formula exactly right when you can just do the continuum solution? We want to solve the continuum analog of the equation above. And also, to make my life simple, and because it's the most interesting case, let's focus attention on the case T = 0, which corresponds to γ = 1. In that case the right-hand side becomes exactly the discrete second derivative, and so in the continuum limit I just have to solve ∂g/∂t = ∂²g/∂x², supplemented with the condition that g(0, t) = 1 at all times, and with the initial condition g(x, t = 0) = 0.
So with this user-supplied initial condition, I'm starting with an initially uncorrelated system: my spins are randomly aligned, and I let the correlations develop. And we know that at zero temperature the one-dimensional Ising model does exhibit spontaneous magnetization. So if I start with an uncorrelated system, spontaneous magnetization is going to develop, and we want to ask how it develops. To solve this problem, I'm actually going to solve a different problem, which is almost the same as this one. Let's consider the quantity c(x, t) = 1 − g(x, t); you'll see why I choose that in just a moment. Because the two differ only by a constant, c satisfies the same equation, ∂c/∂t = ∂²c/∂x², but now supplemented by the conditions c(0, t) = 0 and c(x, t = 0) = 1. So first of all, these two problems are mathematically identical. But physically, the way I can think about this is that c is like a concentration field in one dimension: I start with a concentration equal to 1 for all positive x, with an absorbing boundary condition at x = 0. So we're dealing with a semi-infinite line and an initially flat concentration — here is x, here is the concentration — and whenever particles hit the origin they fall off and die, off the cliff. So this concentration field is gradually going to develop a depletion zone near the origin, and we want to understand this time development. And in fact we know how to solve this problem, because I mentioned, I guess in the first lecture, that if I have a single particle starting at x₀ with an absorbing boundary condition at 0, then the solution is a sum of a Gaussian centered at x₀ and an anti-Gaussian centered at −x₀. So for a delta-function source, we know the solution.
So that delta-function solution is the Green's function for the problem: for c(x, t = 0) = δ(x − x₀), the solution we already know. Normally I would write G for the Green's function, but I'm always using g for the correlation function, so let me call it h, a function of x, x₀, and t: h(x, x₀, t) = (1/√(4πDt)) [e^{−(x−x₀)²/4Dt} − e^{−(x+x₀)²/4Dt}]. The second term is the anti-Gaussian, centered at −x₀, which is why it has a plus sign inside. And now I can compute my concentration: c is nothing more than the integral of this Green's function, ∫₀^∞ h(x, x₀, t) dx₀. Instead of a delta function at a single x₀, I have a uniform concentration — a superposition of delta functions over all x₀ with the same weight — and this is my concentration field. So let's compute it, because — I get the feeling that students are so quick to go to Mathematica and simulation, and you look at this and get lost right away, but this is easily calculable analytically. [Student:] To be explicit, shouldn't there be a c(x₀, t = 0) in the integral? You are propagating the initial condition to time t, so you should be integrating the Green's function against the initial condition. [Redner:] Yes — against the initial condition, which is 1; I just assumed that implicitly. All right, so let's go a little bit further. There's a common factor of 1/√(4πDt) out in front, and then I have the integral from 0 to infinity of e^{−(x−x₀)²/4Dt} minus the anti-Gaussian term.
That second term is e^{−(x+x₀)²/4Dt}, and I want to integrate the difference dx₀. So what I do is say: in the first integral, let y = (x₀ − x)/√(4Dt), and in the second integral, let y = (x₀ + x)/√(4Dt). Each integrand just becomes e^{−y²}, the √(4Dt) from the change of variables is absorbed by the prefactor, and there's an overall factor of 1/√π. The first integral runs from y = −x/√(4Dt), where x₀ = 0, to infinity; the second, with its minus sign, runs from y = +x/√(4Dt) to infinity. So we have the same integrand, we're integrating from −x/√(4Dt) to infinity and subtracting away the piece from +x/√(4Dt) to infinity, and what remains is c = (1/√π) ∫ from −x/√(4Dt) to +x/√(4Dt) of e^{−y²} dy. And this is a basic special function: it's nothing more than the error function, c(x, t) = erf(x/√(4Dt)). So that's the answer for c. Now let's plot this error function. Here is the argument z, and here is erf(z): it starts off linear in z, and it saturates to 1 and −1. So that's the error function, and from that we can infer what we expect to see for both the concentration and the correlation function. So if I plot c as a function of x for different times: initially the concentration is just 1.
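As a quick consistency check of my own (the value of D and the sample point are arbitrary), one can verify by finite differences that c(x, t) = erf(x/√(4Dt)) really does satisfy the diffusion equation, vanish at the absorbing boundary, and approach 1 away from the origin at early times:

```python
import math

D = 1.0

def c(x, t):
    """Concentration profile: c(x, t) = erf(x / sqrt(4 D t))."""
    return math.erf(x / math.sqrt(4 * D * t))

# compare dc/dt with D * d2c/dx2 at an arbitrary interior point
x, t, h = 1.3, 0.7, 1e-3
dc_dt = (c(x, t + h) - c(x, t - h)) / (2 * h)
d2c_dx2 = (c(x + h, t) - 2 * c(x, t) + c(x - h, t)) / h ** 2
print(dc_dt, D * d2c_dx2)    # the two sides of the diffusion equation agree
```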
And so if the time is very short, then as x moves just a little bit away from zero, the argument of the error function gets very big very quickly, and we're very quickly in the saturated regime; at short times the profile rises almost immediately to 1. Then as time goes on, the depletion zone near the origin widens. So that's what c is doing. And g is just 1 minus that. So here is g as a function of x: it starts off at 0 at large x, with the boundary value 1 at the origin, and as time goes on it gets flatter and flatter. There's a characteristic range over which the correlation spreads, and this range is growing like √t. So that's the main result we learn from this: even though the magnetization is conserved in the one-dimensional Ising model at zero temperature, if I start with an initially uncorrelated system, correlations spread at a rate √t, such that spins get more and more correlated, and this goes on forever. So this coarsening process is essentially governed by the formation of domains of correlated spins, where the correlation range is growing like √t. Now that I've solved it analytically, let's try to look more pictorially at what we might expect to see — so now I can erase all of this. [Student:] Do you mean that the correlation range also increases, or just that the correlation increases while the range remains like the nearest neighbor? [Redner:] No, no — we're looking at the general correlation function g(x) at arbitrary distance; here x is the separation between two spins. So here, for example, would be the nearest-neighbor correlation, at a distance one.
And so this nearest-neighbor correlation quickly gets larger and larger. If I look out at, say, a distance of 100, the correlation between spins a distance 100 apart stays close to zero for a longer time, but eventually it also rises to one, and the time it takes for the hundredth-neighbor correlation function to rise goes like 100 squared. And the thousandth-neighbor correlation function will also rise eventually; the range over which correlations spread is always governed by √t. Okay. So now that we've seen what happens algebraically, let me show you pictorially what's going on, because it provides additional insight into the problem. So once again, let me give this a little title: equivalence to domain wall dynamics. I erased the flip rate and I need it again, so let me write it once more, now in the zero-temperature limit, γ = 1, so I'm not worrying about the factor γ: w_i = (1/2)[1 − (σ_i/2)(σ_{i−1} + σ_{i+1})]. And so let's look at what this means for different configurations of spins. Suppose I have a domain of plus spins, then a domain wall particle, then a domain of minus spins. If the first down spin were to flip, then the configuration afterwards is a bunch of spins up, this spin now up, the next spin still down — and the domain wall particle has moved one site to the right. So we can think of a domain wall moving by plus or minus one as nothing more than simple diffusion of a domain wall particle, and the particle is equally likely to move left or right. And in fact, if we look at the flip rate, notice that the spin we flipped has one neighbor aligned and one neighbor misaligned.
When it flips, it's still in the same environment — one neighbor aligned and one misaligned — so there's no energy change: ΔE = 0. And similarly, look at the flip rate for this case: the spin has one neighbor up and one neighbor down, so the sum of the neighboring spins is zero, the γ term vanishes, and the flip rate is one-half. So if I have a spin whose two neighbors disagree with each other, there's no energy cost for flipping it, and according to the Glauber dynamics the flip rate is one-half. On the other hand, suppose I have two misaligned bonds in a row — a single down spin in a sea of plus spins. Now what I have is two domain walls which are nearest neighbors of each other, and if the central spin flips, there are no domain walls left: the two have come together and annihilated. And notice that when this spin flips, the energy change is ΔE = −4J, since each of the two domain walls costs 2J. So whenever there is an energy loss, it corresponds to domain wall particles disappearing. And according to the flip rate of the one-dimensional Ising model: if σ_{i−1} and σ_{i+1} are both opposite in sign to σ_i — say both are +1 and σ_i is −1 — then σ_i(σ_{i−1} + σ_{i+1}) = −2, the bracket becomes 1 + 1 = 2, and with the one-half out front you get a flip rate of one for this environment. And finally, the last case: what happens in an energy-raising event? An energy-raising event happens if I do exactly the opposite — if I have a domain of spins that are all happy because they're all aligned.
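The three local environments can be read straight off the flip rate. A trivial check, with a helper function of my own, at T = 0:

```python
def flip_rate_T0(left, center, right):
    """T = 0 Glauber flip rate: w = (1/2)[1 - (s_i/2)(s_{i-1} + s_{i+1})]."""
    return 0.5 * (1 - 0.5 * center * (left + right))

assert flip_rate_T0(+1, +1, -1) == 0.5  # neighbors disagree: free domain-wall hop
assert flip_rate_T0(+1, -1, +1) == 1.0  # both neighbors oppose: two walls annihilate
assert flip_rate_T0(+1, +1, +1) == 0.0  # all aligned: creating two walls is forbidden
print("all three T = 0 rates check out")
```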
Now, because of thermal agitation — which does not exist at T = 0, but let's suppose for a moment there's a non-zero temperature — if a spin in the middle of such a domain flips, I create two domain walls, at an energy cost of 4J, which the system doesn't like doing. And if I look at the T = 0 flip rate of the Ising model in one dimension in this case: σ_i is aligned with both σ_{i−1} and σ_{i+1}, so σ_i(σ_{i−1} + σ_{i+1}) = 2, the bracket becomes 1 − 1 = 0, and the flip rate is zero for a configuration like this. And so now we can make a simple descriptive mapping between the zero-temperature dynamics of the Ising model and the reaction scheme of particle–particle annihilation, A + A → 0. This whole dynamics is exactly equivalent to the bimolecular reaction scheme of freely diffusing particles in one dimension: diffusion is free — it doesn't cost any energy — so particles diffuse, and wherever two particles meet at the same site, they simply annihilate. So the dynamics of the one-dimensional Ising model at zero temperature is exactly isomorphic to a reaction process of particle–particle annihilation. One can do a lot with this mapping. The one thing I have not discussed here, because it's more complicated, is dynamics at finite temperature, where there's also pair creation. But now we have a sense of what happens at some small finite temperature: your domain walls diffuse around, they annihilate when they meet, and every once in a while there's pair creation. So there'll be some steady state where the pair creation rate is balanced by the annihilation rate, and you have a dilute gas of these domain walls diffusing around. That means there can't be any spontaneous magnetization, because one needs everybody to be aligned to have spontaneous magnetization.
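This mapping is easy to see in a direct Monte Carlo simulation — a sketch of my own, with the system size, observation time, and random-sequential update scheme as my choices: run T = 0 Glauber dynamics on a ring from a random initial state and compare the surviving domain-wall density with the diffusive annihilation law ρ(t) ≈ 1/(2√(πt)):

```python
import math
import random

def glauber_T0(n, t_max, seed=0):
    """Random-sequential T = 0 Glauber dynamics on a ring of n spins.
    Each attempt picks a random spin and flips it with probability
    w = (1/2)[1 - (s_i/2)(s_{i-1} + s_{i+1})]; n attempts = one time unit."""
    rng = random.Random(seed)
    s = [rng.choice([-1, 1]) for _ in range(n)]
    for _ in range(int(t_max * n)):
        i = rng.randrange(n)
        w = 0.5 * (1 - 0.5 * s[i] * (s[i - 1] + s[(i + 1) % n]))
        if rng.random() < w:
            s[i] = -s[i]
    return s

n, t = 10000, 5.0
s = glauber_T0(n, t)
rho = sum(s[i] != s[(i + 1) % n] for i in range(n)) / n
print(rho, 1 / (2 * math.sqrt(math.pi * t)))   # both near 0.125
```

The wall density starts at 1/2 for the random initial state and decays as the walls diffuse and annihilate; at t = 5 the measured value should sit within a few percent of the t^(−1/2) prediction.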
And in fact, this was the source of Peierls's original domain-wall argument about spontaneous magnetization: in one dimension at non-zero temperature there's a finite density of domain walls, and hence no long-range order, and the naive extension of that reasoning to two dimensions — that there should likewise be a finite density of domain walls destroying the magnetization — turned out to be incorrect. Okay, so I guess that's all I have to say about the one-dimensional Ising model. Questions? [Student:] How do you simulate a system like this? [Redner:] What do you mean — which system, the Ising model? [Student:] Yes, the one-dimensional Ising model, this dynamics; what is the idea of the simulation? [Redner:] Well, the simplest way is that you put your spins on a one-dimensional line. If we're doing, say, a finite-temperature simulation, that means every spin has a non-zero flip rate — at non-zero temperature the rates involve γ and are non-zero — so you pick a spin at random and, with a probability proportional to its flip rate, you flip it, and then you just do this over and over again. That would be a very naive simulation, because you have to test every single spin, but for finite temperature that's what you'd have to do. In general, though, the message I would give anybody is that there's no point in simulating an exactly soluble model; you might as well simulate a model that's not exactly soluble. But if you want to try it just for fun, that's what you would do. Does that address your question? [Student:] Yes, thanks. [Redner:] Sure, okay. Anything else?
Okay. What I want to do in the remaining time is describe a spin system closely related to the Ising model, called the voter model; if you have worked in social dynamics, you may even have heard the terminology. The voter model is in some sense the fruit fly of interacting particle systems: it is the simplest interacting particle system we know of, it is exactly soluble in all spatial dimensions, the method of solution is simple and very pretty, one gets a lot of useful insight from it, and it is closely related to the Ising model itself. So this is a good time to introduce the voter model, describe its similarities to and differences from the Ising model, and then solve it. The voter model was first introduced by mathematicians, and I hope there are no mathematicians in the audience here. We physicists like to joke that the mathematicians want to prove the truth while we are content merely to know the truth, and that their model construction is always very formal and rigid; but the mathematicians beat us physicists to the punch here, because they invented the voter model, a very beautiful and very simple model, and we just ride on their coattails. Anyway: the voter model. It has a very simple descriptive presentation, so let me give the description first and then write some equations. Think of a group of people in a room; in this room at the moment we have only three people, which maybe I shouldn't have revealed. Each of us is endowed with a two-state opinion: we vote for Trump or we vote for Sanders, and that is all. The rules of the game for the voter model are the following: you pick a
random voter. Say I pick myself; then I pick another random voter (I won't identify the other people in the room, so you can't get in trouble with the Italian government), and I ask: who are you going to vote for in the election? You say, "I'm going to vote for Trump," and I think: wow, what a great idea, I'm going to vote for Trump too. You repeat this update process over and over, and in a finite system you necessarily reach consensus. Then the questions you might ask are: how long does it take to reach consensus, and, given some initial condition, what is the probability of reaching Trump consensus versus Sanders consensus as a function of the initial number of voters of each type? That is the dynamics of the voter model. You can see from the description that it is a little like the Ising model, but not quite, and I'll tell you why right now. To understand the dynamics of the voter model we have to write down the flip rate: the rate at which a voter changes opinion. By the way, going back to the opinion-dynamics framing: in this description I ask someone else who they vote for and simply adopt their state, which means I have zero self-confidence; it is a population of lemmings who have no self-confidence and only adopt the state of a neighbor. So, to make it formal, the rules of the voter model are: pick a voter at random; it picks a random neighbor; it adopts that neighbor's state; repeat over and over; in a finite system you necessarily reach consensus. With that description, the flip rate of a given spin is w_i = (1/2)(1 - (sigma_i / z) sum over j of sigma_j), with j running over the neighbors of i and z the coordination number of the lattice. So I am assuming my voters live on the sites of a regular lattice; it does not have to be that way, but for simplicity
let's just put the voters on the sites of a regular lattice. So it looks something like the Ising model, in the sense that there are spins, or voters, on a lattice. Let's verify that this flip rate does the right thing. If I am in Trump land and I am a Trump voter, so sigma_i = +1 and all my z nearest neighbors are +1 as well, the neighbor sum gives +z, divided by z gives +1, and with sigma_i = +1 the rate is w_i = (1/2)(1 - 1) = 0: if I agree with all my neighbors, I do not flip. On the other hand, if I am a Trump voter in a sea of Sanders voters, with Sanders labeled -1, the neighbor sum gives -z, divided by z gives -1, which conspires with sigma_i = +1 to give a flip rate of w_i = (1/2)(1 + 1) = 1. In general you can convince yourself that this flip rule is a proportional rule: the rate is proportional to the fraction of neighbors that disagree with you. Let me do one more case to convince you. Suppose I am a plus spin on the square lattice and my four neighbors are plus, plus, plus, minus. In the Ising model at zero temperature, as I mentioned this morning, the dynamics is majority rule: a plus spin in this environment stays plus; it cannot change its state. But according to the voter rule, w_i = (1/2)(1 - (1/4)(1 + 1 + 1 - 1)) = (1/2)(1 - 1/2) = 1/4, and 1/4 is exactly the fraction of neighbors in the minus state. You can convince yourself by working through the other possibilities: with two disagreeing neighbors the flip rate is 1/2, with three it is 3/4, and with four it is one.
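The proportional rule is easy to check in code; here is a small helper of my own making the four square-lattice cases from the discussion explicit:

```python
def voter_flip_rate(s_i, neighbors):
    """Voter-model flip rate w_i = (1/2) * (1 - s_i * sum(neighbors) / z),
    which equals the fraction of neighbors disagreeing with s_i."""
    z = len(neighbors)
    return 0.5 * (1 - s_i * sum(neighbors) / z)

# square-lattice examples (z = 4), for a +1 voter:
print(voter_flip_rate(+1, [+1, +1, +1, +1]))  # 0.0   full agreement
print(voter_flip_rate(+1, [+1, +1, +1, -1]))  # 0.25  one dissenter
print(voter_flip_rate(+1, [+1, +1, -1, -1]))  # 0.5   two dissenters
print(voter_flip_rate(+1, [-1, -1, -1, -1]))  # 1.0   surrounded
```

Note the contrast with zero-temperature Ising dynamics: majority rule would forbid the one-dissenter flip entirely, while the voter model flips at rate 1/4.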
So the flip rate is just proportional to the number of disagreeing neighbors. Another feature is that the sum of the spins does not sit inside a hyperbolic tanh or anything like that, and that is why the algebra of the sigma operators lets you actually compute everything you want. Okay, so let's use this to compute the average spin and the average correlation, as we did for the one-dimensional Ising model. The steps are very much the same, so I may speed up a little, but let's start. Compute s_i-dot, the time derivative of the thermal average of sigma_i. Again, our basic equation of motion says that when a spin flips it changes by minus twice its value, at rate w_i, so s_i-dot = <-2 sigma_i w_i>; that is our rate equation. The 2 and the 1/2 cancel; the first term is just -s_i, and in the second term sigma_i kills the sigma_i inside the rate, leaving ds_i/dt = -s_i + (1/z) sum over the neighbors j of i of s_j. The natural thing to look at is the magnetization, the basic one-body function: m-dot = (1/N) sum over i of s_i-dot. The first term gives minus the magnetization; in the second, each site contributes z neighbor terms divided by z, so we get minus the magnetization plus the magnetization: zero. ("It is s_i-dot, right, s_i-dot divided by N?" Oh, the dot, thank you; thanks for keeping me honest.) So again, just as in the one-dimensional Ising model, we get nothing from the magnetization. Let's look now at the correlation function, and hopefully it comes as no surprise that it satisfies the same equation as the correlation function of the one-dimensional Ising model. The new feature is that, because the dynamics is a proportional rule
rather than majority rule, the diffusion-like equation that held in one dimension now holds in all dimensions. That is what makes the voter model so much simpler: one is basically solving the diffusion equation in arbitrary dimension. So consider G_ij = <sigma_i sigma_j>, and compute its time derivative: dG_ij/dt = <-2 sigma_i sigma_j (w_i + w_j)>, and then we go through exactly the same rigmarole as before. There is a -2, and each flip rate carries a 1/2, so let me cancel them from the very beginning. The w_i piece gives -sigma_i sigma_j (1 - sigma_i (1/z) sum over k of sigma_k), with k running over the neighbors of i. ("That should be sigma_k, right? It is not the same j as outside." Thank you, yes: it is a sum over neighbors, so let's call the index k. I would have found it eventually, but better early than late.) Then the w_j piece gives -sigma_i sigma_j (1 - sigma_j (1/z) sum over k of sigma_k), where now k runs over the neighbors of j; it is just a dummy index, not the same k as before. To make life simple, let's work on the square lattice; it will be easier to see what is happening. In the first term, sigma_i squared is one, and we are left with sigma_j times the sum over the nearest neighbors of i. To keep the indices a little clearer, let me go over to g(x, y)... actually, no, that is not going to work yet; maybe it is better to do a picture first, that will make it clearer. We have a site i, we have a site j, and in the w_i term the index k runs over the neighbors of i: this site, this site, this site,
and that site. So from the w_i term I start with the correlation G_ij and get the correlations of j with the four neighbors of i, and similarly the w_j term gives G_ij again, together with the correlations of i with the four neighbors of j. Here is i, here is j, and the point is that if the pair is a given displacement apart, the separation of j from the northern neighbor of i is the same as the separation of i from the southern neighbor of j, and so on: for a translation-invariant system the two sets of terms are identical. So let me write it in the cleaner form, with g(x, y, t) the correlation at separation (x, y): dg/dt = -2 g(x, y, t) + (1/2)[ g(x+1, y, t) + g(x-1, y, t) + g(x, y+1, t) + g(x, y-1, t) ]. ("You should have a factor of two in the first term, the -g." Thank you, yes: there is a factor of two there because the w_i terms and the w_j terms each contribute a -g(x, y); let me just check against the equation above... yes, there are two such terms, so this is two.) ("And here you have one dimension, essentially, right?" What do you mean, one dimension? This is for two dimensions: there are two arguments, the x and y coordinates, and I drew the picture in two spatial dimensions. "So you are taking x and y as the difference between the..." So I am using x and y as coordinate differences, yes; I
see what your concern is. How do I fix this? If x is the x-coordinate of i minus the x-coordinate of j, and similarly for y, then I think it is okay. Thank you for saving me; that is a good point. So again: here are sites i and j; the difference of their x-coordinates I call x and the difference of their y-coordinates I call y, and then everything works out, because all I am doing is shifting x and y by plus or minus one. Sorry, what I first did was not quite kosher, but the end result is a very simple equation for the correlation function. ("Then I think you need the factor 2, because, as you were saying, both terms give you the same thing." Yes, that is why I said these terms are the same as those terms; the two goes back in. Very good.) ("You also missed a factor 1/z in the neighbor terms." You are right: each neighbor sum carries a 1/z = 1/4, and there are two equal sums, so the neighbor terms carry an overall factor of 1/2, which combines with the -2g to give exactly one half of the discrete Laplacian.) Now take the continuum limit. The sum of g over the nearest neighbors, minus four times g at the given point, is nothing more than the discrete version of the Laplace operator, so in the continuum limit, and I hope this is no surprise, the equation translates to dg/dt = Laplacian of g, after absorbing the constant 1/2 into the time scale. Oh, it is only 3:30. What I want to spend the rest of this lecture doing is solving this equation for the geometry of the voter model.
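The continuum-limit claim can be checked numerically. This is a minimal sketch of my own: forward-Euler evolution of dg/dt = (discrete Laplacian of g) on a small square grid, with the self-correlation pinned to 1 at the origin; the grid size, time step, and run length are arbitrary choices:

```python
def step(g, dt):
    """Forward-Euler step of dg/dt = discrete Laplacian of g on a square
    grid, with the center value (the self-correlation) held at 1."""
    N = len(g)
    new = [row[:] for row in g]
    for x in range(1, N - 1):
        for y in range(1, N - 1):
            lap = g[x+1][y] + g[x-1][y] + g[x][y+1] + g[x][y-1] - 4.0 * g[x][y]
            new[x][y] = g[x][y] + dt * lap
    new[N // 2][N // 2] = 1.0          # boundary condition: perfect self-correlation
    return new

N = 41
g = [[0.0] * N for _ in range(N)]      # uncorrelated initial condition
g[N // 2][N // 2] = 1.0
for _ in range(300):                   # dt = 0.1 keeps forward Euler stable here
    g = step(g, 0.1)
c = N // 2
print(g[c][c + 1], g[c][c + 5])        # correlations spread outward with time
```

The profile that develops decays monotonically away from the origin and its range grows like the square root of time, exactly the diffusive spreading the continuum equation predicts.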
This is one of those cases where you can do very powerful things in very easy ways with the right approximations. So we want to solve this equation, with the same boundary condition as before: g(0, t) = 1, that is, you are perfectly correlated with yourself. But there is actually a little subtlety here: this is fine in one spatial dimension, but in higher dimensions the appropriate boundary condition is g(a, t) = 1 for d > 1, where a is some strictly positive but very, very small number. The point is that with discrete spins you are of course perfectly correlated with yourself, but in the continuum limit we are basically solving the diffusion equation, so we are dealing with random walks, and we are going to see that in two dimensions a random walk can never hit a point; it can only hit a sphere of finite size. So we need a non-zero lower cutoff a for the boundary condition to be well defined, and this will become clear both mathematically and physically as I go along. The last ingredient is the initial condition g(r, t = 0), which is user-defined; normally I will take the uncorrelated initial condition, the simplest case to deal with, which means g(r, 0) = 0. So: initially there are no correlations in the system, I am perfectly correlated with myself, the system evolves by the diffusion equation, and we want to figure out the long-time behavior. It turns out that the behavior is very different in one dimension, in two dimensions, and above two dimensions, and this dependence of the solution on dimension is generic for many kinds of many-body problems in physics. That is why I am spending so much time on this example: we will get a lot of
general insight just by beating this particular example to death. So first let's solve for the correlation function. One can do general spatial dimension, but let me just do the case d > 2, and in fact I am not going to solve for g directly, because it does not lend itself to a nice geometric interpretation; I will solve its cousin instead: let c = 1 - g. The equation c satisfies is dc/dt = Laplacian of c, with c(0, t) = 0, the absorbing boundary condition, in one dimension, or c(a, t) = 0 for d > 1, and we start with a unit concentration: c(x, t = 0) = 1. That turns out to be easier to think about. So let's solve this for d > 2: dc/dt = d^2 c / dr^2 + ((d - 1)/r) dc/dr. The problem is spherically symmetric, so we do not have to worry about angular variables. Now let me state a fact that I will justify at the end of this lecture, or maybe the beginning of the next: there is a time-dependent solution here, but it converges to a static equilibrium solution, so I am simply going to set the time derivative to zero and forget about it. We are left with an equidimensional equation, which means the solution has the form c(r) = r^alpha. Plug it in: differentiating twice gives a coefficient alpha (alpha - 1) times r^(alpha - 2), the power of r is common to every term so I will not write it, and we are left with the indicial equation alpha (alpha - 1) + (d - 1) alpha = 0, whose solutions are alpha = 0 and alpha = 2 - d. So it tells
us that the equilibrium solution is c(r) = A + B / r^(d - 2). Now we impose the boundary conditions. First, c(a) = 0 gives A + B / a^(d - 2) = 0, so A = -B / a^(d - 2). ("And what is equal to 1?" c(a)? No, sorry: c(a) is the one equal to zero.) So we get c(r) = B [1 / r^(d - 2) - 1 / a^(d - 2)]. For the other boundary condition, remember the initial setup: the concentration starts at 1 everywhere and approaches some steady-state profile c_infinity(r), but at infinite distance it must stay at 1, so c(infinity) = 1. That fixes B = -a^(d - 2), and therefore c(r) = 1 - (a/r)^(d - 2). That is the solution for the concentration field, which means the correlation function in higher dimensions is g(r) = 1 - c(r) = (a/r)^(d - 2). What does this tell us? If voters live on a lattice in more than two dimensions, and I am a Trump voter and I look 100 miles away, this is telling me the probability that a voter 100 miles away is also a Trump voter, and that probability decays like 1 over the distance to the first power in three dimensions. I mentioned at the very beginning that the system necessarily reaches consensus; there is a little caveat there: that requires a finite system. For the infinite voter model above two dimensions, a steady-state correlation profile is set up which decays like 1 / r^(d - 2), so the system has long-range correlations with a power-law decay.
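The steady-state profile just derived is easy to verify numerically. This is a small finite-difference sketch of my own, for d = 3 with cutoff a = 1 (both arbitrary choices):

```python
def c(r, a=1.0, d=3):
    """Proposed steady state for d > 2: c(r) = 1 - (a / r)**(d - 2)."""
    return 1.0 - (a / r) ** (d - 2)

def radial_laplacian(f, r, d=3, h=1e-3):
    """Finite-difference radial Laplacian f'' + ((d - 1) / r) * f'."""
    f2 = (f(r + h) - 2.0 * f(r) + f(r - h)) / h**2
    f1 = (f(r + h) - f(r - h)) / (2.0 * h)
    return f2 + (d - 1) / r * f1

print(c(1.0))                        # 0.0: absorbing boundary at r = a
print(radial_laplacian(c, 5.0))      # ~0: the profile is harmonic for r > a
print(c(100.0))                      # close to 1 far away
```

In d = 3 this is just the familiar 1 - a/r electrostatics-like profile around an absorbing sphere of radius a.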
Okay, so that was the case above two dimensions. Let me now solve the same problem in one dimension, and here is where I needed more blackboard space; I hate to erase, but I will have to. Actually, we have already solved the one-dimensional case: it is exactly that error-function solution I discussed previously. In fact, the one-dimensional voter model and the one-dimensional Ising model are the same thing. But I want to solve it a different way, because of a general issue with problems in which correlations spread through the system. Look at it holistically: here is the concentration profile, which we saw is an error-function profile, so there is a characteristic range that grows like the square root of t, and it keeps growing and growing. You can think of it as a moving boundary: on this side c is equal to 1, on that side c is much, much less than 1, and the boundary point moves like sqrt(t). Now, there is something known as the quasi-static approximation, an almost self-contradictory name, because static means static and quasi-static means sort of not static. When you have slowly moving boundaries, the quasi-static approximation allows you to solve incredibly complicated problems with very simple methods. To give you a feeling for what you can solve: when I was first a graduate student, I was presented with the following problem on, I don't know what you call it, a general exam, the one that checks whether you are competent, and I really love this problem. Maybe in Italy you don't have much experience with this, but: you have a cold lake, it is wintertime, and ice starts forming on the lake. How quickly does the thickness
of the ice layer grow as a function of time? Very slowly, it turns out. If you want to solve the problem exactly, it is an example of what is called a Stefan problem, a moving-boundary-value problem; but if you use the quasi-static approximation you can solve it in two lines and get the answer. I am not going to tell you the answer, because maybe you want to tell me what you think it is. The setup: the ambient temperature is below zero degrees Celsius, the water underneath is by definition a huge reservoir that stays at zero Celsius, and there is a layer of ice growing; how thick does the layer grow? It turns out that what I am going to show you here essentially solves that same problem. For d = 1 we know the exact solution, but let me redo it with the quasi-static approximation, partly because the same approximation applies in two dimensions, where you will see the solution is very elementary. If you try to do that case exactly, you run into horrible Bessel functions, you have to look things up in Abramowitz and Stegun, and it is very easy to get lost, whereas with the quasi-static approximation one can do it without any fancy mathematics. The quasi-static approximation consists of the following. I am going to solve dc/dt = Laplacian of c, with c(0, t) = 0 as the boundary condition for d = 1, but with a second boundary condition: c(sqrt(D t), t) = 1. That is, I assume a slowly moving boundary at a distance of order sqrt(D t): diffusion can mix things up within a range of order sqrt(t), but outside that range diffusion has not had a chance to act, so the concentration there should just be the initial concentration. So I replace the true problem by this picture: there is a second
boundary, at a distance sqrt(D t), moving to the right; beyond it c = 1, and inside it there is something we will compute with the quasi-static approximation. The third ingredient of the approximation is that we now forget about the time derivative: the time dependence is carried entirely by the moving boundary. In one dimension the resulting problem is extremely simple: solving the Laplace equation means the second derivative is zero, so the first derivative is a constant, so the function itself is linear; it has to be zero at the origin and one at the moving boundary, so I can just write down the solution, c(x, t) = x / sqrt(D t). Oh, I notice the letter D here; I am so used to writing the diffusion equation with a diffusion coefficient in front, but we don't have one, so let's get rid of it: c(x, t) = x / sqrt(t). This is exactly the asymptotic form of the error-function solution I derived before. But now we come to the more interesting case: two dimensions. First the setup: here is the radial coordinate, here is c(r), and now I put in a little cutoff a. I need this little a, and let me spend a few minutes discussing why, because it is important. In the theory of diffusion, or of random walks, there is a phase transition in the character of the walk as a function of the spatial dimension: at and below two dimensions a random walk is what is called recurrent, meaning it visits every single site infinitely often; above two dimensions it is transient, and it may never visit a given site of the lattice. You might remember that in the very first lecture I briefly mentioned the first-passage properties of a one-dimensional random walk.
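For d = 1, the quasi-static answer can be compared directly against the exact error-function solution. This is a short sketch of my own; the sampling points are arbitrary:

```python
import math

def c_exact(x, t):
    """Exact solution of c_t = c_xx with c(0, t) = 0 and c(x, 0) = 1."""
    return math.erf(x / (2.0 * math.sqrt(t)))

def c_quasistatic(x, t):
    """Quasi-static approximation: linear ramp from 0 to 1 over sqrt(t)."""
    return min(x / math.sqrt(t), 1.0)

t = 100.0
for x in (1.0, 3.0, 10.0):
    print(x, c_exact(x, t), c_quasistatic(x, t))
# near the absorbing point both profiles are linear in x / sqrt(t);
# their slopes differ only by the constant factor 1 / sqrt(pi)
```

This is the typical situation with the quasi-static approximation: the scaling variable and the functional form near the boundary come out right, and only an order-one constant is off.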
In one dimension, a random walk is certain to return to the origin, but the mean time to return is infinite. What is the answer to that same question in general spatial dimension? This is a bit of an interlude: recurrence versus transience. Imagine a random walk in arbitrary spatial dimension, tracing out some trajectory: it starts here, ends over here, and explores a region whose characteristic size is sqrt(D t), with D the diffusion coefficient. Now ask how many sites it has visited in a time t. (For a discrete random walk one can count the number of sites visited; thinking of the walk as a little particle of finite size, it sweeps out a volume.) The walk makes of order one step per unit time, so in time t it makes t visits, and the density of visited sites is t divided by the explored volume. The radius of the explored region is sqrt(t), and its volume is the radius to the power d, that is, t^(d/2), so the density scales as t^(1 - d/2). This has different limits depending on the spatial dimension: it diverges in the long-time limit for d = 1, it vanishes for d > 2, and d = 2 looks ambiguous. One has to be careful at the critical dimension, where it turns out there is a logarithmic correction, and the density is still divergent, but only logarithmically, in d = 2. So in one and two dimensions every site is visited infinitely often: the random walk is what is called recurrent, it comes back to every single site infinitely often. In three dimensions, by contrast, the chance of visiting any given site is less than one.
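The recurrence-versus-transience distinction shows up clearly in a quick Monte Carlo experiment. This is an illustration of my own, not a proof; the walk length and sample size are arbitrary:

```python
import random

def returned(dim, steps, rng):
    """True if a simple random walk on Z^dim revisits the origin within `steps`."""
    pos = [0] * dim
    for _ in range(steps):
        pos[rng.randrange(dim)] += rng.choice((-1, 1))   # hop along a random axis
        if all(x == 0 for x in pos):
            return True
    return False

rng = random.Random(2)
walks, steps = 2000, 500
fracs = {}
for dim in (1, 2, 3):
    fracs[dim] = sum(returned(dim, steps, rng) for _ in range(walks)) / walks
print(fracs)   # the return fraction drops with dimension
```

In d = 1 and d = 2 the return fraction creeps toward 1 as the walk length grows (recurrence), while in d = 3 it saturates near Polya's return probability of about 0.34 (transience).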
That means there is a finite chance you never visit a given site. A more pictorial way of thinking about it: you send your kids out into the world; mine were about 20 years old when they went out, and will I ever see them again? If they do random walks in one or two dimensions, I can be sure I will see them again; it might take forever for them to come back to me, but I will see them again. If they do a random walk in three dimensions, there is a finite chance I never see them again. This difference between low and high dimension drives many beautiful properties of random walks. Now, coming back to our problem of the voter model: we were solving for the concentration with a cliff, an absorbing point, at the origin. In one dimension the walk always hits the cliff, and that means the concentration field keeps changing with time: there is a continuously evolving concentration field. In three dimensions, I said, we go to a steady state, and it arises precisely because there is a finite chance of never returning to the origin; that non-returning fraction is essentially what builds up the steady-state density profile. In two dimensions we are sure to return, by the second line of the argument above, so the concentration field again evolves continuously, which corresponds to a coarsening process, a spreading of correlations, in the two-dimensional voter model; that is what I am going to turn to next. One more thing to be careful about: even though a random walk visits every site of a two-dimensional lattice infinitely often, in the continuum limit, where the lattice sites are just points and you have a continuous diffusion field, the chance of hitting a single point is
zero. So you need to give the target a finite volume, or a finite area, which can be arbitrarily small but must be non-zero, and that is why we need a boundary condition at r = a rather than at zero. Once again, what I want to solve is dc/dt = Laplacian of c in two dimensions, with c(a, t) = 0 and c(r, t = 0) = 1. This is a hard problem to solve exactly; well, if you know your Bessel functions it is not so hard, the solution involves Bessel functions and you have to work a little, but you get the answer. Since I don't have my Abramowitz and Stegun with me, and I forget which Bessel function it is, I want to solve it with elementary methods, and the elementary method is the quasi-static approximation. In the quasi-static approximation I forget about the time derivative and argue as follows: out at a range of order sqrt(t), the effect of the absorbing boundary cannot yet have propagated, so out there the concentration field is 1, while inside there is some varying concentration field whose form is given by the solution of the Laplace equation. So we want to solve d^2 c / dr^2 + (1/r) dc/dr = 0. Again this is an equidimensional equation, with the same net power of r in every term, so the solution is typically a power law; but when you write down the indicial equation, the two indices are alpha = 0 and alpha = 2 - d, and in two dimensions the second one is also zero: we have a degeneracy here. This is all very standard if you have taken a course in differential equations, although if you have not taken it for 20 years you have forgotten it all: when the
two indices are equal, the solution has the form c(r) = A + B log r; in some sense log r plays the role of the zero-exponent solution. (If you had a triple degeneracy, a cubic indicial equation with three equal roots, it would be A + B log r + C (log r)^2.) Okay, so that is our solution, and we just have to impose the boundary conditions: c(a) = 0 gives A + B log a = 0, and c(sqrt(t)) = 1 gives A + B log sqrt(t) = 1. Two equations in two unknowns, and since this is the place where I really suck, let me not do the algebra on the blackboard and just write down the answer: c(r, t) = log(r/a) / log(sqrt(t)/a). You can see that at r = a the numerator is log 1 = 0, and at r = sqrt(t) the numerator equals the denominator, so c = 1. So that is what the profile looks like: a quasi-steady profile with this logarithmic character. Now we are in a position to come back to the voter model and get some insight into the process. Summary: the correlation function g(r, t) has three different behaviors, depending on whether you live in one dimension, two dimensions, or higher. In one dimension it goes like 1 - r/sqrt(t), writing only the leading asymptotic behavior; in two dimensions it goes like 1 - log(r)/log(t), or more precisely 1 - log(r/a)/log(sqrt(t)/a); and for d > 2 it goes like (a/r)^(d - 2). So in both one and two dimensions, correlations are spreading out.
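The two-unknown algebra just quoted can be checked in a few lines. A tiny sketch of my own, where the values of t and of the cutoff a are arbitrary:

```python
import math

def log_profile(r, t, a=0.01):
    """2D quasi-static profile: fix A, B in c = A + B*log(r) from
    c(a) = 0 and c(sqrt(t)) = 1, giving c = log(r/a) / log(sqrt(t)/a)."""
    B = 1.0 / math.log(math.sqrt(t) / a)
    A = -B * math.log(a)
    return A + B * math.log(r)

t, a = 1e6, 0.01
print(log_profile(a, t, a))               # 0 at the inner cutoff r = a
print(log_profile(math.sqrt(t), t, a))    # 1 at the diffusion scale sqrt(t)
print(log_profile(1.0, t, a))             # slowly (logarithmically) varying between
```

Note how weakly the profile varies in between the two boundaries: that slow logarithmic filling-in is the signature of the marginal, critical dimension d = 2.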
correlation is spreading out. (And let me correct what I wrote: in one dimension the argument is r over root t, not one over root t.) So it's saying that as time goes on, things get better and better correlated in both one and two dimensions, whereas for d larger than two you settle into a steady-state correlation profile that decays as a power law in distance. This actually suggests that in a finite system the approach to consensus is very different above two dimensions than in one or two. In one and two dimensions the system basically coarsens: the movie I showed this morning of the coarsening of the two-dimensional Ising model, you could make the same movie for the voter model in both one and two dimensions. Well, the voter model in one dimension is the same as the one-dimensional Ising model, so there's nothing to discuss there. The two-dimensional voter model shows a similar kind of coarsening, but compared with majority rule it turns out there is no surface tension associated with the interface, so the interfaces remain very rough; still, there is a coarsening that you can see, with a length scale growing like the square root of t. In three dimensions, on the other hand, one settles into a steady-state correlation profile, and one needs an exponentially rare fluctuation to allow the system to reach consensus.

It's four o'clock? Yeah, okay. So actually I have two more things to say about the voter model, but I don't think I can talk anymore, I'm kind of tired, so I'd like to stop and pick this up next time; by next time I'll finish the discussion of spin dynamics.

"Sorry, I have a small question..."

"If you have any question, by the way, whenever you ask it, please, I'm begging you, don't start with 'sorry' just because you're asking a question."

"Okay. So it's about the magnetization. If I remember well, but maybe I missed something, we said that in the voter model the derivative of the magnetization was
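The coarsening in one dimension is easy to see in a small simulation. Below is a minimal Monte Carlo sketch (my own illustration; the system size, number of sweeps, and seed are arbitrary choices, not from the lecture) of the 1D voter model, in which a random site adopts the opinion of a random neighbor. The density of domain walls, i.e. antialigned neighbor pairs, decays with time, roughly like 1/√t:

```python
import random

def voter_1d(L=2000, sweeps=200, seed=1):
    """1D voter model on a ring: each update, a random site adopts the
    state of a randomly chosen neighbor. Returns the domain-wall density
    (fraction of antialigned neighbor pairs) after the given sweeps."""
    rng = random.Random(seed)
    s = [rng.choice((-1, 1)) for _ in range(L)]
    for _ in range(sweeps * L):                # one sweep = L site updates
        i = rng.randrange(L)
        j = (i + rng.choice((-1, 1))) % L      # periodic boundary
        s[i] = s[j]
    return sum(s[i] != s[(i + 1) % L] for i in range(L)) / L

early, late = voter_1d(sweeps=10), voter_1d(sweeps=200)
print(early, late)   # wall density shrinks as domains coarsen
```

Because the interfaces are just point-like domain walls here, 1D cannot distinguish the voter model from zero-temperature Glauber dynamics; the rough-interface story only becomes visible in two dimensions.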
zero. The time derivative of the magnetization was zero."

"Yeah, the time derivative of the magnetization is zero; the magnetization is conserved, yes."

"So it seems strange to me that you say we reach consensus, but the magnetization doesn't change."

"Yeah, a very good question, and I guess I have a couple of answers to it. The first point is that it is a fact that the magnetization is conserved, and one way you can see that without any equation is this: if you pick two misaligned spins, the down guy could go up, but equally likely the up guy could go down, so if you average over those two events, the average magnetization does not change. So if you begin with a system at zero magnetization, it is equally likely to end up with all spins plus or all spins minus, but you will reach consensus. The point now is that you have a non-trivial question of the probability of reaching each type of consensus. If the initial magnetization is zero, the final average magnetization is zero, which means you are equally likely to end up with plus or minus consensus. If the initial fraction of up spins was three quarters, for example, then three quarters of the time you'd reach plus consensus and one quarter of the time you'd reach minus consensus, and that also gives you the same initial and final magnetizations. Does that answer your question?"
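The pair-flipping argument above can be checked directly on the complete graph, where the number of up spins performs an unbiased random walk between the absorbing states 0 and n (since up-to-down and down-to-up flips are equally likely). Here is a minimal sketch (my own illustration; n, the initial condition, and the number of trials are arbitrary) showing that the probability of plus consensus equals the initial up fraction:

```python
import random

def complete_graph_voter(n, n_up, rng):
    """Voter model consensus on the complete graph. An up->down flip and a
    down->up flip occur at equal rates (pick a misaligned pair; either
    member may copy the other), so the number of up spins performs an
    unbiased random walk until it hits the absorbing state 0 or n."""
    while 0 < n_up < n:
        n_up += rng.choice((-1, 1))
    return 1 if n_up == n else -1

rng = random.Random(0)
n, n_up, trials = 50, 38, 4000      # initial up fraction 38/50 = 0.76
wins = sum(complete_graph_voter(n, n_up, rng) == 1 for _ in range(trials))
print(wins / trials)   # close to the initial up fraction, 0.76
```

Conservation of the mean magnetization plus the fact that the final state is all-plus or all-minus forces exactly this exit probability: m(0) = m(final) = P(+) · 1 + (1 − P(+)) · (−1).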
"Yeah, thanks. Yes, that makes sense."

"Okay, very good. Any other questions?"

"I have one myself, but probably it's one of the two things you are going to discuss. You can solve the voter model also in mean field, right? And there you see that you reach consensus. So does this have to do with the fact that you are taking a limit where the size of the system goes to infinity before the time goes to infinity?"

"Well, the complete-graph voter model is also very easily solved, because there's no geometry; in some sense, looking at one site of the system is enough to determine everything. But a pathology of the complete graph is that the coordination number of the graph is equal to the size of the graph. Normally, for a lattice in d dimensions, the coordination number is a finite number, so the ratio of the coordination number to the number of spins in the lattice goes to zero, whereas for the complete graph these two things are of the same order of magnitude. But again, because the only variable on the complete graph is the number of spins pointing up, there's no space anymore, it reduces to an effective one-dimensional problem, and one can write down one-dimensional rate equations for the probability that you reach plus or minus consensus, or for how long it takes to reach consensus."

"But in general, I think you can prove that on any finite graph the magnetization is a martingale."

"Exactly, that's the other thing you're going to say tomorrow."

"Okay, I'll say it tomorrow. No, actually, I wasn't going to say that, but since you mention it, I'll mention it, because it's so simple and so beautiful. For those of you who don't know what a martingale is, you'll know by tomorrow. Other questions? If not, we thank Sid again, and see
you tomorrow at 9 a.m."