I know it's intense and it's been tough. The weather is helping you out a little bit, ensuring you're up during the break. It's a pleasure to introduce the lecturer, John Walker, from the University of Oxford. Good afternoon, everyone. Thank you, Claudio. So it's very nice to be in Trieste, and many thanks to the organizers for inviting me. The organizers gave me this title, and it's actually a fine title for the things that I'm talking about, but it could mean lots of other things as well, which I'm not talking about. So let me start out by trying to explain the general direction that I'm going to take in these lectures. So what I want to talk about are systems that have a macroscopic number of degrees of freedom, but with some constraints between those degrees of freedom. And you could think about that either in a classical setting, which is where I'll start, or you could also think about it in a quantum setting where these constraints would be imposed on the Hilbert space. And so in the classical setting, we're really thinking about statistical mechanics questions. And what we're interested in, for example, is what the consequences are of the constraints in terms of correlations. And as often in this area of condensed matter physics, what we're interested in are phenomena which have a chance of being universal. So rather than some local consequences of the constraints, what we're interested in is long distance consequences. And in particular, the degrees of freedom that we start with, with these constraints amongst them, may not ultimately be a very good way of thinking about the system. They're probably what came to us when we tried to make a microscopic model. But they may not be the best way of understanding the long distance physics. And so one of the main themes is going to be identifying the best degrees of freedom to use to describe the system at long distances.
So within that general setting, well, I want to talk about emergent degrees of freedom. And I want to show how in some of these situations you can wind up with excitations which are fractionalized in the sense that you set out to make a single excitation by disturbing the system at one point, but you find that you've made more than one excitation and the energy that you've added to the system can separate into independent packets. And I'll be talking about this first in a classical context, but later on I'll build on what I say about classical systems to talk about quantum systems. So the classical models which I'll start talking about are firstly the triangular lattice Ising antiferromagnet and then classical dimer models. And then thirdly from magnetism, a material known as spin ice. And the long wavelength degrees of freedom that we'll be talking about will turn out to be gauge fields. And so the interest is in seeing how these arise and what their consequences are. And then when I talk about quantum magnetism, I'll talk about quantum dimer models. And if I have time, also say something about the Kitaev model. So I should say please do ask questions as I go along. I'll try and remember to stop from time to time, but feel free to interrupt me. OK, just to say one more thing about a general setting before we get down to talking about details, let me make a link with frustrated magnetism and explain what I mean in a concrete setting by the idea of having some degrees of freedom and constraints between them. So I want to think about small clusters of spins in these pictures, three or four spins, with equal strength antiferromagnetic interactions between all of the spins in the cluster.
So if we have Ising spins and nearest neighbor antiferromagnetic interactions on a triangle, then as you've quite likely heard before, we can arrange two of the spins to be antiparallel to each other, but nothing that we do with the third spin will minimize the interaction energy that it has with both of its neighbors. And if we go up one in the number of spins and perhaps think about classical Heisenberg spins rather than Ising spins, we can think about four spins interacting equally each with each other on a tetrahedron or we could go to a more general problem of Q spins each with some number N of components with this kind of antiferromagnetic classical Hamiltonian. And to see what happens reasonably generically, it's convenient to rewrite this Hamiltonian as the square of the total magnetization of the cluster plus a constant because if you square this total magnetization and expand out the terms, you get all of the cross terms giving you the interactions and then, of course, some diagonal terms which are just constant. And thinking about things in this way, you can see that the lowest energy configuration for the cluster is one that minimizes the total magnetization, but if you have more than two spins, then there are more ways of arriving at that minimum energy than you would have expected purely on symmetry grounds. So if we start with the Ising case, then we have, in fact, six ground states all together because you can have two spins up, one down, with the down spin on any of the three sites, and then you can get another three by having two spins down and one up. So that number six of ground states is bigger than the two that you would have expected from the global symmetry. And if we go to Heisenberg spins and make use of this idea that the ground state for the cluster is one with total classical spin zero, then we have to think about combining four fixed length vectors in three dimensions so that their resultant is zero.
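The counting just described is easy to check by brute force. This is my own illustrative sketch (not part of the lecture), enumerating all eight Ising configurations of a single antiferromagnetic triangle:

```python
from itertools import product

# Antiferromagnetic Ising energy on one triangle: E = J(s1 s2 + s2 s3 + s3 s1), J > 0.
def energy(spins, J=1.0):
    s1, s2, s3 = spins
    return J * (s1 * s2 + s2 * s3 + s3 * s1)

configs = list(product([+1, -1], repeat=3))
e_min = min(energy(c) for c in configs)
ground_states = [c for c in configs if energy(c) == e_min]

print(e_min)               # -1.0: two satisfied bonds and one frustrated bond
print(len(ground_states))  # 6: three two-up-one-down states plus three two-down-one-up states
```

The six ground states are exactly the configurations with |s1 + s2 + s3| = 1, in line with the total-magnetization rewriting of the Hamiltonian.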
And in this picture, the black spins are supposed to be in one plane, and the green spins are in another plane. And I hope you can see that there are two internal degrees of freedom. One is the angle between these two black spins, and the other is this angle that I've denoted phi between the plane containing the green spins and the one containing the black spins. So both of those degrees of freedom are internal degrees of freedom, in addition to the global rotations of the spins that you'd have expected as ground state degrees of freedom from the form of the Hamiltonian. So these zero energy degrees of freedom are the kind of thing that I want to talk about. And the first example will be the triangular lattice Ising antiferromagnet. But to put this in a broader setting, here are two well-known lattices, the kagome lattice and the pyrochlore lattice, which are built by taking corner sharing arrangements of those frustrated units. So you can make a good case, I think, that the triangular lattice Ising antiferromagnet was the first example of a frustrated magnet that was really studied in any detail. I mean, in fact, building on the free-fermion solution of two-dimensional Ising models, the basic thermodynamics was worked out in two parallel papers, one by Wannier in 1950. And then in the 60s and early 70s, some facts about the correlation functions were also worked out. So of course, if you have a ferromagnetic two-dimensional Ising model, including one on the triangular lattice, then you have a high-temperature paramagnetic phase and a low-temperature broken symmetry phase with long-range order. And the distinctive and interesting thing about the triangular lattice antiferromagnet, as long as you just have nearest neighbor interactions, is that it behaves quite differently. So first of all, there's an extensive ground state entropy. So that suggests that there are lots of degrees of freedom.
Well, it tells you directly that there are lots of degrees of freedom fluctuating even at zero temperature. And then in addition, there's no long-range order. You could actually imagine having long-range order but still some loose degrees of freedom that could fluctuate. But in this example, there's no long-range order even at zero temperature. So the zero-temperature state is something that you can continue smoothly out of the high-temperature phase. But actually, in a good sense, it's more like a critical point than the high-temperature phase. And you see this if you look at the form of the correlations. You find that they fall off as a power law, distance to the power minus one half. So the question is, how can we get a coarse-grained description of things and get some understanding which will go beyond the free fermion exact solution? So to get orientation, we can start thinking about ground states of this system. So for a single triangle, as I said, we have six ground states with two spins in one orientation and one in the other orientation. And if we think about how we should arrange triangles which are individually in the ground state on a full triangular lattice, then a useful first step which guides our thoughts is to break the triangular lattice up into three sublattices which I've labeled A, B, and C here. The point is that the triangular lattice is not bipartite, so we can't arrange things with one letter on half the sites and a different letter on the other half of the sites and set up straightforward Néel order. But if we introduce three sublattices, then we can arrange two spins up, one down, on the three sublattice sites and extend that periodically through the system. And we've got, then, an example of a ground state, and indeed an example with long-range order.
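The extensive entropy can also be seen in a brute-force count. Below is my own illustrative sketch (the size and encoding are my choices, not from the lecture): the triangular lattice is represented as an L x L periodic square lattice with one added diagonal, which reproduces the six-neighbor triangular connectivity, and L = 3 is picked so that the three-sublattice ordered states fit:

```python
from itertools import product

# Nearest-neighbour antiferromagnetic Ising model on a periodic triangular
# lattice, encoded as a square lattice with bonds along (1,0), (0,1), (1,1).
L = 3
sites = [(x, y) for x in range(L) for y in range(L)]
bonds = [((x, y), ((x + dx) % L, (y + dy) % L))
         for x, y in sites
         for dx, dy in [(1, 0), (0, 1), (1, 1)]]

def energy(spins):
    # J = +1 antiferromagnet: satisfied (antiparallel) bonds contribute -1.
    return sum(spins[a] * spins[b] for a, b in bonds)

energies = [energy(dict(zip(sites, values)))
            for values in product([+1, -1], repeat=L * L)]
e_min = min(energies)
n_ground = energies.count(e_min)

print(e_min)     # -9: every one of the 18 triangles has exactly one frustrated bond
print(n_ground)  # many more ground states than the 6 ordered ones
```

With L = 3 there are only 2^9 = 512 configurations, so the enumeration is instant, and the count already exceeds the six ordered states because the loose spins described above can be flipped at zero energy cost.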
And you call these ground states ones with root 3 by root 3 order because the size of the unit cell is enlarged if you impose this ordering pattern because, obviously, the repeat distance of the pattern now corresponds to two sites that are on the same sublattice. So a picture of one of these ordered states is shown here. And the question is to understand why the system at low temperatures doesn't simply settle into one of these ordered states. And, obviously, in connection with that, we'd like to understand how this macroscopic ground state entropy arises. And the first point is that there are spins in the system which feel no net exchange field. So if we take this spin, for example, it has equal numbers of neighbors that are red or blue. And so it has no preference between the two orientations. And so we can flip it at zero energy cost and remain within ground states. And actually, if we start over here, then we can flip, I think it's any of these red spins, or at least half of the red spins. And if we flip red spins which are more than a couple of lattice sites apart, we can do that independently. And so that shows you that you do at least have extensive entropy. It's not enough to show you that long-range order is lost, because if we just flip these loose spins independently, then we're leaving some kind of background behind which remembers which of these six ordered states we started from. So to understand why there's no long range order, we should go a bit further. And something else we can consider is what happens if we try to set up a domain wall between two of these six ordered phases. So of course, if you're in a ferromagnetically ordered phase and you introduce a domain wall between regions with opposite magnetization, then that costs a finite energy per unit area.
But in the triangular lattice Ising antiferromagnet, you can fit together two different domains with a domain wall, which is shown by this green line between them. And you can do it at no cost in energy. So hopefully you can see that these must be two different domains, because we have majority red spins in the top half and majority blue spins in the bottom half. And to convince yourself that the domain wall costs no energy, you want to check that the triangles either side of the domain wall satisfy this rule of two spins with one orientation, one spin with the opposite orientation, and therefore they're legitimate ground state triangles. OK, so those are some of the basic phenomena at the level of cartoons. And the issue now is how to get a convenient long wavelength description of things. And the point is that if we have to think explicitly about individual spin configurations, then we do have a perfectly clear rule for what it means for the system to be in a ground state. It is that in every triangle, we have two spins with one orientation, one with the other. But it's very cumbersome, and in particular, is not the kind of thing that you could imagine promoting to a long wavelength description. And so the point of the main part of this lecture is to tell you in quite a bit of detail how this long-distance description works. And it involves mapping from the spin configurations to something called a height model. And it was worked out by Blöte and Hilhorst in the early 80s. The basic point is that if we take one of these ground state configurations, we can use it to assign heights to the lattice sites. And the point about having a ground state configuration is that we can construct a rule for assigning the heights, which will give us a single valued set of heights when we run through the whole lattice. And the rule concerns the height difference when you go from one site to the next. And we also need some conventions.
And so let's say that when I take the upward pointing triangles in the full triangular lattice, I'll choose to run around them in a clockwise direction. And when I go along a nearest-neighbor bond between two sites, if the energy is minimum, meaning the spins are anti-parallel, I'll increase my height variable by 1. And if, on the other hand, the spins are parallel, I'll decrease it by 2. So if we start in the bottom right and go clockwise, we go to an opposite spin and increase the height by 1 unit, then to an opposite spin again, taking the height up to 2 units. But then we close the triangle going between two parallel spins. And so we drop down two units. And clearly, this is all fixed up so that around every ground state triangle, we have an increment of 2 and a decrease of 2 also, which is what we need in order to have a single valued height. And the reason this is useful is that it's the right language to talk about things at long distances. And before I get into that, let's just think about whether this is a one-to-one mapping. And actually, it's not exactly one-to-one. So we're only talking about height differences here. So to actually assign height values, we need to take some reference site and assign a value to the height at that site. But once we've done that, any ground state spin configuration will give us a single valued height field over all of the lattice. And if we have a single valued height field over the whole lattice, it'll tell us which spins are parallel and which are anti-parallel. Well, sorry, this single valued height field had better be one which satisfies these rules of differences being plus one or minus two as we go around triangles and so on. But it won't tell us which of the spins are up and which are down. It'll just tell us which are parallel and which are anti-parallel. So to resolve that ambiguity, we'd need to fix the orientation of a spin at one site.
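The single-valuedness bookkeeping can be checked directly: around a ground-state triangle the increments are +1, +1 and -2, summing to zero. Here is a minimal sketch of that check (my own illustration, not from the lecture):

```python
from itertools import product

# Height rule from the lecture: traversing a bond between antiparallel spins
# raises h by 1; between parallel spins it lowers h by 2.
def height_change(s_from, s_to):
    return 1 if s_from != s_to else -2

# Net height change on a clockwise walk around one up-pointing triangle.
def net_change(spins):
    s1, s2, s3 = spins
    return height_change(s1, s2) + height_change(s2, s3) + height_change(s3, s1)

for spins in product([+1, -1], repeat=3):
    ground = abs(sum(spins)) == 1  # two-one triangles are the ground states
    print(spins, net_change(spins), ground)
    # The net change is 0 exactly for ground-state triangles (+1 +1 -2),
    # and -6 for the two fully aligned triangles, which breaks single-valuedness.
```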
But apart from the need to fix a height and a spin orientation at one site, the mapping is essentially one-to-one. Now, talking about things microscopically, we have heights which are different on every site. But it's really much more convenient to average over the three sites of a triangle and associate heights with the centers of triangles. And the point is just that this smooths things out a bit. And actually, we can undo that smoothing if we need to. So let's start out by thinking about the ordered states which I talked about, and asking, on a scale slightly larger than a single triangle, what the pattern of heights is. So this hexagon is an excerpt from one of the ordered states. And actually, the spin at the center is one of these flippable spins that I mentioned as being responsible for the macroscopic ground state entropy. So with this rule for the heights as we go around the triangle, the average height in the middle of the triangle will be h. And in fact, in this ordered state, all of the triangles that are drawn, and if you continue out to the full lattice, all of the triangles in the full lattice will have the same height. So you see here one way in which the height description is simple: the height field is uniform in these ordered states. So then what we should do is think about what happens when we introduce domain walls between these different ordered states. And when you think about the details of that, which I'll explain in a moment, it turns out that you can associate one of six possible values of the height field with each of these different root 3 by root 3 ordered states. And what this picture shows is the spin orientations on each of the three sublattices in states from this set of six ordered states and also the height fields that we want to associate with each of those ordered states. So the thing that you have to ask yourself about a picture like this is whether it all fits consistently together.
And for instance, we can ask what happens if we imagine a large region of the ordered state which I've assigned a height 0 to, meeting at a domain wall with another large region that I've assigned the height 1 to. So if we think about how that goes, then we have a triangle with three sites which are in the state appropriate to a height 0. And we want to join it on to another triangle which is in the state with height 1. And the way that we do it is by having the height 1 state here, so that this site is, again, an A site. And if we think about how patching these things together would work, well, this bond will be the common bond between the two domains, so that we take this one triangle and flip it and share this bond. And the height here is height 0, which you can see if you go back to a picture like this. The height at the center of the triangle is the height at the spin that's the odd one out. So the spin that's the odd one out here is this one. So this spin has height 0. And then if I flip things, so I have h equals 0 here. And h equals 1 here. And h equals minus 1 there. And then the down triangle I go around in the other direction. And the orientation on the A site is this blue orientation here. So going from C to A, I go up in height by one unit. And then I'm going between two parallel spins here. And the point is that the way the heights work out on the different sublattices in these two triangles means that you can fit the two phases together with a domain wall where the height changes by one unit. So there are six ordered states. But what really matters is height differences. And if we cycle through the six states, then we get back to our starting point. So h is a variable with a periodicity of 6, which will be crucial in what comes later. So on the one hand, we can map from spins to heights. But on the other hand, if we want to learn about spin correlations, we need to be able to go back from the height field to the spins.
And the basic point is that the value of a spin at a particular site is a periodic function of the height of the triangle with period 6 because of this point about there being six ground states. So for instance, if you look at the spin on sublattice B, as you go between these different states, then you see there are three states where it's blue and three states where it's red. And so that illustrates that this mapping from heights to spins has period 6. So we could think about representing this function f of h as a Fourier series. And the lowest Fourier component will be a sine or a cosine with argument pi h over 3. And then because we have these three sublattices, the phase that you need to use to get from the function to the spin varies according to which sublattice you're on. So as I've explained, if we have two phases with a domain wall between them, then we have a step in the height field represented by this green line. And an important point is that when we have these steps, they reduce the amount of entropy in the system associated with the flippable spins. So for example, this blue spin here can be flipped in orientation at no energy cost, because three of its neighbors are in one direction and three are in the other direction. But if we go to spins on the domain wall, then they have a majority of their neighbors with the opposite orientation. And so they're no longer free to flip. Yes? Well, yeah, so three of the triangles can be rotated into each other. But this notation here means that keeping the orientation fixed, I'm using the three sublattice labels as I've shown. So yeah. OK, well, thanks for the question. I think it's worth going back a couple of pictures. So what I want to do with those pictures of six triangles around a circle and the sublattice labels is to refer back to this picture of the whole triangular lattice where I've labeled the sites, dividing them into these three sublattices.
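Written out, the lowest-harmonic translation back from heights to spins takes the form below; the particular sublattice phases are one possible convention, chosen for illustration rather than fixed by anything said so far:

```latex
\sigma_\alpha(\mathbf{r}) \;\sim\; \cos\!\left(\frac{\pi\, h(\mathbf{r})}{3} + \theta_\alpha\right),
\qquad
\theta_A = 0, \quad \theta_B = \frac{2\pi}{3}, \quad \theta_C = \frac{4\pi}{3},
```

so that the spin pattern is unchanged under h going to h + 6, matching the six-fold periodicity of the ordered states.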
And then on every site that's labeled as an A sublattice, I'll take the spin orientation from that picture. And so when I talk about flipping a triangle like this, what I mean is that I'm looking at the sublattice site labels and using the corresponding spin orientation. So here I've only drawn upward pointing triangles. But in order to work out what I want to do there, I should take the spin configuration from the upward pointing triangles, read off the three sublattices and apply it. So are there any other questions at that point? OK, so the point I was making was when we have steps in the height field, they reduce the local entropy of that class of configurations because they eliminate some of the spins that were free to flip. And the whole idea of the height model as a coarse-grained description of the system is that we can incorporate that fact in an effective Hamiltonian. And because steps in the height field cost us entropy, this effective Hamiltonian should basically be something that penalizes gradients in the height field. So to coarse-grain, and this is the step that you probably shouldn't expect to be able to nail down in a really precise way, we promote this height field in two ways. Firstly, it was defined at the centers of triangles. And secondly, it took on integer values. So we'll promote it to being something that takes on real values and we'll promote it to being something that's defined in the continuum rather than just at sites of a lattice. And having done that, we have something that we can call an effective Hamiltonian, which will be a weight that we'll give in a Boltzmann factor to different possible configurations of the height field. And then when we do statistical averages, we'll be thinking of a functional integral over configurations of h. But saying all that, it's important to bear in mind that what we're thinking about are states which are all ground states of the triangular lattice magnet.
And so there are no energy scales in the problem at all. And what we're trying to do is capture entropic effects. So it's an effective Hamiltonian in a Boltzmann weight, but there's strictly no temperature anywhere. And at least as it stands, this model has one coupling constant in it, K. And you can see that it's dimensionless because we've got a double integral over space and the gradient squared. And so in the end, to get predictions out, we'll need some way of finding out what that stiffness is. So the most basic thing that you could ask is what about spin correlations according to this coarse-grained description? And so then the idea is that we should use this effective Hamiltonian and we should use the translation between the height field and the spin degrees of freedom. And we average over all configurations of the height field with this weight. And when we calculate the average of the spin-spin correlation function, we're multiplying two of these cosines together. And if we expand them out in terms of complex exponentials, then we'll need to average something like that. And it's the unfortunate kind of calculation where actually you care about factors of 2 and factors of pi in the answer, so I thought it would be worth running through how it goes. OK, so what we want to do is calculate this correlator, and up to some proportionality, it's the average of e to the i pi over 3 times the height difference, the pi over 3 arising from the fact that the ordered ground states are periodic in the height field with period 6. And this average is a functional integral with a weight which is just quadratic in the height field, which is, of course, what's going to enable us to do a calculation. So OK, so it's quadratic, so it's just a question of doing Gaussian integrals. And the key thing is to take Fourier transforms, and to make sure that we get the factors of 2 pi right, let me spell out the definitions that I want to use. So I want to think about a system of size L with periodic boundary conditions.
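Collecting the definitions just described in formulas, the Gaussian height model and the Fourier conventions used in what follows are:

```latex
H_{\mathrm{eff}}[h] \;=\; \frac{K}{2} \int d^2 r \, \big|\nabla h(\mathbf{r})\big|^2 ,
\qquad
h(\mathbf{r}) \;=\; \frac{1}{L^2} \sum_{\mathbf{q}} h_{\mathbf{q}}\, e^{i \mathbf{q}\cdot\mathbf{r}}
\;\;\Longrightarrow\;\;
H_{\mathrm{eff}} \;=\; \frac{K}{2 L^2} \sum_{\mathbf{q}} q^2\, \big|h_{\mathbf{q}}\big|^2 ,
```

with averages taken with the weight e^{-H_eff[h]}; K is dimensionless, and the weight is purely entropic since every configuration is a ground state.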
And I'll talk about a particular Fourier component of the height field. And then I have an inverse Fourier transform. So if I substitute this form into the Hamiltonian and do the integral, then I'll get a Kronecker delta on wave vectors. And what I'll find is that I can write the Hamiltonian as K over 2 and a sum over q. And since I've got two powers of 1 over L squared here, when I do the integral I get a factor of L squared, so I'm still left with one power of 1 over L squared. And then I have q squared and h of q mod squared. So now I've decoupled the different Fourier components. So I can say that the average of h of q, h of minus q, in other words, the average of h of q mod squared, is just the inverse of this coefficient that sets the variance. So that's L squared over K q squared. OK, and then if I go back to the correlator that I want to evaluate, it's a general fact that if I average in the exponential something that has a Gaussian distribution, the result is the exponential of half the average of the square. So what I have is that the correlator is e to the minus one half, the minus coming from squaring the i, times pi over 3 squared times the average of the squared height difference. And then the height difference, if I substitute for the height fields here and expand, then from substituting, I get 1 over L to the fourth. And well, in general, I have a sum over q and q primed, but the only average which survives is the one where they're equal to each other. So I have a sum over q. And then from the exponentials here, 1 minus cos of q dot r. And then this is times the average of h of q mod squared, and then if you put everything together, you get, sorry, there's a factor of 2 missing here, because I'm taking h of 0 minus h of r, mod squared. So I get the 1 either from h of 0 mod squared or h of r mod squared, and I get the cosine from the cross terms. So I have a 2. And then if we convert the sum into an integral, an integral q dq over q squared.
So I end up with dq over q and a factor of 1 over 2 pi squared from converting the sum over q into an integral. And this factor of 2, and a factor of 2 pi from the angular integral, which removes one of those powers of 2 pi. So what I get is 1 over pi K. And then doing this integral obviously gives me a logarithm. And the cut-off at small q comes from where 1 minus the cosine vanishes. So I have 1 over r here. And then the cut-off at large q is 1 over some short distance a. So what I have is log of r over a. And this is all in an exponential from working out the average of e to the i pi over 3 times the height difference. And so when you put it into an exponential with a minus sign, what you get is decay with a power of r. And the power is this pre-factor of the logarithm. So the summary of the calculation is here. The average of the squared height difference increases logarithmically with separation. And that gives you a spin-spin correlator that decays with a power of r. And the power is fixed by this parameter, the only parameter in the height model, the stiffness K. So at that stage, you can say, well, according to the exact solution, the spin correlator falls off like r to the minus 1 half. So that tells us the value that we need to put for the stiffness in the height model. In order to match these things together, we need it to be pi over 9, and then everything fits. Now, of course, if that was all you could do, it would be profoundly disappointing, because you'd feel that you'd just matched the height model to the exact solution in this single way. And you hadn't learned a great deal from going to the coarse-grained description. But the point is, as I'm going to try and convince you, that there's a lot more that you can understand by thinking about the height model, and it's those extra things that make it a worthwhile undertaking. So any questions at that point? Yeah. Yeah, well, what it does in a slightly vague way is take account of small-scale fluctuations.
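For reference, the chain of results in the calculation just completed is:

```latex
\big\langle |h_{\mathbf{q}}|^2 \big\rangle = \frac{L^2}{K q^2},
\qquad
\big\langle \big[\,h(\mathbf{r}) - h(0)\,\big]^2 \big\rangle = \frac{1}{\pi K}\,\ln\frac{r}{a},
\]
\[
\big\langle \sigma(0)\,\sigma(\mathbf{r}) \big\rangle \;\propto\;
\big\langle e^{\,i\frac{\pi}{3}\left[h(\mathbf{r}) - h(0)\right]} \big\rangle
= \exp\!\Big[ -\tfrac{1}{2}\big(\tfrac{\pi}{3}\big)^2 \big\langle \big[h(\mathbf{r}) - h(0)\big]^2 \big\rangle \Big]
= \Big(\frac{a}{r}\Big)^{\pi / 18 K},
```

and the exponent pi over 18K equals one half precisely when K = pi over 9, which is the matching to the exact solution.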
So the simplest of the small-scale fluctuations are these loose spins, which have three neighbors of one type, three of the other type, and can fluctuate. And so those are the things that are pinned by domain walls. But I mean, more generally, you could say, well, maybe I can have an excitation which I would think of as not completely point-like, but a small domain of one height inside a bigger region of a different height. And excitations like that are actually also killed in a region that has a longer domain wall passing through it. So the aim is that you're trying to take account of all of these local fluctuations when you write down an effective Hamiltonian that penalizes height-field gradients. But you're not being very precise about how you do it. And if this is any good, then we should be able to show that on the longest scales, this description is correct. And if it is, then for sure on shorter scales, there should be some corrections. And as I'll explain, you can do something to understand what those corrections are like and how they go away as you go to longer scales. Any other points? OK, so actually this is exactly the next point that I wanted to make. So we wrote down this effective Hamiltonian on the basis of the most simple argument. But now if you think of the spirit of, say, Landau theory, you should ask yourself what other terms you might write down that could be important and that are suggested by the physics of the problem, or at the very least are allowed by the symmetries of the problem? And probably the most obvious thing to worry about is the fact that we started off with these heights that I said had to take integer values. And then we quietly dropped that and thought of the height as a real valued field. 
So if you were trying to go back microscopically, at least towards the notion of having heights that at least preferentially were integer valued, then something that you could do is include a second term in your effective Hamiltonian, which is a cosine of the height field, with the 2 pi here so that it's integer heights that are favored, and with some coupling constant g. And so then the idea is, if we study this extended model, is the extra term important or not? And you can actually do a very simple RG calculation to find out whether it is important. And I thought I might as well outline that as well on the board. OK, so we want to think about the fluctuations of the height field in reciprocal space. And to begin with, we'd think about fluctuations with q in a range from 0 up to some maximum, which would be one over the short distance cutoff that I was calling a before, and I'll call this maximum capital lambda. And in an RG procedure, as usual, we'd have two steps. We will integrate out the short distance fluctuations. So in this context, that's going to mean that we'll remove modes with their wave vectors in a range between the cutoff and the cutoff divided by some scale factor b. And then we'll rescale things so that we get back to a Hamiltonian of the original form. And the question will be how the coupling constants in the Hamiltonian change. And there's a simple point to make now, which is that I've written the coefficient of this cosine term as g, which is dimensionless, and then this factor of lambda squared, which has dimensions of wave vector squared, combines with the integral to give me something that's dimensionless. So if we look at the cosine, then we can obviously write it as a sum of two complex exponentials. So I have e to the 2 pi i h of r. And I can separate that. I can, first of all, substitute for h of r in terms of a sum over Fourier components. So I have e to the 2 pi i. And then I have one over L squared and a sum over q.
And first of all, I can sum over the wave vectors that I'm going to keep. And so this bit we will keep. And then I have the rest of the sum. But since I'm writing this as two factors, I have e to the 2 pi i, one on L squared, and a sum over q from lambda on b up to lambda, and then the same thing in the exponential. And so this is the thing that I want to eliminate. And the process of eliminating it is exactly the same kind of calculation that we had to do when we were working out the correlation function. Because, again, we've got the exponential of something that has a Gaussian distribution. And we know the result will be the exponential of the average of the square with a factor of 1 half. So what this will give us is e to the minus 1 half s, where s is the average of the square of this; the i squared I've taken account of with this minus sign. So I've got 2 pi over L squared, squared, and a sum over q of the average of h of q squared. So the rest of the calculation is pretty much what I was doing before. So what this gives me is d2q from the lower cutoff to the upper cutoff, and one on K q squared. So again, this gives me a logarithm. And what I have is 2 pi over K times the log of b. So I started out with g lambda squared. And now I should re-express the lambda squared in terms of the new cutoff. So I have g times lambda over b, squared, and a factor of b squared to compensate. And then, from the calculation I've just done, I've eliminated these fluctuations. And they've given me this factor in the exponential, with a half there. So taking account of the minus sign, I have minus pi over K. So what have we learned from that? Well, if K is sufficiently small, then the whole power of b appearing here, 2 minus pi over K, will be negative. And what that tells me is that as I go to longer and longer distances, as I keep iterating this RG procedure, the renormalized value of g will get smaller and smaller.
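The conclusion of this calculation is the one-step recursion g → g b^(2 − π/K). A minimal numerical sketch of iterating it (illustrative only; the function name is my own):

```python
import math

def renormalized_g(g0, K, b, steps):
    """Iterate the one-step RG recursion g -> g * b**(2 - pi/K)."""
    g = g0
    for _ in range(steps):
        g *= b ** (2 - math.pi / K)
    return g

# K = pi/9, the value matched to the spin-1/2 model: the exponent is
# 2 - pi/K = -7, so the locking term shrinks rapidly -- it is irrelevant.
print(renormalized_g(0.1, math.pi / 9, b=2.0, steps=5))

# K = pi, well above the critical stiffness pi/2: the exponent is +1,
# so the locking term grows under iteration -- it is relevant.
print(renormalized_g(0.1, math.pi, b=2.0, steps=5))
```

The sign of the exponent 2 − π/K is all that matters; the critical point at K = π/2 is where it vanishes.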
And the fact that I closed my eyes to the height field being integer at the beginning will not be important at long distances. On the other hand, if the stiffness had been much larger, then this could be a perturbation that grows, and it could be relevant. So in other words, the RG flow is to g equals 0 for small K, and to large values of g for large values of K. And the critical point — I was worried that I'd got a factor of 2 wrong somewhere; no, sorry, this is right — the critical point is when this exponent vanishes. And that happens when K is pi over 2. So what we decided from matching the algebraic decay of spin correlations to the results from the height model was that this stiffness K should be pi over 9. And since that's much less than pi over 2, we know that the discreteness of the height field is irrelevant in this physical example of the model. So I focused on that because it really is the most significant perturbation that you could add to the Hamiltonian. But of course, there are lots of others. And very much in the spirit of Landau theory, you would say, for example, what about higher gradients of the height field? And for sure, those would be included in a careful microscopic description. But these are always irrelevant under RG. And you could also have higher Fourier components of the height field. You can check through the calculation I've just done: if you have 4 pi or 6 pi instead of 2 pi here, then those terms wind up being more strongly irrelevant. OK, any questions on this? Yes. So the question was, in the lattice model, h is defined only mod 6 — so how does that come in? Well, I didn't intend to say exactly that h is defined mod 6. h is actually an unbounded variable. What I wanted to say was that the local properties of the spin state that you get from the height field are periodic with period 6.
So that's an important point: the Gaussian integrals that I was talking about were over a non-compact, unbounded height field. And the fact that we have a periodicity of 6 just shows up in the translation from the height field back to the spin variables. Any other questions? Yes, and I'll come back to that kind of thing in more detail in a little while. OK, I guess we could have a five-minute break at this point, and then I'll continue with the same sort of arguments. Yeah. OK, so if you applied this rule to the picture on the right, then you'd be able to work out heights at all the sites in the lattice and at the centers of all the triangles in the lattice. And in particular, you'd get a height difference between the domains in the top and bottom half of the picture. And it's doing that kind of work that led to this picture here, which associates heights with the ordered states. But I didn't really mean to imply that you could deduce this picture by pure thought without going back to bigger pictures like this one. But provided you go back to the bigger pictures, then the fundamental rule that I explained is sufficient. OK, so far we've matched the behavior of spin correlations to what's known from the exact solution and got a value for the stiffness. And using that value for the stiffness, we were able to understand how it could be that the discreteness of the height field is unimportant, because it's irrelevant under RG. And what we'd like to do now is go on and use this basic picture to learn new things about the original antiferromagnet. And something that it's very interesting to think about is what happens if we don't remain exactly within the ground state manifold, and instead we allow local excitations of the magnet which, instead of having two spins in one state and one in the other, have all three spins in a triangle in the same state.
And then you see that, applying this rule as we go around the triangle, the height field in fact decreases by six. Or if we were going around an upward-pointing triangle, it would increase by six. So basically, what we're saying is that in the presence of excitations out of the ground state, this height field is no longer single-valued. Instead, you have these sort of vortices. And as you wind around one of the excitations, the height field increases by this periodicity, six. So that's something that we need to work into the height model description. And I'll do that in a moment. But before I get there, it's interesting to think in some detail about the process of generating these excitations. So suppose we start from a system which is in one of its ground states. And then we take a spin, say this one that's colored red in the top picture, and flip it, so that we have two triangles which are excited in the way that I've been describing. So if we follow height differences on some long contour that encircles both of these triangles, then we'll find that the height field is single-valued, because we haven't disturbed the spin configuration that we had in a ground state. But if we wind around a single one of these triangles, then we'll be sensitive to the vortex in the height field. And what's more, we can separate these two defect triangles by flipping further spins without costing us any extra energy. I mean, it took some energy to create this excitation and to have two triangles that are not in the ground state. But we can find spins which we can flip which move these excitations apart. So I've got a sequence of spin flips here where, for example, this spin that I'm picking out with the laser pointer now has three neighbors of one orientation and three of the opposite orientation. So it can be flipped at zero energy cost. But when I flip it, I move one of these defect triangles one step to the right.
And then I can find another spin that has three neighbors in each orientation and flip it, and move the defect triangle further. So in this way, we have a classical example of fractionalization, in the sense that originally we create an excitation — the excitation is this pair of triangles — but, still moving within the ground states, I can separate that, in fact, into two independent excitations. Yes. Yes. Well, it may be four vortices if we think about what happens when we complete the system around the edge. I mean, I would like there to be an even number of vortices. I'm not sure how many options we have for completing the bit of the lattice that I haven't drawn. We have to fill in two sites. Let's see. So if we make these two sites, we have to make at least one of these sites red. Yeah. OK. I'm not sure I'll think it through now. I think you will find that however you complete the picture, you'll get a situation where you generate an even number of vortices, so probably four vortices. Sorry. Are we not allowed to do that? Well, all right. I would like to say there are periodic boundary conditions, and then with the boundaries we make up complete triangles. I mean, I'm happy to work out the details afterwards, but probably I'll get myself in a mess if I try and do it in real time. Yes, probably, but I'm not sure I'm envisaging exactly what was said in those lectures. But we could talk about it again afterwards. Oh, yeah. Yes, so the precise sense in which the sign changes is that I want to have a convention that I go around the upward-pointing triangles in the clockwise direction. And then the downward-pointing triangles I am forced to go around in the anti-clockwise direction. Otherwise, there's a conflict on the common bonds. I think I ran out of symbols to mark the vertices. I mean, it was just a convenient symbol. Yes, it could be 1 plus, 1 minus. Yes. OK.
So now, if we think about excited states in the system, but we're imagining low-temperature states, so that the density of these defect triangles associated with the vortices is low, then we can ask how the presence of these defects affects the entropy of the background of spins in the triangles that are in their ground states. And the answer is that the presence of these vortices has a significant effect on the entropy of the rest of the lattice. And we should be able to understand what that effect is using this same effective height model that we've talked about already. And so the point is, suppose we flip a spin and make a pair of vortices, and then separate the two vortices by a large distance in the way I was explaining on the previous slide. And we just focus on one of those vortices, and we ask what the gradient of the height field is like as we go around the vortex. Well, the height field has to change in value by six units as we go around the vortex. So the gradient of the height field will be directed around the circumference of a circle around the vortex. And the magnitude of this gradient, if we distribute it uniformly around the circumference, will be six divided by the size of the circumference, 2 pi r. So we can work out the entropy cost of introducing a vortex into a certain sized region using this height model. So the entropy cost, the argument of the exponential, is just the integral of K over 2 times the square of the height field gradient: a 2 pi from doing the angular integral, and 6 on 2 pi r, squared, as the square of the height field gradient. And then we should integrate this over radius. And we have some lower cutoff, the lattice spacing, and some upper cutoff, which is maybe the distance to the other vortex that we generated. And this entropy cost, if we work it out, is 9K over pi times the log of the ratio of the two cutoffs.
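Written out, the board calculation just described is: with the gradient of magnitude 6/(2πr) spread uniformly around a circle of radius r,

```latex
\Delta S_{\mathrm{cost}}
 = \frac{K}{2}\int_{a}^{R} \left(\frac{6}{2\pi r}\right)^{\!2} 2\pi r\,dr
 = \frac{9K}{\pi}\int_{a}^{R}\frac{dr}{r}
 = \frac{9K}{\pi}\,\ln\!\left(\frac{R}{a}\right),
```

with a the lattice spacing as the lower cutoff and R the upper cutoff, say the distance to the other vortex.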
So now we can ask — and this is maybe in the spirit of the lectures on the 2D XY model and vortices in two dimensions — if we make a pair of vortices in the way I was showing here, whether they'll get separated or not. And as we separate them, we pay this entropic penalty in terms of the fluctuations of the background. But of course, there are more places to put a vortex when it's been separated to a certain distance. So there's an entropy gain, which is just the log of the number of places where you can put it. So the number of places where you can put it is the separation divided by the lattice spacing, all squared. And so we take a log of that. And this factor here, from doing the integral, was supposed to be a log of R over a. So this is the gain, and this is the cost. And if the cost is bigger than the gain, then the vortices will be bound together. And if the gain is bigger than the cost, then they can separate. And of course, the distinction between the two situations is exactly the KT transition. And so the cost is small if K is small, and the gain is fixed. And so there's some critical value of K separating these two regimes. Yeah, sure. So the question is, why are we talking here just about entropies and not energy? So the point is that it's basically the same physics as we talk about in the Kosterlitz-Thouless transition, but here it's driven entirely by entropy. So the point is that, apart from imagining we've introduced a pair of vortices in the system, we're otherwise considering it to be at zero temperature. And so we're really focusing on the number of different ground states available to the system. So this entropy cost is basically saying: if we consider all the states of the system which are ground states apart from the fact that we've put a vortex-antivortex pair at these specified locations, how does that number of ground states compare with the number when the vortex-antivortex pair has a different separation?
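The trade-off just described, in the same notation:

```latex
\Delta S_{\mathrm{gain}} = \ln\!\left(\frac{R}{a}\right)^{\!2} = 2\ln\!\left(\frac{R}{a}\right),
\qquad
\Delta S_{\mathrm{cost}} = \frac{9K}{\pi}\,\ln\!\left(\frac{R}{a}\right),
```

so the two balance when 9K/π = 2, i.e. at a critical stiffness of 2π/9: vortices unbind for smaller K and are bound for larger K.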
And what we learned from this calculation is that the number of ground states available to the system goes down — the entropy cost goes up — as we separate the vortex-antivortex pair. And so that's one type of entropy. And the other type of entropy comes just from the fact that there are more places to put the vortex when it's got a large separation. And we're trading those two types of entropy off against each other without having temperature in the problem at all. OK. So you could ask where the critical stiffness for the unbinding is located relative to the value of K that we derived for the spin-1/2 Ising triangular lattice antiferromagnet. And what you find, in fact, is that vortices in that case are unbound. And so if you introduce vortices into the system at all by going to non-zero temperature, so that these excited triangles are present, then as soon as they're present, they're unbound. And they give a finite value for the correlation length in the system. So you can understand that in this model at non-zero temperature, there's a finite correlation length whose value is basically determined by the density of these excited triangles, which in turn is determined by the Boltzmann factor for generating them, and which therefore diverges like e to the J over T as you go to low temperatures. So the final point that I wanted to talk about in connection with this model is that there's a rather nice way of tuning the stiffness so that you can access not just the behavior that we can get from the exactly solvable conventional Ising model, but also behavior corresponding to other values of the stiffness. And the thing that you have to do is stick with the Ising model, but go from the two-state model, which you could describe as spin-1/2, to a model with spin S, but still with Ising interactions.
So to connect with all the things that I've been talking about, we can think about ground states for a single triangle in this spin-S model, and we can minimize the energy of one of the interactions by taking one spin to have its maximum possible value of plus S and the other spin to be oppositely aligned, with z-component minus S. But then the exchange field on the third site vanishes, and so it can fluctuate between all of its 2S plus 1 values. And that means that the entropy in the ground states is larger as we increase the spin. And it means that the entropy penalty for introducing domain walls between these different locally ordered regions is also correspondingly higher. So as we increase the spin, we can increase the entropy penalty of gradients in the height field. And this parameter in the model, K, the stiffness, is precisely what tells us how much entropy penalty there is associated with gradients in the height field. So by tuning S, we can vary K across the whole range. And what you find from Monte Carlo simulations on the Ising model is that you can go through the different critical values. So the results of the calculation were that we had, firstly, for the spin-1/2 model, K equals pi over 9. And then we worked out that if K is bigger than pi over 2, these cosine terms are relevant: that means that the height field is locked to a particular integer value. And then we found that the vortices become bound if K is bigger than 2 pi over 9. So on this side, the vortices are bound, and on this side, they're unbound. So then in the height model, you have K as a continuous variable, and in the Ising model, you have discrete values of spin. And so what you can try and do is find, for each value of spin, where you're located on this range of K's. And what's found is that spin 5/2 is here, and spin 3/2 is here somewhere, and spin 3 is on the other side of this transition, and so on.
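Putting the two critical stiffnesses together — locking at K = π/2 from the cosine-term calculation, and vortex binding at 2π/9 from the entropy balance (where the 9K/π cost matches the 2 log gain) — a minimal sketch of the resulting classification (the helper function is my own, for illustration):

```python
import math

def height_model_regime(K):
    """Classify the 2D height-model stiffness K using the two critical
    values from the lecture: locking (cosine term relevant) at K = pi/2,
    and vortex binding at K = 2*pi/9."""
    if K > math.pi / 2:
        return "locked: height field pinned to an integer, ordered state"
    if K > 2 * math.pi / 9:
        return "rough phase, vortices bound"
    return "rough phase, vortices unbound"

# The spin-1/2 triangular Ising antiferromagnet has K = pi/9 < 2*pi/9,
# so its vortices are unbound as soon as they are present.
print(height_model_regime(math.pi / 9))
```

Tuning S in the Ising model moves K through these regimes, which is what the Monte Carlo simulations probe.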
OK, so I think there's time to start talking about a new topic. And I've gone into this example of the triangular lattice antiferromagnet in quite a lot of detail because I think it shows in a useful way two of the main ideas which I want to introduce in these lectures. The first one is the idea of an emergent degree of freedom, which is the height field. And the second one is the idea of classically fractionalized excitations in the way that I've talked about with excited triangles. And if those ideas only applied in this specific context, then they wouldn't be so important. And this example wouldn't be so important. But in point of fact, they hold equally in other contexts both in two dimensions and in three dimensions, and that's what I want to go on to. Yeah, yes, yes. No, no. So apart from spin half, what you have to do is Monte Carlo simulations. And it's the asymptotic long distance form of the correlation function that you're trying to probe. And actually, one of the points of this paper by Zeng and Henley was that it is in fact more effective to analyze the Monte Carlo simulations using the language of the height model rather than the language of the spin degrees of freedom. OK, so the general direction that I'm going in is to try and show you that these ideas are relevant more widely. Oh, I'm sorry. There's one more point that I wanted to make in connection with the triangular lattice model. And that's to do with what happens when you put the system on a torus. So on a torus, we have periodic boundary conditions on the spin configuration. And if you hadn't thought very hard, you might just have assumed that that would also mean that you had periodic boundary conditions on the height field. But in point of fact, that's not the case. And to try and make it plausible that that's not the case, I've drawn a particular example of ground state here. 
And if we follow the rule that I was talking about around these triangles on the bottom, you can see that the height field changes by two units as we go from one site to the next along this bottom row. And you can convince yourself that it changes by two units along any of these horizontal lines in the lattice. So on the one hand, we've got a spin configuration that we could clearly use to satisfy periodic boundary conditions in the horizontal direction if we wanted to. But on the other hand, the height field increases steadily in the horizontal direction. And so clearly, the height field itself can't satisfy periodic boundary conditions. But the mapping that I talked about from heights back to spins, and in particular from these ordered states back to spins, involved the height field modulo 6. So the final result is that when the spin configuration satisfies periodic boundary conditions, the height field satisfies periodic boundary conditions in both directions modulo an integer, which is a multiple of 6. So that means that the configuration space for ground states of the model splits into different sectors according to the integers that fix the boundary conditions. So these integers, which are called winding numbers, label the different sectors. And if we just rearrange spins locally within the set of ground state configurations, then it's impossible to change these winding numbers. To get from a configuration with one set of winding numbers to another one, what you have to do is create a pair of these vortex excitations and then take a vortex around the system. And because of this winding of the height field as you go around the vortex, it means that somewhere along a line joining a vortex-antivortex pair, there's a step in the height field of height 6.
And if you make a pair of vortices, move one of them around the torus, and bring them back together and annihilate them again, you get back to a new ground state, but it's in a sector where one of these integers has changed by an allowed amount. So that's actually one of the additional generic features of these systems with emergent degrees of freedom that I want to stress. You have the emergent degrees of freedom. You have fractionalized excitations. And you have different sectors of the space of configurations. So that gives you a summary. And now, if we want to think about other systems that allow a similar sort of description, then something we can do is move to what are called dimer models. So there's actually a way of thinking about the triangular lattice antiferromagnet which makes a connection to dimers very directly. And I'll explain that in a moment. But what I mean more generally is that a dimer is some object which you can imagine placing on the nearest-neighbor links of a lattice. And a lot of the time we want to consider close-packed dimer configurations, which are ones where you've placed the dimers on the links in such a way that every site is touched by exactly one dimer. That's to say, there are no sites with no dimer and no sites with more than one dimer. And the relationship between the triangular lattice antiferromagnet and a dimer model is like this. So we start with spins on a triangular lattice. And then for the dimers, we think about a honeycomb lattice, a hexagonal lattice. And we can map between a spin configuration and a dimer configuration by looking at nearest-neighbor pairs of spins that are parallel. If a pair of spins is parallel, we put a dimer on the bond of the hexagonal lattice that crosses the corresponding bond of the triangular lattice. And if the spins are anti-parallel, we don't put any dimer. So then you see that the sites of the hexagonal lattice are the middles of the triangles.
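The mapping can be checked on a single triangle: any configuration that is not all-aligned — that is, any ground-state triangle, with two spins one way and one the other — has exactly one parallel bond, and hence exactly one dimer crossing onto the dual hexagonal lattice. A quick sketch:

```python
from itertools import product

def parallel_bonds(s1, s2, s3):
    """Number of parallel nearest-neighbour pairs on one triangle."""
    return sum(a == b for a, b in [(s1, s2), (s2, s3), (s3, s1)])

for spins in product([+1, -1], repeat=3):
    n = parallel_bonds(*spins)
    label = "defect (all aligned)" if n == 3 else "ground state"
    # Ground-state triangles carry exactly one dimer; the all-aligned
    # defect triangles of the previous sections would carry three.
    print(spins, n, label)
```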
And in a ground state of the antiferromagnet, each triangle should have one bond with a pair of parallel spins, which means that each site of the hexagonal lattice will have one dimer going from it to one of the neighboring sites. And what's more, because every triangle must have one frustrated bond, every site of the hexagonal lattice must have exactly one dimer touching it. So in that way, you have a mapping between the Ising problem that I've been talking about and configurations of dimers on the hexagonal lattice. But once you have this idea of dimers on a lattice, then you're not tied to the hexagonal lattice. And we can go, for example, back to the square lattice. So here, for example, I have an allowed configuration of dimers on the square lattice: every site has one dimer touching it; none are empty, and none have more than one. And in principle, you could imagine generalizations where, instead of having one dimer touching every site, you might have n dimers touching each site. But if you make that generalization, then the requirement is that every site in the system obeys the generalized rule with, as it might be, two dimers touching it. OK, so again, we have the question of whether we can construct some kind of coarse-grained description of these configurations. And we can. And in two dimensions, the way we do it, not surprisingly, turns out to be the same as the height model that I've been talking about, regardless of whether we start from the hexagonal lattice or use the square lattice or some other bipartite lattice. But in the dimer language, it's perhaps easier to see how things carry over to three dimensions. So one way of thinking about the emergent description is not necessarily to go directly to a height model, but instead to think of some kind of flux which flows along the bonds of this lattice. And here it's important that the lattice we're talking about is bipartite.
We can divide the sites into two sets in such a way that the nearest neighbors of the sites in one set are members of the opposite set. And we can assign a direction to the links of the lattice: say, from the A sublattice to the B sublattice. And then we'll have a rule which gives us the strength of the flux along a link: if each site has z nearest neighbors and n dimers touching it, the flux is z minus n on the links that have a dimer, and minus n on the empty links. So if we stick with the version of things where the number of dimers touching each site is just 1, which is what's drawn here, then on the square lattice we have a flux of plus 3, in the direction from the A sublattice to the B sublattice, on links where dimers are present, and a flux of minus 1 on the other links. And this picture on the right is supposed to be a translation of the picture on the left invoking that rule. So we have a flux of 3, represented by these three arrows, corresponding to the presence of this dimer. And it flows from this site, which say is on the A sublattice, to that site, which is on the B sublattice. And on this link, because there's no dimer there, we have a flux which is smaller in magnitude — one unit rather than three — and it's directed in the opposite direction. So the point about that rule is that it's set up so that if we have a dimer configuration with one dimer touching every site, then the amount of flux into each site will match the amount of flux flowing out of that site. And unless I've made mistakes, if you look at the vertices of the lattice in the picture on the right, you should be able to see that where there are three units flowing in, there are three times one unit flowing out, at every vertex. Okay, so that gives you a translation from dimer configurations to this idea of fluxes. And like the idea of heights, the point of the fluxes is that they're something that you can easily think about coarse-graining.
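As a concrete check of this rule, here is a sketch on a small periodic square lattice with a columnar dimer covering (my own choice of configuration; any close-packed covering would do). The divergence at every site should vanish: 3 − 1 − 1 − 1 = 0.

```python
L = 4  # periodic L x L square lattice (L even, so the sublattices match up)

# Columnar close-packed dimer covering: each site (x, y) with even x is
# paired with its right neighbour (x+1, y); every site touches one dimer.
def has_dimer(x, y, dx, dy):
    """True if the link from (x, y) in direction (dx, dy) carries a dimer."""
    if dy != 0:
        return False          # all dimers are horizontal in this covering
    if dx == 1:
        return x % 2 == 0     # dimer on (x, y)-(x+1, y) for even x
    return (x - 1) % 2 == 0   # link to the left neighbour

def divergence(x, y):
    """Net outgoing flux at (x, y): links carry z - n = 3 units (dimer) or
    -n = -1 unit (empty), always counted positive from A to B sublattice."""
    on_A = (x + y) % 2 == 0
    div = 0
    for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
        f = 3 if has_dimer(x, y, dx, dy) else -1
        div += f if on_A else -f
    return div

# The flux field is divergence-free at every site.
print(all(divergence(x, y) == 0 for x in range(L) for y in range(L)))
```

This divergence-free property is exactly what makes the flux field a natural object to coarse-grain.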
So with the heights, the important thing was that as long as we remained within ground states, the height field was single-valued, and once you've got something simple and single-valued, you can think about coarse-graining it. Here, the point is that because the amount of flux flowing in and out of every vertex is the same, this flux field is divergence-free, and again, that gives us a way to think about coarse-graining it. So the other ingredient that we want to think about analogous to what we discussed in the case of the triangular lattice anti-ferromagnet is how entropy arises. So entropy will mean different ways of arranging these dimers consistent with the overall rules. And for instance, I can identify closed loops on the lattice where as we follow the loop, we alternately go along a bond with a dimer, go along a bond with no dimer, along one with a dimer and so on. And on a closed loop like that, you can reorientate the dimers, simply shuffle them along the loop one step without changing anything in the rest of the system and without breaking this rule for the dimer model. So in other words, I can rearrange this pair of dimers. So instead of the pair being vertical on this plaquette, they're horizontal on the plaquette, and that's also an allowed configuration. So when we think about the entropy of some coarse-grained version of this B field, we need to think about the entropy associated with local rearrangements in different dimer configurations. So for example, we can compare what happens with this dimer configuration on the left with the dimer configuration on the right. And because we can rearrange independently the dimers on each of these plaquettes, having them either as a vertical pair or a horizontal pair, there's some high local entropy for this left-hand configuration. But on the other hand, for the right-hand configuration, I could only make a global rearrangement of the dimers. 
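The local move just described — shuffling dimers one step around a plaquette, exchanging a vertical pair for a horizontal pair — might be sketched like this (the set-of-frozensets representation is my own):

```python
def flip_plaquette(dimers, x, y):
    """If the square plaquette with lower-left corner (x, y) carries two
    parallel dimers, replace them with the perpendicular pair; otherwise
    return the configuration unchanged.  `dimers` is a set of frozensets
    of site pairs, each site an (x, y) tuple."""
    a, b, c, d = (x, y), (x + 1, y), (x + 1, y + 1), (x, y + 1)
    horizontal = {frozenset([a, b]), frozenset([d, c])}
    vertical = {frozenset([a, d]), frozenset([b, c])}
    if horizontal <= dimers:
        return (dimers - horizontal) | vertical
    if vertical <= dimers:
        return (dimers - vertical) | horizontal
    return dimers

# Two horizontal dimers on one plaquette flip to two vertical ones;
# every site is still touched by exactly one dimer afterwards.
config = {frozenset([(0, 0), (1, 0)]), frozenset([(0, 1), (1, 1)])}
flipped = flip_plaquette(config, 0, 0)
print(flipped == {frozenset([(0, 0), (0, 1)]), frozenset([(1, 0), (1, 1)])})
```

Configurations with many flippable plaquettes are exactly the high-entropy, low-flux ones discussed above.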
I mean, global on the scale of this picture: I'd have to follow this green path somewhere out of the picture in order to form a closed path. So some of these configurations cost entropy, just as steps in the height configuration cost entropy. And then the other point, if you want to relate this back to fluxes, is to realize that the green lines that I'm drawing correspond roughly to flux loops. So for example, if we go from this picture on the left to the one on the right, we can see that as you go around the green loop, the flux on all the links is in the same direction. It's true that it doesn't exactly correspond to a flux loop, because the strength of the flux here is stronger than the strength here, but more or less it does. So what that means is that in the left-hand configuration, we have a lot of short closed flux loops, whereas in the right-hand configuration, we have some long extended flux loops. And then if we do some coarse-graining, then when we average over a lot of closed flux loops, we end up with a net result of zero. Whereas if we try and average when we have these extended flux loops all carrying flux in the same direction, the result is still something with large flux. So the picture is: large flux has low entropy, and low flux has high entropy, and we want to write down an effective theory that encapsulates that. And because we're talking about fluxes with zero divergence, we should think of this flux B, for instance, as coming from the curl of a vector potential. Yes? No, no, I didn't want to connect the left to the right. What I wanted to do was contrast what happens if I coarse-grain on the left with what happens if I coarse-grain on the right. So the picture is, if I coarse-grain on the left, I mean, in this top loop I've got flux going, probably, left to right along here, but here on this loop I've got flux at the top going right to left. So if I coarse-grain, I average those two and end up with zero.
So the picture is that I coarse-grain on the left and I get something small; I coarse-grain on the right and not much changes. Okay, so we want to write down an effective theory that encapsulates that. And to build in the fact that B is divergence-less, we could write it in terms of a vector potential, or we could at least remember that it's divergence-less but write the effective theory in terms of B itself. And if we do that, what we want to do is penalize large values of B, because those were precisely the configurations with high flux and low entropy. So what we should do is write down an effective theory which involves an entropic penalty for large values of B. And in two dimensions, a particularly simple way of arranging things is to say that the vector potential is in the direction perpendicular to the plane that we're considering. So it's in the direction fixed by the unit vector z, but its magnitude is given by something which turns out to be the height field that we were talking about before: because if we use this form for the vector potential and we write down mod B squared, in two dimensions that translates to the gradient squared of the height field. And so in general, with these dimer models, we wind up with exactly the same two-dimensional effective theory that I introduced for the triangular lattice antiferromagnet, and the only thing that can change, according to the lattice that we study the dimer model on, is the value of the stiffness. And ultimately, integer values of the height field will be preferred. And so, according to the lattice, it may turn out that this locking term is irrelevant, or otherwise that it's relevant.
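The step from the vector potential back to the height model is a one-line computation: taking $\mathbf{A} = h(\mathbf{r})\,\hat{z}$ with $h$ independent of $z$,

```latex
\mathbf{B} = \nabla \times \mathbf{A}
  = \bigl(\partial_y h,\; -\partial_x h,\; 0\bigr),
\qquad
|\mathbf{B}|^{2} = (\partial_x h)^{2} + (\partial_y h)^{2} = |\nabla h|^{2},
```

so penalizing $|\mathbf{B}|^{2}$ is the same as penalizing $|\nabla h|^{2}$, which is the stiffness term of the height model.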
And if it's relevant, then it selects some kind of crystalline arrangement of the dimers; but if it's irrelevant, then we go to the sort of disordered states with power-law correlations that we had for the triangular lattice antiferromagnet. And when we go to three dimensions, it turns out that there are no small perturbations that you can write down to this effective theory that are relevant in the RG sense. And so you really do have a stable description of the long distance physics, with just this stiffness as a microscopic parameter that comes from the theory that you started with originally. Okay, any more questions? Okay, so I think I'd like to stop there, and really the message has been the summary that I had at that point. And we'll continue tomorrow. Thank you.