OK, thanks. So it feels a great honor to be giving the final talk at this wonderful conference. For me, it seems very appropriate that the meeting is in Italy, because it was actually in Italy, at Villa Gualino in the late 80s, that I first met Boris. And it was certainly a very exciting time then to see in the flesh him and other people whose papers I'd been reading. All through this week, I've regretted the fact that we didn't take any photographs back then, so I can't show any images from the good old times. So I just have the memories. The topic I'm going to talk about is not really to do with nanoscience or mesoscopics; it's a topic in statistical physics. But there is a sense in which you can argue that it's a kind of distant cousin of things we know and love in mesoscopic physics, and the argument goes like this. One of the surprising things in mesoscopic physics is that at the lowest energy scales, on energy scales smaller than the Thouless energy, even though the geometry of the sample is completely irrelevant, there's still lots of interesting physics, for example in level statistics. And the problem I want to talk about is one that involves very long length scales, where the details of the system are unimportant, but where there are nevertheless interesting probability distributions for the quantities we're looking at. So I want to talk about statistical problems where you have a kind of soup of loops, and the specific question is what the length distribution of these loops is, particularly for the long ones, where there's a chance of something universal. This was work in collaboration, and the specific part I'm going to talk about is really just one aspect picked out from a much larger body of work. So this is a cartoon of the kind of problem I want to think about: a situation, which I'll describe more in a minute, that gives us some statistical ensemble of random curves.
And there are basically two ingredients. One is some local rule which tells us how different pieces of these curves link together to form loops. And the other ingredient, which you might have, is the possibility to color each loop with one of n possible colors; this is a little bit like the number of components of a spin in a spin problem. Problems of this kind come up in all sorts of contexts. I'm mostly interested in what happens in three dimensions, but let me start with a picture in two dimensions. If you want to define some ensemble of random loops in two dimensions, you can start with a smooth random scalar function and say that your curves will simply be the zero lines of the scalar field; in other words, take them to be the hulls of percolation clusters. If we do that in two dimensions, then in most circumstances we'll find that the loops have a maximum characteristic size. But we also know that by tuning the random potential so that the percolation threshold is at zero energy, we can make this size diverge. When we go to three dimensions, we have the possibility of two distinct phases. One way of getting random curves in three dimensions is simply to generalize the two-dimensional problem. Of course, to have lines as the zeros of a random function in three dimensions, we need to take it to be a random two-component field, for instance a random complex field. And then as we go around one of the zero lines, the phase of the field will wind by 2 pi. So again, we can control the length of the loops by varying the average of the random two-component field compared to the magnitude of its fluctuations. If the average is large and the fluctuations are small, most of the zero lines will correspond to short loops. But if, on the other hand, you tune the average to zero, then generically what happens is that some fraction of the strands of these loops lie on curves which extend right through the sample.
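To make the winding construction concrete, here is a minimal numerical sketch of my own, not something from the talk: it builds a random periodic complex field as a sum of plane waves (an assumption chosen just to keep the example self-contained) and measures the phase winding around each plaquette of a grid. A winding of plus or minus 1 signals a zero line of the field piercing that plaquette.

```python
import cmath
import math
import random

def random_periodic_field(L, n_waves=20, seed=0):
    """Random complex field on an L x L periodic grid, built as a sum of
    plane waves with lattice-commensurate wavevectors (a convenient toy
    choice, not the only way to generate such a field)."""
    rng = random.Random(seed)
    waves = []
    for _ in range(n_waves):
        kx = 2 * math.pi * rng.randint(-3, 3) / L
        ky = 2 * math.pi * rng.randint(-3, 3) / L
        amp = complex(rng.gauss(0, 1), rng.gauss(0, 1))
        waves.append((kx, ky, amp))
    return [[sum(a * cmath.exp(1j * (kx * x + ky * y)) for kx, ky, a in waves)
             for y in range(L)] for x in range(L)]

def wrap(dphi):
    """Wrap a phase difference into (-pi, pi]."""
    while dphi <= -math.pi:
        dphi += 2 * math.pi
    while dphi > math.pi:
        dphi -= 2 * math.pi
    return dphi

def windings(psi):
    """Phase winding of psi around each elementary plaquette; +/-1 means a
    zero line of the field pierces that plaquette."""
    L = len(psi)
    phase = [[cmath.phase(psi[x][y]) for y in range(L)] for x in range(L)]
    w = {}
    for x in range(L):
        for y in range(L):
            x1, y1 = (x + 1) % L, (y + 1) % L
            total = (wrap(phase[x1][y] - phase[x][y]) +
                     wrap(phase[x1][y1] - phase[x1][y]) +
                     wrap(phase[x][y1] - phase[x1][y1]) +
                     wrap(phase[x][y] - phase[x][y1]))
            w[(x, y)] = round(total / (2 * math.pi))
    return w
```

On a torus the windings necessarily sum to zero, and if you give the field a large uniform average, in line with the remark above about a large mean, the windings vanish entirely because the phase can no longer wrap around.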
And it's actually those extended curves that I want to talk about in this talk. So there are all sorts of situations where people have studied random curves like this. For example, people thinking about cosmic strings did quite a lot of numerical simulations to determine things like the fractal dimension of random curves. We learned yesterday afternoon that cosmic strings can be excluded by observations now, but fortunately there are other situations. For instance, if your random two-component field is an optical field, then you can have vortices in it. If you have a liquid crystal, then the disclinations in three dimensions form closed loops. And then one example, which is maybe closest to mesoscopics: if we think about a complex random wave function, an eigenstate in a system with broken time-reversal symmetry, then its nodes again form an ensemble of random loops. Another place in quantum mechanics where random curves come up is if we think about the Feynman path integral for a Bose gas in imaginary time. There, of course, one of the rules is that we should think about trajectories of particles which take us from some initial configuration at time 0 to some permutation of that configuration at imaginary time 1 over T. And to relate that to loops, I can project along the imaginary time direction and draw these trajectories in the spatial dimensions. If I'm at high temperatures, so that the imaginary time direction is short, then these trajectories will typically either return to the same particle after propagation in imaginary time, or maybe correspond to exchanging small numbers of particles. I'm drawing pictures here in two dimensions because that's all I can manage, but really I want to think about a system with three spatial dimensions as well as imaginary time. And then we know that if we make the imaginary time direction long enough, we can have Bose condensation.
And in the language of these loops, that corresponds to going from a situation where I have only microscopic loops to one where some macroscopic fraction of the particles belong to extended loops. So the kind of question you can ask there, and this is the question I want to focus on, is what the distribution of lengths is for these very long loops. This slide is intended to be a bit more specific about the kind of questions I want to probe. Suppose we have some ensemble of random curves, and we measure the length of each loop, so that the total length of the loops is a sum. We can study the distribution of loop lengths, and it's natural to weight this distribution with the length of each loop and normalize it using the total length, so that we end up with something normalized to one. Then this distribution function will really have two components to it. If we look at microscopic distances, it turns out that these loops are Brownian, so the probability of return to the origin can be obtained by thinking about diffusion, and the probability distribution of loop lengths decays in d dimensions like length to the power minus d over 2. This type of behavior you can expect to continue with increasing loop length up to a crossover set by the system size, of order L squared for a sample of linear size L, which is simply the loop length you need to go before discovering that you're in a finite-size system. At longer lengths we have the macroscopic loops, which are the ones I want to talk about, and their maximum length extends as far as a size set by the volume of the sample, of order L to the power d. So really the question is: what's the nature of the probability distribution in this regime? One important point is that you can separate the population of loops into two components, as I'm showing here, because the short-loop part of the probability distribution, if we're in more than two dimensions, decays fast enough that its integral is finite by itself.
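The length-to-the-minus-d-over-2 law can be checked directly for a simplified walk whose d coordinates each step plus or minus 1 independently, so that the return probability factorizes over coordinates. This toy choice of walk is my own, made so the probability can be computed exactly in logarithms:

```python
from math import lgamma, log

def log_return_prob(t, d):
    """Log-probability that a d-dimensional walk whose coordinates each
    step +/-1 independently is back at the origin after 2t steps.  For
    this walk the probability factorizes over coordinates, so we compute
    the one-dimensional binomial probability C(2t, t) / 4^t in logs (via
    lgamma, to avoid huge integers) and multiply by d."""
    log_p1 = lgamma(2 * t + 1) - 2 * lgamma(t + 1) - 2 * t * log(2)
    return d * log_p1

# The decay exponent of the return probability should approach -d/2,
# matching the length^(-d/2) law for Brownian loops quoted above.
d = 3
slope = (log_return_prob(800, d) - log_return_prob(100, d)) / (log(800) - log(100))
```

For d = 3 the fitted slope comes out very close to minus 3/2, with only a small finite-t correction.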
But then the second part of the distribution can make a finite contribution to the normalization, because of the very wide range of length scales from L squared up to L to the power d. OK, so this is intended to define the kind of questions that we want to ask, and this is the outline of how I want to go about answering them. First of all, we'd like a cleaner definition of the problems we're talking about, and a convenient way to do that is to define everything on a lattice. But then we'd like to get from there to a field theory, and I'll outline how you do that in two steps. First, you can go from thinking about loops on the lattice to a kind of generalized spin model; you do that by choosing a spin model which gives you the loops as a high-temperature expansion. Then you can use the symmetries of the spin model to identify an appropriate field theory, which is a kind of sigma model, and you can think about loop length moments as observables in the sigma model. By calculating these moments, you can in fact identify the limiting distribution, which turns out to be something known as the Poisson-Dirichlet distribution. That was something I'd never come across before, but it's studied a lot by mathematicians. Actually, I've only talked to two physicists so far who were familiar with it already; one of them was French, and one of them was Russian, so I'm not sure what that tells you about education syllabuses. OK, so to get on with that program, I want to start by defining loops on a lattice. We could think about different categories of loop problems, but the simplest one to talk about, and the one I'll restrict myself to, is one in which the loops are directed. And to get a dense system of loops, we can start with a lattice with directed links and then break the nodes up in such a way that the lattice is decomposed into a set of loops.
And we do this stochastically, so that a given node is separated in one possible way with some probability and in the other possible way with the complementary probability. Obviously, there's a great deal of freedom in how we choose our lattice and so on. I'm going to draw pictures with a kind of Manhattan lattice, where we have alternate avenues going north and south and alternate streets going east and west. Then we want to think about what happens as we vary this probability p for breaking the nodes up in the two alternative fashions. At one extreme, if we break the nodes up in the way I've drawn at the top, we decompose the system into a dense set of very short loops. If we do things in the alternative way, then we just have extended trajectories which run through the system. But more generally, with some intermediate value of p, we'll get some combination of short loops and longer trajectories. And if we do that kind of thing, not in two dimensions but in three, and study it on a computer, then we can get a phase diagram like this one, which includes transitions between the localized phase and the extended phase. The transition might either be continuous, which is what this blue line indicates, or first order, which is what the red line indicates. My focus is what happens when we have these extended loops, and that's what I'll carry on to talk about. So the next job is to get from this lattice model eventually to a field theory. Of course, one feature of loop problems is that the degrees of freedom you have are not local objects; they're extended. And it's much more convenient to have something local as a starting point. So n is the number of colors for each loop. The point is that if you have many different colors that you can associate with each loop, then you have more entropy associated with configurations with more loops, and so you favor short-loop phases. That's why the phase boundary bends in that way.
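As a toy version of this construction, here is a sketch of the Manhattan-lattice decomposition. The street conventions and channel labels are my own choices, not taken from the talk: each node passes its two incoming links straight through with probability p, or turns them otherwise, and following the links decomposes all 2 L squared directed links into loops.

```python
import random

def manhattan_loops(L, p_straight, seed=0):
    """Decompose an L x L periodic Manhattan lattice (L even) into directed
    loops.  Each node independently passes both incoming links straight
    through with probability p_straight, and otherwise routes each onto
    the turning outgoing link.  Returns the list of loop lengths in links."""
    rng = random.Random(seed)
    straight = {(x, y): rng.random() < p_straight
                for x in range(L) for y in range(L)}

    def step(x, y, c):
        # c is the channel we arrive on: 'h' (horizontal) or 'v' (vertical).
        out = c if straight[(x, y)] else ('v' if c == 'h' else 'h')
        if out == 'h':
            dx = 1 if y % 2 == 0 else -1   # streets alternate east/west
            return ((x + dx) % L, y, 'h')
        dy = 1 if x % 2 == 0 else -1        # avenues alternate north/south
        return (x, (y + dy) % L, 'v')

    # The step map is a permutation on the 2*L*L (node, channel) states,
    # so tracing its cycles yields the loop decomposition.
    seen, lengths = set(), []
    for start in [(x, y, c) for x in range(L) for y in range(L) for c in 'hv']:
        if start in seen:
            continue
        n, s = 0, start
        while True:
            seen.add(s)
            n += 1
            s = step(*s)
            if s == start:
                break
        lengths.append(n)
    return lengths
```

With every node turning, the lattice falls apart into elementary loops of length 4, the dense short-loop extreme mentioned above; with every node going straight, the loops are full rows and columns of length L.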
Yes, so this is a crossing point. But the important thing is that they avoid each other at the crossing point. No, no. There are only two parameters: this quantity p, to control how much of the two kinds of nodes I have, and then n, to control the number of colors. With regard to colors, yes. So the extended phase has at least some fraction of the links on extended loops. OK, so it's not very convenient to deal with these extended objects, the loops; instead, we'd like some local variables. I want to pass over the details of this, but basically you can introduce unit complex vectors on each link and choose an action, or Boltzmann weight, so that you reproduce the loop model as the high-temperature expansion when you integrate over the orientations of these vectors. When you go through the details of that, which I'm hiding, you can identify the symmetries of the Boltzmann weight, or the action, that you need, and you can use those symmetries to write down a continuum field theory and then use that continuum theory for calculations. There are two kinds of symmetries, a global one and a local one. The field theory that you end up with is a sigma model, and the reason for that is that the microscopic variables are fixed-length unit vectors. It turns out to be a sigma model on complex projective space, and that's because the local symmetry tells you that the phase of these complex vectors is irrelevant. So the field in the sigma model is something that's bilinear in z and its complex conjugate, and there's a global symmetry which tells you that the sigma model has to involve just gradients of this field. So there's a kind of dictionary between the loop model and the sigma model: the phase with only short loops corresponds to the disordered phase, the paramagnetic phase of the sigma model, and the phase with some loops macroscopic corresponds to the ordered phase.
And the fraction of loop length that's in the macroscopic loops corresponds, in fact, to the magnitude of the order parameter in the sigma model. You could go on with that dictionary, but the entry that's important for us is to do with calculating correlation functions. The point is that if you insert one of these Q fields into an average, then it tells you that you have a loop of one color, with the color fixed by one of the labels on the Q field, arriving at the point where the Q field is inserted, and a loop of another color leaving that point. So if you, for instance, evaluate a two-point correlation function in this theory, then it tells you about the probability of having a loop which passes through the origin and through the other space point in the expectation value. It's on complex projective space. So, yeah. OK, so in that way the two-point correlator tells us about the probability of a loop passing through two points. And we can take that further: if we want the probability of a loop to pass through three points, a, b, and c, then we can calculate that as an expectation of three of these fields, with the indices chosen so that the loop has to change color as it goes through the three points. If we want to calculate moments of the loop length distribution, then we simply have to integrate these averages over all positions of the arguments of the Q fields. And here we have a crucial simplification: if the loops that we're focusing on are the very long ones, then we're interested in these averages when the space points are far away from each other. For those, we can, in the ordered phase, think of the Q matrix as being approximately constant, and the average that we need to do is simply an average over different orientations of the order parameter. So we can basically evaluate moments of the loop length as a kind of zero-momentum-mode average over directions of the order parameter.
So when we do that, we can get results for all of the moments of the loop length distribution, and they involve factorials together with powers of the total length that's in macroscopic loops. Of course, just looking at these moments is not very illuminating, but fortunately, from the moments, you can go back to the probability distribution. And the probability distribution is this one which, as I said, is familiar to mathematicians as the Poisson-Dirichlet distribution. So what I want to do is explain what this Poisson-Dirichlet distribution is like, and there's a nice construction of it which is called the stick-breaking construction. Imagine that we start with a stick of unit length, which corresponds to the total length of extended loops in my system. Then I break off a piece of this stick, choosing the length of the piece with a certain distribution, which has one parameter in it. Then I take the remaining piece of the stick and break off a fragment of it, with the fraction of the remaining length drawn from the same distribution as in the first case, and then I repeat that process indefinitely. As I go on, I get a sequence of fragments which tend to be shorter and shorter, although not necessarily at every iteration, and their total length adds up to the length of the original stick. And what the calculation of loop length moments shows is that the lengths of the long loops, measured in units of the total length of macroscopic loops, have precisely this Poisson-Dirichlet distribution. The distribution contains one parameter, and that parameter is fixed by the number of colors available to the loops, so that if that number is large, the distribution is skewed towards shorter loops, whereas if the number is small, it is skewed towards longer loops.
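A minimal sketch of the stick-breaking construction, assuming, as is standard for the Poisson-Dirichlet family, that the broken-off fraction is Beta(1, theta) distributed; theta here stands in for the parameter the talk ties to the number of colors:

```python
import random

def stick_breaking(theta, n_pieces=400, seed=0):
    """Break a unit stick: repeatedly snap off a Beta(1, theta)-distributed
    fraction of whatever remains.  The fragments follow the GEM(theta)
    distribution; sorted by size, they follow Poisson-Dirichlet PD(theta).
    Larger theta skews the fragments smaller, as with more loop colors."""
    rng = random.Random(seed)
    remaining, pieces = 1.0, []
    for _ in range(n_pieces):
        frac = rng.betavariate(1.0, theta)
        pieces.append(remaining * frac)
        remaining *= 1.0 - frac
    return sorted(pieces, reverse=True)

# Sanity check on a moment: for PD(theta), the mean of sum_i w_i^2 is
# known to be 1 / (1 + theta); estimate it by Monte Carlo.
theta = 2.0
est = sum(sum(w * w for w in stick_breaking(theta, seed=s))
          for s in range(2000)) / 2000
```

The fragments sum to the length of the original stick (up to the tiny mass left after truncating the recursion), and the Monte Carlo estimate of the second moment reproduces 1 / (1 + theta).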
So actually it was quite a surprise to me that one can even write down probability distributions for an infinite number of variables which are neither simply Gaussians nor so complicated that it's impossible to understand them, and I think this occupies that middle ground. To test these predictions, and maybe think a bit more explicitly about them, it's convenient to go from the whole distribution for an infinite number of loop lengths, integrate out all but one of the variables, and focus in on the length distribution of a single loop. This is the quantity I had near the beginning of the talk, where I argued that at short distances it decays like length to the minus d over 2. The question at the beginning of the talk was what form it has at longer distances, and this result from the Poisson-Dirichlet distribution gives us a specific, concrete form that depends on the number of colors. Of course, you can test that in numerical simulations, and these different curves for different values of n lie on top of the theory rather precisely. So to my mind there's just one question remaining, which is why we get such a simple and universal distribution in this problem, where you might have thought that things like sample geometry, details of boundary conditions, and so on could all be important. It turns out that there's some very nice intuition that comes from mathematicians. Actually, I find the papers by mathematicians in this area quite hard to read, so I don't claim to have a complete list of the relevant names. There's certainly one important work that was done by Schramm, the person who also worked on SLE, and I learned a lot by reading reviews written by Daniel Ueltschi, who's in the maths department at Warwick.
So the problems that mathematicians can actually get under control are ones without any of the spatial structure that I've been talking about; they're ones which we would classify as mean-field problems. But to say things about them, the mathematicians have a nice technique, and I think that technique, at least at the level of physicists' arguments, also gives us insight into problems with spatial structure. The insight comes from adding an extra layer to the problem by introducing some dynamics; that's to say, we imagine that under this dynamics we can reconnect from one type of node to the other. If you're doing Monte Carlo simulations on these problems, this is precisely the Monte Carlo dynamics you would use, and, for instance, if the probabilities of the two alternatives are equal, you'd expect to be able to flip between them and leave your loop length distribution statistically invariant. Of course, if you flip between two nodes, then depending on how they're connected together, it could be that you're taking two short loops and converting them into a single longer loop, or it could be that you're going in the opposite direction. If you pick a node at random and ask for the probabilities of going in each direction, then clearly those probabilities will depend on what the loop length distribution is in your ensemble, because if there are lots of shorter loops, it's more likely that you'll go to the right, and if there are lots of very long loops, it's more likely that you'll go to the left. So it's clearly a strong constraint on the loop length distribution that it should be stable under these dynamics. And the point about the Poisson-Dirichlet distribution is that it's stable under just these dynamics, which the mathematicians call split-merge processes, provided the probability of picking the loops that go through these nodes is determined only by loop length. Clearly, in a problem that's got mean-field character, loop length is the only thing that can control the probability of a strand picked at random belonging to a loop of a certain length. If we go beyond mean-field problems, the point is that the loops I'm focusing on, the ones long enough to fill the entire volume of the sample, cross the sample many times, and that gets you into this mean-field regime. So maybe the surprise is that in this mean-field regime, as in mesoscopics below the Thouless energy, we still have interesting phenomena, including in particular this Poisson-Dirichlet distribution. So that's it. The ideas were that loop models can be represented by sigma models on complex projective space; that we can get the joint probability distribution of lengths for long loops, or at least its moments, by computing these zero-momentum averages, which are finite-dimensional integrals, in the sigma model; and that the loop length distribution has this very simple form, even though it's a distribution for an infinite number of variables, and this form is the Poisson-Dirichlet form. Thank you.