I decided to pack everything into one last lecture. It's cheating a little — I was supposed to give you fewer topics — but it is not really cheating, because I don't really expect you to be able to pass an exam on this kind of material, just to have heard it and be familiar with it. OK, we were here. So this is a list of statistical physics models which we know correspond to the central charges listed in the other column. For example, the Abelian sandpile model, which you have been doing, the self-avoiding walk, the Kardar-Parisi-Zhang model, and so on — a lot of famous models. And we know that they correspond to these central charges, because these central charges give the right exponents for them. However, now that I have the central charge in terms of the diffusion constant kappa of SLE, I can choose kappa to correspond to each central charge, and that is the kappa we observe here. One thing to notice is that two values of kappa — two and eight, for example — correspond to the same central charge. They are not exactly the same statistical physics models, but they are related. As you know, for example, uniform spanning trees are related to the Abelian sandpile model, and the loop-erased random walk is the boundary of an avalanche in the Abelian sandpile model, or the minimal path on a uniform spanning tree. So these are models which are connected somehow: they have the same central charge, but their kappa differs, because the stochastic path associated with the cluster frontier changes, and hence the fractal dimension changes. I hope I can explain to you how we calculate the fractal dimension. And you see the fractal dimension increases until it becomes two. OK, something to say before going to the next slide. The fractal dimension here has to be related to the fractal dimension there, because these two models are dual to each other. And how are they related?
The relation is that one of them is the hull of the cluster and the other is the external perimeter. In this case, hull and external perimeter give you the same thing; but in these models you get different curves, and this is the relation connecting the external perimeter and hull fractal dimensions. In case you are not familiar with the concept: when you have a cluster like this, you can form what is called the hull, which in this figure is the black plus the red line. It goes into the fjords, so it is more fuzzy than the external perimeter, which is the black plus the dotted line. At the critical point, both of them are fractals, but with different fractal dimensions; the external perimeter is less fuzzy than the hull. OK, without saying why, on this slide I have all the results connecting SLE and CFT. I know that SLE is a conformally invariant model, so it has to be connected with conformal field theory, and it is connected through the central charge, as a conformal field theory is characterized by its central charge: you give the central charge to the Virasoro algebra, and the rest follows. [Question from the audience.] Concave? No, it doesn't have to be concave. What it does is make these jumps — but you see, this one is convex. I have a conformal field theory with this central charge, and you can check that this expression doesn't change under the transformation kappa to 16 over kappa. It shows that I have two theories corresponding to the same central charge; but the two theories are dual to each other, not identical. Then, as you saw, we have operators associated with the conformal field theory, and I have two types of operators, for which the conformal dimensions are written here.
And the thing is that, since SLE lives in the upper half plane, I have two types of operators: operators that act on the boundary, which I call boundary operators, and operators that act in the bulk. They have different conformal dimensions, which come out of this expression. It is possible to show that the fractal dimension of a curve connecting two points is related to this boundary conformal operator. These SLE paths go up like that and don't have an end, so you need just one operator here to create the curve, and therefore you get this weight. From there, for n equal to 2, you get this relationship for the fractal dimension: for the SLE path, the fractal dimension is 1 plus kappa over 8. And since I have that duality, the dual curve has fractal dimension 1 plus 2 over kappa — let's put a prime on it. And you can see that (d_f − 1) times (d_f′ − 1) is one quarter. This relationship existed before SLE was invented, so in fact it's an achievement of SLE that it comes out correctly. Fine. Now, I have spent a lot of energy building a new mathematical framework. You may ask, what is the use of it? I hope you don't mean it in a very technological manner, but in the physics manner: there are things I can do now which I could not do before. For example, I can calculate the probability of passage to the left. I have a random path coming out here at the origin — which I can, of course, always choose to start at the origin — and then I have a point z. What is the probability that the path passes on this side of z and not the other? This is called the probability of passage to the left, and obviously it depends on the angle theta of z. If theta is pi over 2, that is, z is right above the origin on the imaginary axis, the probability of either side is one half. So this gives an interesting result only if theta is not equal to pi over 2.
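Since the duality comes up repeatedly, here is a tiny numerical sanity check of the relation just stated — a sketch assuming the standard identifications d_f = 1 + κ/8 for the trace and κ′ = 16/κ for the dual curve:

```python
# Check of the SLE duality relation for fractal dimensions:
# d_f(kappa) = 1 + kappa/8, and the dual curve has kappa' = 16/kappa,
# so d_f' = 1 + 2/kappa. Then (d_f - 1)(d_f' - 1) = 1/4 for every kappa.

def d_f(kappa):
    """Fractal dimension of the SLE_kappa trace (valid for kappa <= 8)."""
    return 1 + kappa / 8

for kappa in [2, 8 / 3, 3, 4, 6, 8]:
    dual = 16 / kappa                      # duality: kappa' = 16/kappa
    product = (d_f(kappa) - 1) * (d_f(dual) - 1)
    assert abs(product - 0.25) < 1e-12
```

The product is (κ/8)(2/κ) = 1/4 identically, which is why the relation holds for every κ at once.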
And then it depends on what sort of path I have, so it must depend on the value of kappa. This is the question of the probability of passage to the left, which I can now calculate with the machinery I introduced. And the calculation is relatively simple. What you observe is that, whatever the probability is, it does not change under conformal transformations: if I make a conformal transformation from here to there, I should get the same answer. It's a little bit of mathematics, but I eventually end up with this result. The value of kappa enters here, and there is a cotangent of theta, which means that if theta is pi over 2 the probability is just one half. The other thing I forgot to tell you is the boundary conditions. If theta equals 0 or theta equals pi — that is, if the point sits on one end of the real axis or the other — I have trivial answers: on one side the probability of passage to the left is 1, on the other it is 0. These are the two boundary conditions, and imposing them, this relationship can be derived. The other question is the winding angle, which I can compute for a random path. At any point, take the tangent to the path; this is an angle which continually changes as you move along the path. The question is, what is the distribution of this angle? The mean is definitely 0 — it has no choice — but the variance clearly increases with time as you go along the path. It must also increase with kappa, because as kappa increases you get a path that spreads more in the plane. It's like a random walk; but since these paths are not random walks, kappa is not a constant that can be scaled away. The next question is how to connect the Loewner time to the physical size L. It is an involved discussion, but the variance turns out to grow with the logarithm of L.
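The left-passage result described here is Schramm's formula. As a hedged sketch, the following uses its equivalent integral form (the hypergeometric expression on the slide can be rewritten as an integral, so only the standard library and NumPy are needed) and checks the two behaviors mentioned: probability 1/2 on the imaginary axis, and left/right symmetry:

```python
import math
import numpy as np

def left_passage_prob(theta, kappa, n=200_000):
    """Schramm's left-passage formula for chordal SLE_kappa:
    P(theta) = 1/2 + C(kappa) * integral_0^{cot(theta)} (1+s^2)^(-4/kappa) ds,
    with C(kappa) = Gamma(4/kappa) / (sqrt(pi) * Gamma((8-kappa)/(2*kappa)))."""
    c = math.gamma(4 / kappa) / (math.sqrt(math.pi)
                                 * math.gamma((8 - kappa) / (2 * kappa)))
    t = 1.0 / math.tan(theta)                  # cot(theta), signed
    s = np.linspace(0.0, t, n)                 # works for negative t too
    f = (1 + s * s) ** (-4 / kappa)
    integral = np.sum((f[1:] + f[:-1]) * np.diff(s)) / 2   # trapezoid rule
    return 0.5 + c * integral

# A point on the imaginary axis (theta = pi/2) is passed on either side
# with probability 1/2, for any kappa:
assert abs(left_passage_prob(math.pi / 2, 3.0) - 0.5) < 1e-9
# Left/right symmetry: P(theta) + P(pi - theta) = 1.
p = (left_passage_prob(math.pi / 4, 8 / 3)
     + left_passage_prob(3 * math.pi / 4, 8 / 3))
assert abs(p - 1.0) < 1e-6
```

As theta goes to 0 the integral approaches its full normalization and P goes to 1, which is exactly the boundary condition described in the lecture.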
So if I have an ensemble of paths, I can always calculate this variance of theta for it, and that gives me the winding angle statistics. The other thing I already talked about is the crossing probability. You have four points on the boundary of a domain, and you ask: what is the probability that my configuration enters on one side and exits on another? And I argued that this has to do with the boundary operators inserted at these four points and their expectation value. Now, what I can do with SLE is that I can always find a mapping that takes the domain into the upper half plane, so those four points become four points on the boundary of the upper half plane. The question then becomes: what is the probability that the path starts somewhere in the arc it is supposed to, goes up, and meets the real axis again between the two points I specified? Essentially, this calculation is only clean and doable for percolation, and it is related to Cardy's percolation crossing formula and exactly reproduces it. It's this function which I was writing from memory the other day — this is the right one now. Yes, yes: you can conformally map any two simply connected domains into each other, so long as neither is the entire complex plane. So in fact I haven't changed it into a line; I've changed it to this. The four points I managed to bring onto the real axis, and the rest of the boundary goes to the long part of the contour. So I now have the problem of the path starting here and going out there. OK, the answer I gave you in terms of conformal field theory can also be obtained in terms of differential equations, for example for the fractal dimension. So I ask a simple question. I have a path which goes up to infinity, and I have a small disk of radius epsilon. What is the probability that this path will hit the boundary of that disk?
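The crossing result for percolation can be checked against Cardy's formula directly. The sketch below implements the Gauss hypergeometric series by hand (so it needs only the standard library; the series is fine for the arguments used here) and tests the symmetry P(η) + P(1 − η) = 1, which forces P(1/2) = 1/2:

```python
import math

def hyp2f1_series(a, b, c, z, terms=400):
    """Gauss hypergeometric 2F1 via its power series (adequate for |z| < 1
    with the parameters used below)."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
    return total

def cardy_crossing(eta):
    """Cardy's crossing probability for critical percolation, as a function
    of the cross-ratio eta of the four boundary points."""
    pref = math.gamma(2 / 3) / (math.gamma(4 / 3) * math.gamma(1 / 3))
    return pref * eta ** (1 / 3) * hyp2f1_series(1 / 3, 2 / 3, 4 / 3, eta)

# The symmetric configuration crosses with probability 1/2 ...
assert abs(cardy_crossing(0.5) - 0.5) < 1e-9
# ... and more generally P(eta) + P(1 - eta) = 1.
assert abs(cardy_crossing(0.3) + cardy_crossing(0.7) - 1.0) < 1e-9
```

The prefactor Γ(2/3)/(Γ(4/3)Γ(1/3)) is what normalizes the formula so that P tends to 1 as the cross-ratio tends to 1.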
If this is an ordinary one-dimensional path, this probability quickly goes to zero as I shrink epsilon. But if it is a fractal, it has some thickness, so the probability still goes to zero, but at a much slower rate. Hence I use the SLE technology here. I take the disk, I take the path going up to infinity, I take a point here, and I apply the g map — or here the f map — which takes this point back to the origin. As a result the disk is conformally mapped and the path is conformally mapped, but this leading behavior must not change. I don't have to do it this way; I do it because in this way I get differential equations that I can solve, because I have a differential equation for my SLE path. When I do this transformation, I can do all the calculations within the differential equations of SLE. So the probability of the path hitting the disk, because of conformal invariance, takes some form like that, and I look at how it transforms. I have taken out the mathematics, because I judged it too heavy for today, but essentially this is the sort of computation you go through, and you get a differential equation for P to satisfy. All I need now is for P to have that leading behavior in epsilon, and the fractal dimension comes out to be 1 plus kappa over 8. You have to do the equations to be convinced, but it's exactly what I had before. What I had before came from inserting a single boundary operator on the real axis, and these two results definitely have to come out the same. And this, in fact, was the aim of this entire lecture: right from the beginning, I wanted a statistical mechanics of critical curves. These paths are my critical curves, and I can now extract certain properties for them — the fractal dimension, the passage to the left, the winding angle, et cetera. So here we have the fractal dimensions again, and you see that these numbers all satisfy the relation 1 plus kappa over 8.
I already told you about the duality. Now let's look at the O(n) model. If you don't know the O(n) model, it is defined in the following way. I take a Hamiltonian which, like the Ising model, couples only neighboring sites. I have a lattice, and at each site I now have a vector instead of just a scalar spin: a vector s_i, and on a neighboring site a vector s_j, and the Hamiltonian is minimized when they point in the same direction. It is called the O(n) model because these vectors satisfy this relationship: they are n-component vectors, but normalized so that the vector also has squared length n. And you can see that for n equal to 1 this relationship just becomes s squared equal to 1, and therefore you get the Ising model back. Why length n? Because I want to play with the norm; I don't want it fixed. For example, I want to be able to set n equal to 0. Of course, it doesn't make sense to speak of a vector of dimension 0, but what I do is calculate the partition function, get a result as a function of n, and in the result set n equal to 0. If there is only one component, there is no freedom; and for n equal to 2, you pick up the XY model. It's a very interesting model because it connects to a lot of statistical physics models. And one can show that this n connects with SLE through this formula, a very unusual formula: n equals minus 2 cosine of 4 pi over kappa. So n equal to 1, which gives you Ising, corresponds to kappa equal to 3. So for the O(n) model I now have these kappas in correspondence. And this means that you can get the sandpile model — I hope Deepak is not around — at n equal to minus 2, and you get self-avoiding walks at n equal to 0. As I told you, I want to set n equal to 0. Setting n equal to 0 or minus 2 has no geometrical meaning as a vector; it means that the partition function is a function of this n, and I can set n to any number I like.
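The correspondence formula on the slide is easy to check against the values quoted. A small sketch (the same cosine, on a different branch of kappa, also encodes the Potts correspondence discussed next, so both are included):

```python
import math

def n_of_kappa(kappa):
    """Dilute O(n) <-> SLE correspondence: n = -2*cos(4*pi/kappa)."""
    return -2 * math.cos(4 * math.pi / kappa)

# O(n) values quoted in the lecture (kappa on the branch [2, 4]):
assert abs(n_of_kappa(3) - 1) < 1e-12        # n = 1  : Ising     -> kappa = 3
assert abs(n_of_kappa(8 / 3)) < 1e-12        # n = 0  : self-avoiding walk
assert abs(n_of_kappa(4) - 2) < 1e-12        # n = 2  : XY model
assert abs(n_of_kappa(2) + 2) < 1e-12        # n = -2 : kappa = 2

# The same function on the branch kappa in [4, 8] gives the q-state Potts
# correspondence sqrt(q) = -2*cos(4*pi/kappa):
assert abs(n_of_kappa(6) - 1) < 1e-12                    # q = 1 : percolation
assert abs(n_of_kappa(16 / 3) - math.sqrt(2)) < 1e-12    # q = 2 : Ising
```

Note how Ising appears twice, at kappa = 3 and kappa = 16/3: these are the dual pair (3 × 16/3 = 16) from the duality discussed earlier.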
And these other models: percolation comes out for q equal to 1, and so on. So the O(n) model connects directly to SLE. Another famous model that connects is the q-state Potts model, a generalization of Ising. Ising was the case where spins could take only two values; now allow spins to take q different values. Obviously, q equal to 2 is Ising. But I have to change the Hamiltonian so that I get a contribution only when two spins are the same, which is easy: the Hamiltonian is now minus J times delta of s_i, s_j on the neighbors. Only when they are equal does the delta function give 1, and the energy is minus J; otherwise it is 0, so the energy is a little bit higher. And the q-state Potts model, through a similar relationship, is also connected to SLE, and you can see that it gives the values of kappa shown here — some rather strange values of kappa for the q-state Potts model. This was an interesting result when it came out. Here I make a reference to Gaussian free field level sets, which we'll see in a few minutes. Now, what I would like to do is discretize SLE, because I can then do numerical calculations with it. So how do I discretize the path? I take an SLE path, which goes from z_0 to z_1 to z_2 to z_3, and I approximate each move by a slit map — remember, this was a solution of Loewner's equation. If the time it takes to go from here to there is very small, the slit map is a good approximation for that move. I do this because treating the sideways jump of the driving function exactly is very difficult: I ignore the jump and still approximate the step by the slit map, and so on. In this way I have a discretization of the SLE path. So you give me any SLE path; I put it on the computer, and the computer produces a sequence of numbers instead of the path, and with that I can do computer calculations. First thing — yes? Say again, please.
Yes, because I want to do an inverse mapping. What I would like to do — if I understand your question correctly; if this is not the answer, ask again — is this: you give me a bunch of paths, produced in an experiment or in a simulation, and I want to decide, are they SLE or not? You are right, of course, that SLE paths generated on a computer already come with their discretization. But suppose they were not generated by the SLE formalism — and we will see in a second that there are a lot of interesting paths which were not generated on a computer; they come up in practice. Then I have to take the path and discretize it. But the discretization you would usually do as a computational physicist — put the path on a lattice and read off the points, as in discrete mathematics — is not good enough for me, because that treats it as just an ordinary path. I have to be able to assume that there was a sequence of SLE maps which produced this point, then that point, and so on, which is why I did that discretization: a piecewise-constant approximation, but by slit maps, not just by constant segments. What this discretization produces is the set of values xi_i that had to be used to get these points, and this is the set of my driving function values. So in this way I have formulated a discretization for the driving function. Now the question is whether it has a normal distribution, because it has to be a Brownian motion: it has to have mean 0, a variance linear in time, and zero correlation between increments at different times — which it does.
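The slit-map discretization just described can be sketched as a "zipper": assume the curve is given as points z_1, …, z_n growing from the origin in the upper half plane; each step treats the current point as the tip of a vertical slit at position Re z_k with half-plane capacity (Im z_k)²/4, reads off one driving value, and maps the remaining points down by the slit map. A minimal sketch, not an optimized implementation:

```python
import numpy as np

def extract_driving(points):
    """Piecewise vertical-slit ("zipper") approximation: given points
    z_1..z_n of a curve in the upper half plane (growing from the origin),
    return arrays (t, xi) sampling the Loewner driving function."""
    z = np.asarray(points, dtype=complex)
    times, xis, t = [], [], 0.0
    for k in range(len(z)):
        xi = z[k].real                 # slit position on the real axis
        dt = z[k].imag ** 2 / 4.0      # half-plane capacity of the slit
        t += dt
        times.append(t)
        xis.append(xi)
        # Map the remaining points down by the slit map
        # g(w) = xi + sqrt((w - xi)^2 + 4*dt), branch chosen in H.
        w = np.sqrt((z[k + 1:] - xi) ** 2 + 4 * dt)
        w = np.where(w.imag < 0, -w, w)
        z = np.concatenate([z[:k + 1], xi + w])
    return np.array(times), np.array(xis)

# Sanity check: a straight vertical line has driving function xi(t) = 0.
line = 1j * np.linspace(0.1, 3.0, 60)
t, xi = extract_driving(line)
assert np.all(np.diff(t) > 0)          # Loewner time (capacity) increases
assert np.max(np.abs(xi)) < 1e-9
```

For a vertical line every mapped point stays on the imaginary axis, so the recovered driving function is identically zero, as it should be for kappa = 0.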
And the slope of this variance is 3.1, which means these are paths from the critical Ising model, because kappa equal to 3 gives me the Ising model. Now, where did the paths behind this result come from? They were level sets of tungsten oxide surfaces. So physically existing paths fitted this mathematics. So here is the procedure. You come and tell me: OK, I have this bunch of random paths — are they SLE or not? I take the ensemble of paths, I discretize them, I check whether the driving function has a normal distribution. Then I check whether the left-passage probability holds, because given kappa I can calculate the left-passage probability and the data should fit it. Then I check the winding angle; that should fit too. So I have three checks that it is an SLE, and all of them should be consistent with the same kappa. Sometimes you can do more checks, but only for cases like the self-avoiding walk or percolation, where these other checks can be done. But this is numerical work, and in numerical work necessary conditions are safer than sufficient ones. Yes, you miss some things — you are right, yes. The first check is this: you see that the variance of the driving function grows linearly with slope kappa, it has a normal distribution around zero, and zero correlation between increments at different times. For my talk here I had a number of numerical results to show you; I'm showing only some of them to save time. In this way you can in fact analyze any ensemble of paths: take the ensemble, subject it to this calculation, and check whether it is an SLE or not. OK, sometimes these questions occur in my mind: so what? The "so what" is that you then have a phenomenon which is scale invariant — you can claim your phenomenon is scale invariant. Not all phenomena are, of course; but since I work in critical phenomena, most phenomena I meet are scale invariant.
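A hedged sketch of the first check on synthetic data: generate driving functions that are Brownian by construction with a known kappa, then recover kappa as the slope of Var[xi(t)] against t, the same fit that gave the 3.1 above (the path counts and step sizes here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
kappa, n_paths, n_steps, dt = 3.0, 4000, 200, 0.01

# Synthetic driving functions: Brownian motions with variance kappa * t.
increments = rng.normal(0.0, np.sqrt(kappa * dt), size=(n_paths, n_steps))
xi = np.cumsum(increments, axis=1)
t = dt * np.arange(1, n_steps + 1)

# Estimate kappa as the least-squares slope of Var[xi(t)] against t
# (fit through the origin, since Var must vanish at t = 0).
var = xi.var(axis=0)
kappa_hat = np.sum(t * var) / np.sum(t * t)
assert abs(kappa_hat - kappa) < 0.3

# Increments at different times should be uncorrelated (independent increments).
c = np.corrcoef(increments[:, 0], increments[:, 50])[0, 1]
assert abs(c) < 0.1
```

With a few thousand paths the slope comes back within a few percent of the true kappa, which is roughly the accuracy one sees in the experimental fits.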
But then there are phenomena which are not thermal critical phenomena, such as the BTW sandpile model. This analysis shows that it, too, is scale invariant and connected to a conformal field theory, although it is essentially a non-equilibrium phenomenon. And this calculation also showed that the tungsten oxide surface — a physically grown sample, digitized using AFM — is scale invariant. Now, I need a lot of ensembles of paths. Where do I find them? By looking at level sets. So you take any height function — in this case it's, I think, somewhere in Italy; it said so on Wikipedia, some lake in Italy, maybe you can recognize it. You take the height function and pass a plane through it; it cuts the height function at a certain height and produces these lines. These are lines produced by looking at a certain height, and the height is written on them. Geographers have long used these pictures to describe a height profile, a physical profile. And what I have here, if you look at it, is a collection of loops. And I can ask for the distribution function of these loops — loops of length s and radius R. So you form something like this, which tells you the probability of seeing a loop of length s with radius R. The factor of one over delta is necessary here because it accounts for the step by which you change the height as you cut. With this distribution, then, I can look at the fractal properties. And generally, if the system is scale invariant, it will have a power-law distribution, like any scaling phenomenon: s scales like R to the power d_f, which means the loop is a fractal with dimension d_f, and you have a leading power in s. So you find these two exponents, which explain everything about the distribution if it is scale invariant. What I mean by a fractal loop is something like the Koch loop: the Koch fractal can also be built on a circle.
And then what that produces is a loop with a definite radius R but a very long perimeter. This object looks like a circle but is not a circle; it is a fractal. And these are Gaussian free field level sets. You generate a height field with a Gaussian distribution — I haven't given the details here. So I want to generate a random height h(x, y): at each point (i, j) of the lattice I need a height h_ij, and I choose it with a normal distribution, with some variance. That gives me what is called a Gaussian free field — but not exactly, because I actually have to impose this condition, so there is some control on how much the gradient can change. You can construct a height sample like that, and then ask your computer code to take a section of it. This is what you get. These really are loops — hard to see here, but they are loops. These are Gaussian free field level sets, and I can ask: if I take a segment of one, say this part, or an even longer part, is it an SLE or not? I do my tests, and the answer is yes, it is SLE. Of course, I knew it would be, because my mathematician brothers have already proved it: Schramm and Sheffield proved that the Gaussian free field level lines are SLE_4. So it was a foregone conclusion, but we do it numerically, and it also comes out. So the Gaussian free field is a rough surface, produced like that by a computer simulation. It has logarithmic roughness, so its roughness exponent is 0, and it has a loop correlation exponent of one half. Its level lines have the SLE fractal dimension 1 plus kappa over 8, which is 3/2 — to be distinguished from the fractal dimension of the surface itself. These are all the exponents of the Gaussian free field level sets, which can be calculated numerically, and they are of course also correct, exact results.
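A minimal sketch of one common way to generate such a field — a spectral method, white noise weighted by 1/|k| in Fourier space, rather than the constrained local sampling described in the lecture; the lattice size and seed are illustrative:

```python
import numpy as np

def gaussian_free_field(n, seed=0):
    """Sample an n x n discrete Gaussian free field spectrally: white noise
    in Fourier space weighted by 1/|k|, with the zero mode removed."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(size=(n, n))
    k = 2 * np.pi * np.fft.fftfreq(n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    kmag = np.sqrt(kx ** 2 + ky ** 2)
    kmag[0, 0] = np.inf                 # kill the zero mode (fixes the mean)
    h_hat = np.fft.fft2(noise) / kmag
    return np.real(np.fft.ifft2(h_hat))

h = gaussian_free_field(128)
assert h.shape == (128, 128)
assert abs(h.mean()) < 1e-10            # zero mode removed -> zero mean
level_set = h >= 0                      # a level cut at height zero
assert 0.2 < level_set.mean() < 0.8     # both phases occupy a finite fraction
```

The boolean array `level_set` is exactly the "section" described above; its cluster boundaries are the loops one would then feed to the SLE tests.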
And I can also connect the mass of a cluster to its radius by yet another fractal dimension, which we haven't discussed so far, and it has this relation. So lots of things can be done in this way. Now, a problem which is close to my heart: what is the KPZ surface? The first question is, what is the KPZ equation? The Kardar-Parisi-Zhang equation is this equation: it gives the growth of a height in terms of its second spatial derivative and the square of its first derivative. So the operator on the right-hand side is not linear — hence it was interesting from the start, and by now we know that a lot of phenomena are governed by this equation. You can set it up in one, two, or any dimension you like. It is exactly solvable in one dimension, but in two dimensions it has not been solved yet; a lot of numerical work has been done on it, but there is no analytic solution. So this produces a height profile, which I call a KPZ surface. These terms have physical meanings, yes. I have to tell you a little about surface growth. You let the surface grow by letting particles come down from above; they stick to the surface and the surface grows. The first term is relaxation: if you have valleys, there is a tendency to fill them in, so the surface likes to become smoother — that is what the first term says. The second term says that you have a sticking probability on the sides: if a particle arrives here, it sticks here; it doesn't fall to the bottom. So you have growth along the local normal as well as smoothing. In the end the surface does not become completely smooth; it stays rough, but with this property. It is this nonlinear term that makes the KPZ surface very different from other surfaces. So you let the surface grow on the computer — and this term is white noise — and then you take level sets.
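A minimal one-dimensional sketch of the equation (the surfaces in the lecture are two-dimensional; the parameters here are illustrative). It also checks the point about the mean: the nonlinear term is non-negative, so the KPZ mean height must exceed that of the Edwards-Wilkinson equation (lambda = 0) grown with the same noise:

```python
import numpy as np

# 1D KPZ, Euler scheme with periodic boundaries:
#   dh/dt = nu * d2h/dx2 + (lam/2) * (dh/dx)^2 + noise.
# Setting lam = 0 gives the linear Edwards-Wilkinson equation.
def grow(lam, steps=2000, L=128, nu=1.0, D=0.1, dt=0.01, seed=1):
    rng = np.random.default_rng(seed)
    h = np.zeros(L)
    for _ in range(steps):
        lap = np.roll(h, 1) - 2 * h + np.roll(h, -1)
        grad = (np.roll(h, -1) - np.roll(h, 1)) / 2
        eta = rng.normal(0.0, np.sqrt(2 * D / dt), L)   # white noise
        h = h + dt * (nu * lap + (lam / 2) * grad ** 2 + eta)
    return h

h_kpz = grow(lam=1.0)
h_ew = grow(lam=0.0)       # identical noise sequence (same seed)
# Laplacian and noise contribute zero net drift to the spatial mean, so the
# difference in means is exactly the accumulated (lam/2)*(grad h)^2 >= 0.
assert h_kpz.mean() > h_ew.mean()
# Both surfaces stay rough: finite, nonzero width around the mean.
w = np.sqrt(((h_kpz - h_kpz.mean()) ** 2).mean())
assert np.isfinite(w) and w > 0
```

The strictly positive mean growth is exactly why, as discussed below, the height at which one takes the level cut has to move along with the surface.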
You cut it like the contours. [Question.] No, there isn't: you can see that the mean of dh/dt is always positive, whereas in Edwards-Wilkinson it is not. So you take level cuts; that gives you these level sets, and then you take the boundaries of these level sets — which would be, for example, a line here — and you ask whether they are SLE. In this graph the black areas have negative height; we have only colored the positive heights. So all the black is negative — negative in the sense that I take a slice and some heights fall below it. First question first: what is the roughness of this thing? A surface has exponents describing its roughness, and the roughness is defined in this way: it shows how much spread you have around the mean. The roughness grows like t to the power beta — t is time — and then saturates at L to the power alpha, where L is the size of the sample. Obviously, if the sample is infinitely large, it will always be in the t-to-the-beta regime, always growing; it is only interesting if you make the sample finite, and then the roughness scales with the finite size. And in this graph we have calculated these exponents, for surfaces we grew on the computer, not in the lab. Alpha is 0.37, beta is 0.23, and it has been repeated for samples of different sizes; of course the green sample, the largest one, gives the best fit. For alpha I have just four points, because I took just four sizes — because of that equation, you cannot get more points. So these are the numbers we computed. Note that z is the ratio of alpha to beta, and note that z plus alpha is 1.97; the analytic result is 2, so this should be exactly 2, and being 3% off I consider a success. Now I ask if it is SLE. Yes? No — height, height, height.
Yes, at this stage I'm only doing what everybody does: everybody calculates the roughness of the surface. So alpha, beta, z are roughness exponents of the surface; all I wanted to show was that this surface is really KPZ. Now I'm doing the SLE part: I am now calculating things about the level sets. Yeah? So these are the level sets; their edges, I claim, may be SLE, and I run these numbers for them. And you see that theta squared — the variance of the winding angle — fits quite well with kappa equal to 8/3; the driving has a normal distribution, the correlation over time is 0, and the variance grows with time, or in fact with log L, as we expected. So from this analysis one would say: OK, I have an SLE with kappa equal to 8/3. There are other models in the same universality class as KPZ, and you can repeat this calculation for them. You find that they also fit kappa equal to 8/3, although they have different intercepts — that doesn't matter, because we don't have an invariant definition of the intercept. So the claim is that the KPZ surface level sets have these parameters: the fractal dimension of the hulls is 4/3, kappa is 8/3, and x_l, just like for the Gaussian free field, is in fact 0.5. This is important, because there is a conjecture — an unproven conjecture — that x_l is a super-universal exponent. So here is a chance for you to prove that x_l is one half. These other numbers — alpha equal to 0.37 and so on — were given before; these other constants have to do with level-set properties which I didn't teach, so we don't need to worry about them. [Question.] Yes — super-universal means that it doesn't change from one universality class to another. [Question.] Two, three slides back. OK, let's see here. This one? Yes: from this you can see that the mean of dh/dt is positive, so the surface is always growing. And since it is growing, the height at which I cut must also move with it, and the question of which height to cut at becomes important. That's all.
I had a slide explaining the solution to this: you have to cut above a certain level. Because, you see, if you don't cut the height at the right place — if you cut too near the valleys — you will just see very scattered holes, and likewise if you cut very high up. You have to cut at the mean, in some sense; but here the mean is moving, so you should be careful where you cut. Yes — Edwards-Wilkinson does not have a moving mean: the mean of dh/dt is zero, and you can easily find the level at which to cut. So here is my pet conjecture, unproven: that x_l is 1/2, alpha is 1/3, z is 5/3, and beta is 1/5. Consistent with these numerical results, but unproven. One day, eventually, I will do it. OK. What we just did to generate a lot of random paths was to create loops by cutting heights. But I also observed that in the O(n) model, in the Ising model, in the q-state Potts model, loops play an important role. So it would be nice to have a direct route to the loops, rather than doing what I just did. What I just did was: you gave me a lot of loops, I cut out a piece, and I showed that the piece is SLE. I can do that, but I'm wasting information, because the whole loop and its distribution must have a meaning. So I need a statistical physics of loops. Let's take the Ising model. The high-temperature expansion of the Ising model looks like this. If you haven't seen it, I can reproduce it for you after the lecture — it takes about five minutes; it's not very hard. But for the moment, believe me that I can expand the Ising partition function like this: some variable x, which depends on temperature, raised to the power of the length of the loop, summed over all possible loops. Or this can be rewritten in this way on the square lattice: take all closed polygons of length L, and each contributes x to the power of the perimeter of the polygon.
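The loop expansion can be verified exactly on the smallest nontrivial example. A sketch for a single square of four Ising spins, where the only closed polygon is the square itself, so the expansion truncates after one term:

```python
import math
from itertools import product

K = 0.7   # coupling beta*J; any value works, the identity is exact

# Brute-force partition function of 4 Ising spins on a single square
# (bonds 0-1, 1-2, 2-3, 3-0).
bonds = [(0, 1), (1, 2), (2, 3), (3, 0)]
Z = sum(math.exp(K * sum(s[i] * s[j] for i, j in bonds))
        for s in product([-1, 1], repeat=4))

# High-temperature (loop) expansion: e^{K s s'} = cosh(K) (1 + x s s') with
# x = tanh(K); summing over spins kills every term except the closed loops.
# Here the only closed loop is the square of length 4, so
#   Z = 2^4 * cosh(K)^4 * (1 + x^4).
x = math.tanh(K)
Z_loops = 16 * math.cosh(K) ** 4 * (1 + x ** 4)
assert abs(Z - Z_loops) < 1e-9 * Z
```

This is the five-minute derivation mentioned above in miniature: the spin sum eliminates every bond subset whose vertices do not all have even degree, leaving exactly the closed polygons.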
And W_L will be the number of ways you can draw a closed polygon of length L on the square lattice. The Ising model, in this sense, is defined on the square lattice — and of course you can define it on any lattice. So now I take any closed polygon and call it a loop. It is totally geometric; no spins involved. I just count them, and that tells me the thermodynamics of the Ising model — a totally combinatorial problem, and in fact one that mathematicians are interested in. There is a combinatorial problem: in how many ways can you draw a given polygon of length L on the square lattice? An unsolved problem. It is unsolved, although you see a solution here — but this is a physicist's solution: if you ask me, I can only give you the leading behavior of W_L, not its full form. And in fact it has a very interesting form — well, not a known one, but a phenomenological one, which people have fitted to the data, and which diverges at the critical point. So this is an alternative way to look at the Ising model, in terms of loops. I know that this model is conformally invariant, and I know that it is related to SLE_3. So I have here a loop ensemble related to the Ising model, and this loop ensemble therefore has to be conformally invariant. As I said, this is a combinatorially important problem: in this paper, they have enumerated polygons up to length 90 — completely unbelievable numbers come out — and something about the Ising model is hidden in these numbers. So the future of this work, which we have been doing, is that you have to develop tools to analyze two-dimensional scale-invariant loop ensembles, because they just come up too many times, and it is clear that they are important — and we don't have good tools. The mathematical entity called the conformal loop ensemble is a possible tool. What it does is apply the same notion of conformal invariance to loops.
That is to say, I have an ensemble of loops in a domain, and I map it conformally to another domain. These loops go to other loops; the number of loops does not change, but their shapes do. So the ensemble is conformally invariant if the probability distribution of loops here is the same as the probability distribution of loops there. I was hoping to show you a calculation using conformal loop ensembles, but it was at the very end of my talk and it got cut. Yes? No, I have — OK, let's start here. I have a physics problem here which can be reformulated in terms of loops. This is not an approximation; this is an exact reformulation. So what do I have here? I have just loops. You see, the question was very relevant: what has Ising got to do with loops? It has everything to do with loops — these are real geometrical objects. So here is the claim: you can describe all thermal properties of a model such as Ising using geometrical objects. No — this is a high-temperature expansion which is valid all the way down. It goes from above down to the critical temperature. And near the critical temperature it should give me all the exponents, so I should be able to get all the exponents in terms of geometrical objects. You can study this problem entirely from a mathematician's point of view — in how many ways can I fit a polygon on the lattice — and from the numbers they get, I should be able to answer questions about the Ising model. I just don't know how. There are things I can do, but I do them case by case; I don't have a methodology to deal with all loop models. When I say "I", I mean physicists, not me personally. So it sort of impresses on you that this is a geometrical problem. What I am hoping to be able to do in the future is develop a tool to deal with conformal loop ensembles.
So these theories near the critical temperature are definitely conformal, scale-free, and so on. So these tools must exist. OK, let's stop here. Thank you.