Integrability and number theory: tomorrow afternoon, starting from 3. Who wants to be here? OK, so we are starting our afternoon session. The first talk will be by Sevak Mkrtchyan, and he will speak about the point processes that arise at turning points of large lozenge tilings. Thank you very much. Thanks to the organizers for the opportunity to be present at this wonderful workshop. I'm going to talk about point processes that arise in the lozenge tiling model. The model has already been introduced today, but I'll introduce it again. So what is the model? Well, consider a portion of the triangular lattice. If you take two neighboring triangles and glue them together, you get a rhombus, and there are three different orientations in which the rhombus can be drawn. These three rhombi are called lozenges. Now, you can take some region of the triangular lattice that can be tiled by these lozenges. The lozenge tiling model says: take such a region; there are many different tilings of this region by lozenges; pick a random one, say a uniformly random one, and understand its properties as the size of the system goes to infinity. There are many different ways this model can be described. It can be written in terms of the dimer model, or in terms of non-intersecting lattice paths. The description that will be relevant for us is called plane partitions, or skew plane partitions. The way this works is you take, say, a rectangular region, and you cut out a staircase shape, a Young diagram, from a corner. In the rest, you fill in non-negative integers in such a way that they are weakly decreasing in the southeast and southwest directions. This is a kind of two-dimensional analog of partitions. Then you draw the height diagram associated to this.
So if you see the number three in a cell, you stack three cubes on top of that cell; if you see the number four, you stack four cubes; and so on. And you get something like this. If the numbers were all zero, I would get this picture: no cubes, just an empty room with a wall at the end. With these numbers, I get a picture like this: the first cell has three boxes, the next one has four, and so on. And when you look at this as a two-dimensional picture, what you're seeing is a tiling of a region by lozenges. Because I didn't specify a maximum for the numbers that go in here, as a tiling problem what you're doing is tiling this infinite region. And what does the cut-out part correspond to? It means that way up high, at infinity, you have some boundary conditions: if you go high up, you are going to see this tiling, then this, and so on. So the boundary specified by the cut-out Young diagram ends up being a condition at infinity. So this is the model. Now, what kind of results are there? As Ken mentioned, this model exhibits the phenomenon of the limit shape. If, for example, you take a large n by n by n hexagonal region, in this case a 75 by 75 by 75 hexagon, and you take a uniformly random tiling of this hexagon, then it's almost always going to look like this. In particular, I see one of my colors is showing up as white; hopefully it's still distinguishable for you. So what are you seeing? Even though we took a random tiling, in the corners you don't see any randomness. For example, if you look at this corner, what you are going to see is a tiling that just looks like this: only one type of lozenge is going to appear in this corner.
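As a quick aside, the stacking rule just described can be checked mechanically. This is a minimal sketch of my own, not from the talk; the matrix convention, with rows and columns playing the roles of the two diagonal directions, is an assumption:

```python
# A plane partition: a 2D array of non-negative integers that weakly
# decreases along rows and along columns (the talk's "southeast and
# southwest" directions, in matrix coordinates).
def is_plane_partition(pi):
    rows, cols = len(pi), len(pi[0])
    for i in range(rows):
        for j in range(cols):
            if i + 1 < rows and pi[i + 1][j] > pi[i][j]:
                return False  # increases going down a column
            if j + 1 < cols and pi[i][j + 1] > pi[i][j]:
                return False  # increases going along a row
    return True

def volume(pi):
    # total number of stacked unit cubes
    return sum(sum(row) for row in pi)

pi = [[4, 3, 1],
      [3, 2, 0],
      [1, 0, 0]]
```

Reading the entry 4 as a stack of four cubes and so on reproduces exactly the pile-of-boxes picture from the slide.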
Only one type of lozenge is going to appear in this corner with probability 1, and so on. So the corners are deterministic; they're frozen. The randomness stays in the middle. And then there is this curve, which looks like a circle, and as n goes to infinity it actually becomes a perfect circle, the circle inscribed in the hexagon in this case. That curve divides the two kinds of regions. In statistical mechanical language, you would say there are frozen regions, there is a liquid region, and there is a phase transition happening along this curve, which is called the frozen boundary. The first results showing this kind of thing were obtained by Cohn, Larsen, and Propp, and by Cohn, Kenyon, and Propp. They used a variational principle to show that there is a surface extremizing a certain functional. If you think about this as a tiling problem, then you have this inscribed circle, outside of which you have the frozen regions and inside of which you have the liquid region. But if you think about this as a pile of boxes, then it's a three-dimensional stepped surface. And it turns out that there is a particular surface such that, with probability 1, the random surface is within epsilon of it, for arbitrary epsilon, as n goes to infinity. So these are limit shape results, and there are many, many other results that I'm not going to mention. Kenyon and Okounkov studied arbitrary polygonal regions: you don't have to have a hexagon; you can take any polygon whose edges are parallel to the three lattice directions, look at tilings of it, and understand, for example, the frozen boundary, called the arctic curve, or the nature of the randomness that you observe at various points. So this is for bounded regions. As I said, we are going to consider the plane partition model as well.
So here, because the region you are tiling is unbounded, the number of configurations is infinite, and the uniform measure doesn't make sense. The measure considered here is the so-called volume measure. You pick a parameter q between 0 and 1, and you say the probability of a configuration is proportional to q raised to the volume of that configuration, where the volume is just the total number of boxes. A different way to think about this measure: you associate the weight q to each box, and the probability of a configuration is proportional to the product of the weights of all its boxes. All I have done here is write q to the volume as a product of volume-many q's, but you'll see later why I'm writing it that way. To be precise, the probability of a configuration pi is q to the volume of pi divided by the normalizing constant, which is called the partition function, as in statistical mechanics: the sum of q to the volume of pi over all pi. Does the model make sense? So, on to results. What you want to study is how this thing behaves as n goes to infinity. In this language, there are some results that go back to the 80s with Bernard Nienhuis and co-authors. In the late 90s and early 2000s, Okounkov and Reshetikhin studied the following kind of boundary conditions. You take, say, an a n by b n room and look at a back wall that looks like this: c1 n, c2 n, c3 n, and so on, where c1, c2, c3 are constants and n goes to infinity. You're tiling a room that looks like this, and you want to understand a random tiling of it with respect to the q to the volume measure. This was later generalized, with co-authors, to rooms with a piecewise linear back wall, where the back wall can have arbitrary slope in this range.
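The q-to-the-volume measure is easy to experiment with on small truncated examples. The sketch below is my own illustration; truncating the entries to a finite maximum is an assumption needed to make the brute-force sum finite, whereas the model in the talk is unbounded:

```python
from itertools import product

def plane_partitions(rows, cols, max_entry):
    # Brute-force enumeration of plane partitions in a rows x cols grid
    # with entries bounded by max_entry (a finite truncation for the demo).
    for entries in product(range(max_entry + 1), repeat=rows * cols):
        pi = [list(entries[r * cols:(r + 1) * cols]) for r in range(rows)]
        ok = all(
            (i + 1 >= rows or pi[i + 1][j] <= pi[i][j]) and
            (j + 1 >= cols or pi[i][j + 1] <= pi[i][j])
            for i in range(rows) for j in range(cols)
        )
        if ok:
            yield pi

q = 0.5
configs = list(plane_partitions(2, 2, 2))
weights = [q ** sum(map(sum, pi)) for pi in configs]
Z = sum(weights)                 # truncated partition function
probs = [w / Z for w in weights]
```

The probabilities sum to one by construction, and the empty configuration, with volume zero, is the most likely one for any q below 1.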
What I mean by this: for example, if you have a wall that has slope zero, that means it is discretely approximated by something like this. Most generally, you can have an arbitrary piecewise linear back wall with slopes in this range. So you can take a shape like this as your floor, write down your integers here, take the room corresponding to it, and tile it randomly. This is an exact sample; you get something like this. Here you don't really see the boxes because they're too small; each box is roughly the size of one pixel. But you can see that there is this random region, the liquid region, there are these frozen regions, and there is a particular curve separating them. And you can show that there is this curve; it's a kind of law of large numbers result. All the results I have mentioned so far concern this kind of limit shape or limit curve. What about the fluctuations? What is the randomness in the liquid region, or in the other regions? To study those, you study correlation functions. If you look at something like this, you can notice that if you just know the positions of the horizontal lozenges, just the light-colored ones, then that completely determines the whole picture: if you know where the stacks of cubes end, you know the whole configuration. So to understand the fluctuations, you can look at the local correlation functions of the positions of the horizontal tiles. If you look at just the positions of the horizontal tiles, you get a discrete two-dimensional point process, and you look at the correlation functions of that point process. Okounkov and Reshetikhin showed that this point process is actually determinantal and wrote a formula for the correlation kernel. Sorry, I pressed the wrong button.
So it's a determinantal point process with this correlation kernel, where all the information about the boundary is encoded in this function Phi. This is a product over the whole boundary. There are various papers which analyzed the point processes that you get in this model as the size grows, the limiting point processes. For example, Okounkov and Reshetikhin showed that, in the case they studied, in the bulk what you get is the incomplete beta kernel; on the boundary of the liquid region you get the Airy process; at these cusps that appear you get the Pearcey process. Boutillier studied what happens when you go very high up in a region like this: you get what's called the bead process, and so on. So there are various papers which give you the limiting correlation functions in these various regions. The rest of my talk is going to concentrate on one point, which I haven't marked here: a point like this. This is a special point, called the turning point. In 2006, Okounkov and Reshetikhin wrote a paper with a catchy title, The Birth of a Random Matrix, where they conjectured the following things. These points are called turning points because three phases meet there: the liquid phase meets two frozen phases. Or in other words, yes, sorry, a question: in this formula, what is the boundary? Oh, the boundary is encoded here. This is a product with one factor for each segment. So if your boundary is, say, this, then this is going to have one, two, three, four, five, six, seven factors, and each factor gets a plus sign or a minus sign depending on whether the boundary is going up or down there. So if you look at this point, what's happening is you have a frozen region where only one tile appears, and another frozen region where only one tile appears, but a different one.
And then you're turning from this one to this one, so they call it a turning point. They conjectured that the process you see here is universal: it should be the same as the GUE corners process. On the next slide, I'm going to explain exactly what those things are. One thing I want to mention is that if you're studying the q to the volume measure, to get anything interesting you should take the limit as q goes to 1 from below. Otherwise, if q stays fixed, the average size of your configuration stays finite, and nothing interesting happens. The other thing I want to mention is that in all the examples we have seen so far, there are only frozen regions where one type of lozenge dominates; a frozen region just means a facet tiled completely by one tile. OK, so let's look at these turning points. In other words, we're looking at what's happening at this edge. Let's look at the first slice. From the geometry, you can tell that there is only going to be one particle on the first slice, the red particle. On the second slice there are going to be exactly two particles, the two green particles, and the red one is always going to be in between the two green ones. On the third slice there are three particles, the blue ones, and the green ones are going to be in between them. And so on: the k-th slice is going to have exactly k particles, and the particles on the previous slice are going to be in between them. That's called interlacing. So on the discrete side, we're going to look at the first k slices and at all the particles in the first k slices; they form this kind of triangular array. On the other side, let's take a Hermitian matrix. If you take a Hermitian matrix and look at its eigenvalues, of course, they are real.
So if you take, say, a k by k Hermitian matrix, it has k real eigenvalues. If you look at its k minus 1 by k minus 1 corner, it's going to have k minus 1 eigenvalues, and those eigenvalues are going to be in between the k eigenvalues of the original matrix: the eigenvalues interlace. So what we are going to do is take a GUE matrix and look at its 1 by 1 corner, 2 by 2 corner, 3 by 3 corner, and so on, and look at all those eigenvalues. You get this one eigenvalue, these two eigenvalues, with this one in between those two, and so on: again a triangular interlacing array. The joint distribution of this array is called the GUE corners process. And what is the Okounkov-Reshetikhin conjecture? It says that if you take the positions of the horizontal lozenges on the first k slices, center them around this height, and scale them appropriately, then as n goes to infinity they converge to the GUE corners process. In the paper where they conjectured this, they actually proved that it holds in one case: for skew plane partitions with these boundary conditions, the point process at the turning point that arises here, appropriately centered and scaled, converges to the GUE corners process. But they conjectured that this should hold universally. So first I'm going to present a result which confirms this. Then I'm going to present a regime where it seems to fail, because I've modified the measure somehow; but then we will see that it actually hasn't completely disappeared, the GUE corners process is still somehow there. And then, if there is time, I'll show you a regime where we have completely killed the GUE corners process. So first, let's look at something like this. We are interested in what's happening at this point. Well, think about it this way. Let's draw a line here.
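The corners construction just described is easy to simulate. A hedged sketch of my own, not from the talk: sample a GUE matrix, take the eigenvalues of its principal corners, and check that they interlace (which is Cauchy's interlacing theorem, so it holds for every sample); the normalization of the Gaussian entries is one common convention:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_gue(n, rng):
    # GUE: Hermitian matrix built from independent complex Gaussians
    a = rng.normal(size=(n, n))
    b = rng.normal(size=(n, n))
    m = (a + 1j * b) / np.sqrt(2)
    return (m + m.conj().T) / np.sqrt(2)

n = 5
H = sample_gue(n, rng)
# eigenvalues of the k x k top-left corners, k = 1..n, each sorted ascending
corners = [np.sort(np.linalg.eigvalsh(H[:k, :k])) for k in range(1, n + 1)]

def interlaces(lower, upper):
    # lower has k values, upper has k + 1: upper[i] <= lower[i] <= upper[i+1]
    return all(upper[i] <= lower[i] <= upper[i + 1] for i in range(len(lower)))
```

Collecting all the corner eigenvalues gives exactly the triangular interlacing array whose joint law is the GUE corners process.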
And let's condition on the positions of the horizontal lozenges along this line. If you know the positions of the horizontal lozenges on this line, the part beyond it doesn't really matter anymore, and you can look at just this region. In other words, look at a region like this, condition on what's happening on this last slice, and try to understand what's happening at the turning point. We want this to grow: in this picture, as n grows, this is going to grow like n as well. So take a region like this of size n, and specify the positions of the horizontal lozenges on this n-th line; there are going to be exactly n lozenges there. Then take the limit as n goes to infinity and understand what happens on the first k slices. Now, as n goes to infinity, this is going to grow, and what you can do is specify an arbitrary limiting density function for the positions of the horizontal tiles. So take any density function here, and say you have a sequence of discrete approximations of that density function; take the regions corresponding to those discrete approximations and take the limit as n goes to infinity. With Leonid Petrov, a few years ago, we showed the following. Take the q to the volume measure and an arbitrary profile. The f here, think of it as the density function; the way I've written it, it's not literally the density function, but it essentially corresponds to it. So take an arbitrary profile and a sequence of shapes which approximate it. This is distance n; you're specifying an arbitrary profile here, given by a function f, taking discrete approximations of it as n goes to infinity, and studying the process here.
And we showed that in the limit as n goes to infinity, with q going to one from below at the right scaling, if you center and scale the particles near this point, then they converge to the GUE corners process. This approach of looking at regions like this, conditioning on the last slice, and understanding the result was first used by Vadim Gorin and Greta Panova in the case of the uniform measure, where they obtained this result. Notice that the parameter q that I have is e to the negative gamma over n; if you set gamma to zero, you get the uniform measure, so the gamma equals zero case recovers their result. The other thing to notice: I said let's cut it like this, but it turns out that the hexagon is a particular case of this. We said we can specify an arbitrary density f, so let's specify the density that looks like this: tightly packed, then nothing, then tightly packed. If you think about the tiling, these two horizontal lozenges force this one to be horizontal, which forces this one, and this one, and the same here. So if you specify that all of these are horizontal, that forces this tiling here. So if I specify the density to be one, zero, one, then essentially I'm just tiling this hexagon. In particular, this says that if you take a q to the volume tiling of the hexagon, then over here you're going to see the GUE corners process. In the uniform case, when you're just taking a uniform tiling of the hexagon, this was obtained by Eric Nordenstam and Kurt Johansson, which of course preceded the result of Vadim Gorin and Greta Panova. Okay, so these are the results which tell us that we get the GUE corners process. Yes? Yes, we need a speed of convergence; think of this as clarifying that point.
Yeah, I mean, we didn't try very hard to get an optimal speed of convergence, but we do need some speed of convergence, yes. So, as I said, I'm going to look at slightly different measures. All the results mentioned so far work with either the uniform measure or the q to the volume measure, with different boundary conditions. What happens if you modify the measure? As I said, q to the volume can be thought of as a product over all the boxes of a weight q per box, and up to now all the boxes had the same weight. It is natural to consider a regime where the weight of a box depends on its position. So let's take weights where the weight of a box depends only on its horizontal position: the boxes on the first slice all have the same weight, say q1, the boxes on the second slice have weight q2, and so on. With weights like that, the probability of this plane partition would be proportional to, say, q1 to the zero, because there is no box on the first slice, times q2 squared, because there are two boxes on the second slice, times q3 to the one, then four here, five, and so on. So this is the measure we will consider. We could take arbitrary q's, but what I'm going to do is consider q's which are periodic: fix some period k and consider weights which are periodic with period k. So you have weights q0, q1, up to q k minus 1, then q0, q1 again, and so on, periodic weights. Before, the limit to study was q going to one from below. The question now is, what limit should we consider? What is the natural thermodynamic limit of this system?
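The slice-dependent bookkeeping can be sketched in a few lines. Here I index slices by the column of the array, which is an assumed simplification of the talk's slicing of the three-dimensional picture; the weight-counting logic is the same, and the example arrays and weights are hypothetical:

```python
from collections import defaultdict

def slice_counts(pi):
    # number of boxes carried by each slice; a "slice" here is a column
    # of the plane-partition array (an assumed convention for the demo)
    counts = defaultdict(int)
    for row in pi:
        for j, h in enumerate(row):
            counts[j] += h
    return counts

def weight(pi, qs):
    # unnormalized probability weight: product over slices t of
    # qs[t] raised to the number of boxes on slice t
    w = 1.0
    for t, c in slice_counts(pi).items():
        w *= qs[t] ** c
    return w

pi = [[3, 1, 0],
      [2, 1, 0]]
qs = [0.5, 0.6, 0.7]
```

With all the qs equal to a single q, this reduces to the plain q-to-the-volume weight.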
Well, you could say all these q's should go to one from below, but it turns out that's not that interesting: you can pretty much replace all the q's by their geometric average, and the processes and shapes you get are going to be the same. What's more interesting is to consider weights like this: you still have the q, but the weights are alpha1 q, alpha2 q, alpha3 q, and so on, still periodic, where the numbers alpha1, alpha2, alpha3 are fixed and q goes to one from below. The alphas can be arbitrary, and you study the limit as q goes to one from below. So this is the natural thermodynamic limit to consider in this case. Of course, the product of the alphas should be one. If it's less than one, then the average size of your configuration stays finite and you don't get an interesting thermodynamic limit; if the product is bigger than one, then your partition function is infinite. So the natural case to consider is when the product is one. However, it turns out that if any single alpha is bigger than one, then you still have an infinite partition function, and the measure doesn't make sense. You can think about it this way: imagine the weight given to boxes here is two. Then if I have m boxes here, the weight is two to the m, and the sum over these partitions alone is already infinite. So your partition function is infinite. You can say, okay, I'll just make sure that the alphas which are bigger than one don't fall in this corner, but it turns out that that's not good enough: as long as the product of the alphas is one and there is at least one weight bigger than one, the partition function is infinite. So it looks like the measure doesn't make sense.
Actually, you can make sense of this measure. In the case of two-periodic weights, you can do it by picking the boundary correctly, but that doesn't scale to general k-periodic weights. What you can do instead is say: I want these k-periodic weights, but I'm going to break the periodicity in just one slice, this middle slice. Remember that we want to understand what's happening at, say, a turning point, and the thinking is that if I make some small change very far away from my turning point, that shouldn't make a difference for the processes I observe at this point. So one modification of the weight on the middle slice shouldn't have an effect on the processes we are interested in. That's what we're going to do: take arbitrary alphas, take the product of all the ones which are less than one, and put that extra factor on the middle slice; other than that, it's completely periodic. This just guarantees that the partition function is finite, so the measure makes sense. As I said, that shouldn't have any effect on the processes you observe at the turning points, but it does have an effect on the process near the slice where you put it. If you look at an actual random tiling, again a very large sample where the boxes are small, each box roughly one pixel, and with the color scheme changed a little bit: this is what would happen if you just had q to the volume, and this is what happens when you have, say, two-periodic weights with this extra damping weight on the middle slice. What you observe is that the system now exhibits a first-order phase transition along the middle slice: the limit surface is not differentiable along this slice, and neither is the frozen boundary, neither is the curve over here.
And that suggests that the local point process here is going to be different, and you can actually write it down. It is, of course, still determinantal, with a kernel that looks like this. Now, if you pick two points t1 and t2, you should look at this B(t1, t2). You have this sequence of weights alpha, one over alpha, alpha, one over alpha, and so on (this is the two-periodic case, by the way), except that on the middle slice, slice zero, where the weight is supposed to be alpha if everything were perfectly periodic, we have replaced it by one to make sure the measure makes sense. That's what we were observing here. And B(t1, t2) just counts the number of ones in the weight sequence between t1 and t2; that's what goes in here. If the measure were just q to the volume, this B(t1, t2) would be zero, this factor would disappear, and you would just recover the incomplete beta kernel. But now you have this extra term coming in. This is a point process on the middle slice which is not translation invariant in the horizontal direction, but of course is translation invariant in the vertical direction. So that's the effect of putting the extra weight on the middle slice. As for what happens at the turning points: to simplify, we're going to look not at the general shapes I was considering, but at a simple floor, infinite in one direction and bounded in the other. And let's look at the following thing. Say we have weights alpha1 here, alpha2, alpha3, and so on, up to alpha k. It turns out that the quantities to look at are not these alphas but their products.
So beta1 is going to be this, beta2 is going to be the product of these two, beta3 the product of these three, and so on. Now, with k-periodic weights, beta1 through beta k don't have to be distinct, so let m be the number of distinct betas. It turns out that the system develops m turning points. In particular, if you take k-periodic weights, then generically you end up with k turning points; it's just that if some of the betas coincide, the number of turning points goes down. So where before you had one turning point, now, if you take four-periodic weights, over here you generically have four turning points. Now, what is a turning point? You're supposed to get a frozen region here, here, here, here, and here, and the system actually develops new types of frozen regions: frozen regions where you don't have just one type of tile, but two types of tiles in a deterministic pattern. In this picture, at the top you have this frozen region, next this one, next this one, next this one. So these new types of frozen regions appear. All of these frozen regions are facets; they are essentially vertical walls. And if you project them to the floor, the first wall projects to this line, the next wall projects to this line, and they divide this angle into equal parts. So in this case you have these four types of frozen regions, and they project to these four lines; they're just vertical facets.
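The passage from the alphas to the betas, and the count of distinct betas, can be made concrete. A small sketch of mine, using exact rational arithmetic so that "distinct" is unambiguous; the example weights are hypothetical, chosen so that two betas coincide:

```python
from fractions import Fraction

def betas(alphas):
    # beta_j = alpha_1 * alpha_2 * ... * alpha_j (partial products)
    out, p = [], Fraction(1)
    for a in alphas:
        p *= a
        out.append(p)
    return out

# four-periodic weights whose product is one, as the measure requires
alphas = [Fraction(2), Fraction(1, 2), Fraction(2), Fraction(1, 2)]
m = len(set(betas(alphas)))  # number of distinct betas = turning points
```

Here the betas are 2, 1, 2, 1, so m = 2: four-periodic weights generically give four turning points, but these coincidences collapse them down to two.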
And it turns out that if you specify any rational multiple of pi over two, you can pick weights realizing it: pick any rational multiple of pi over two and any periodic pattern, say two up, one down, then one up, one down, and repeat; this gives you some slope, a rational multiple of pi over two. Then it's possible to pick weights so that the system generates a frozen region that looks like that. So everything is possible. Moreover, if you're taking four-periodic weights, as I said, the projections are going to be these five lines, so the first frozen region has to be like this, and the second one has to be like this. The third one can be either two up, two down, or up, down, up, down, and so on. So there are two different periodic patterns which project to this line, and by picking the weights correctly you can get whichever one you want. And if some of these betas coincide, some of the turning points disappear: one, or a few, of these lines can disappear. Okay, so those are the new types of frozen regions that you can find. What about the point processes? Well, it turns out that because of these extra turning points, you cannot possibly see the GUE corners process anymore, and it's easy to see this just by counting particles. For simplicity, look at the case of two-periodic weights, so there are going to be two turning points. On the first slice there is only one particle; it has to be either here or here. Say on the first slice here there's one particle, and here there are no particles. On the second slice there are two particles, and this one has to be in between, so one of them has to be here and the other ends up here: one particle here, one particle here.
On the third slice you have three particles: one has to be above this, one below this, and the third ends up being here. So you have two particles here, one here. If you continue this way, you see that the number of particles on the slices looks like this. And of course, if you look at just this process, it is not going to be the GUE corners process, and neither is this one. In general, what happens is you have one particle, two particles, three particles, four particles, and so on, divided by some curves like this. Once you get near each turning point, you get a process where the number of particles looks like this: a few ones, then a few twos, then threes, then fours, and so on. And when you write down the correlation functions, they look like this. So how is this different from the GUE corners process? Look at these N(t1) and N(t2): if these were just t1 and t2, that would be the GUE corners process. But N(t1) is actually counting the following. You have your weights here: beta1, beta2, up to, say, beta k. If you're on some slice, instead of counting the number of slices you have passed, you should count only, say, the number of beta-ones you have seen, or only the number of beta-twos you have seen. That's all that N is. You get a determinantal process which looks like this. So it's not the GUE corners process, but it turns out that the GUE corners process is still there. Basically, you can do this: take any slice which has just one particle, any slice which has two particles, any slice which has three particles, and so on, just making sure you get the numbers of particles right, and look at the joint distribution of only those slices. That actually will be the GUE corners process.
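The slice-selection procedure just described, keeping one slice per particle count, can be sketched as follows. This is my own illustration; the example count sequence is hypothetical, mimicking the "a few ones, then a few twos, then threes" pattern near a turning point:

```python
def extract_corners_slices(counts):
    # For each k = 1, 2, ..., pick the index of the first slice carrying
    # exactly k particles; per the talk, restricting the interlacing array
    # to one such slice per level recovers a GUE-corners-distributed array.
    chosen = {}
    for idx, c in enumerate(counts):
        if c not in chosen:
            chosen[c] = idx
    return [chosen[k] for k in sorted(chosen)]

# numbers of particles per slice near a turning point (hypothetical data)
counts = [1, 1, 2, 2, 2, 3, 4, 4]
```

Any other choice of one slice per count would do just as well; the claim is about the joint law of the selected slices, not about which representatives you pick.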
So you can basically recover it from this. Essentially, what you have here is a bunch of GUE corners processes which are non-trivially correlated with each other. So this was supposed to be a regime outside the GUE corners regime, but you can see that the GUE corners process actually survived. I'll show you one last result, where the GUE corners process really doesn't survive. Let's do the following. Take two-periodic weights, alpha, one over alpha, alpha, one over alpha, and so on, and let's study the following regime. If alpha is one, this is just q to the volume, and we know there is one turning point with the GUE corners process. If alpha is bigger than one, then we know we have two turning points, and these are not the GUE corners process, although the GUE corners process is still somehow hidden there. What happens as alpha goes to one? You try to squeeze these two turning points together. Well, it turns out that it depends on the rate at which alpha goes to one. So we are considering weights alpha q and one over alpha times q, with q going to one, but I also want alpha to go to one. Say q is e to the negative r. Then alpha q is e to the negative r plus something, and one over alpha times q is e to the negative r minus something, and this something should go to zero; what happens depends on the rate. It turns out that there is a phase transition: if this something goes to zero like the square root of r, if you take weights like this, then something different happens. With these weights, in the limit the system behaves exactly like the q to the volume system, the homogeneous system. If you look at the frozen boundary, it's exactly the same; if you look at the point processes in the bulk, they are the same whether it's q to the volume or this. So the two-periodicity of the weights has completely disappeared everywhere.
If you look at the frozen boundary, you will see the Airy process. Everywhere, you don't see the two-periodicity, except at the turning points. The turning points remember: there you actually get a point process that looks like this. Remember the weights that I had, with this parameter gamma on the other side. Once you have a process that looks like this: if gamma were zero, then this would just become w to the t2, z to the t1, and that would be the GUE corners process. But now this is split between these two, and this is not the GUE corners process; the GUE corners process is nowhere to be seen here. You get some kind of one-parameter deformation of that process. I'll stop here. Thank you.