So it's a great pleasure to talk in this celebration of Arthur Ogus's 70th birthday. I'm honored to be asked to talk. So, first of all, credit where credit is due. This is joint work with José Burgos Gil, Javier Fresán, and Omid Amini. And it also grows out of conversations I had with physicists, in particular Pierre Vanhove and his then student, Piotr Tourkine. And one other source of inspiration, in some sense for me the most significant, is a series of conversations over the years with Professor Kato. He has graciously undertaken to explain to me the program he and his collaborators have pursued, a massive program over some years to understand degenerations of Hodge structures. And it's a very subtle and difficult business. And everything I'm going to say is, in some sense, known either to these guys or to those guys. But these guys, by and large, don't know those guys, and so we want to bring together the various schools. And in particular, the massive and subtle program of Kato and collaborators yields many, many invariants associated to degenerations of Hodge structures. But only some of them are of interest. Sorry? You might want to amend that: they were of interest to those people. No, I think even Professor Kato, and he's probably here, so he can testify, would admit that some of them are really artifacts of the extra structure you put on Hodge structures. And others are completely fascinating, related to regulators and related to physics. So anyway, let me proceed. What are amplitudes? I wrote down a list of things I want to say here. So let me begin by saying something about how a mathematician attacks physics.
I sort of have to set the level here. There's a story I'd like to tell. When my grandson was four, he was interested in trains. And so Christmas came, and I bought him a train set. And the train set came in this massive box. And it was immediately clear that I had made a big mistake. The train set was much too complicated and too subtle for the kid, and he could make no sense out of all the complicated pieces. And I thought, oh dear, Christmas is ruined. He will be miserable. But not at all. Because in fact, what happened was, even though the train set was too intricate and complicated and subtle and everything, the box was fantastic, with wonderful pictures of trains doing all these exciting things. And so the whole day was passed in a sort of fantasy play with the box. And I think there's a lesson there for mathematics. Physics is too hard for anyone but the dedicated professional physicist to really deal with. On the other hand, physics involves these structures which are completely fascinating mathematically. And it's in that spirit that I want to proceed. So, OK, I want to talk about quantum field theory. And quantum field theory very typically begins with what is really a metaphor. And this metaphor is what they call the path integral. It's a big infinite-dimensional integral that nobody really knows how to attack. One way to attack it that has been developed is the so-called perturbative way. And this is based on an expansion, again inspired by the finite-dimensional case, in which the index set is a certain collection of graphs. So I write Γ for a graph, or for a collection of graphs. And for each graph, there is a coefficient α_Γ. And then there is a variable, which is raised to a power given by the rank of the first homology group of Γ. So that's sort of the basic shape. And we are interested in α_Γ.
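In symbols, the expansion just described has the following shape (writing ℏ for the expansion variable, a notational choice not fixed in the talk):

```latex
S \;\sim\; \sum_{\Gamma}\alpha_\Gamma\,\hbar^{\,\operatorname{rk}H_1(\Gamma)},
```

where the sum runs over the relevant collection of graphs Γ.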
This is the so-called amplitude. So now we're going to cheat, because we're going to write down a number of integrals for α_Γ. And by and large, none of them are going to converge. But we won't worry about that, because I won't make any assertion. I mean, there are ways of regularizing and renormalizing these integrals, but that's not our project. We just want to understand the integrals themselves, and in particular, we want to understand the integrand. So let me write down four different ways to understand this α_Γ, bearing in mind that I'm writing down things that don't make sense, that is to say, that don't converge. Let me change notation here, to stick with my notes: I call this capital A. One thing first. We fix an integer D, which will be the dimension of spacetime. So R^D is spacetime, and we give it the Minkowski metric, in other words x₁² minus the sum of the x_i² for i from 2 up to D. Also some general notation: if I have a graph Γ, I'll write g for the rank of the first homology group of Γ, the loop number, and E(Γ) for the set of edges. Then the first expression writes A_Γ as an integral over R^{Dg}; that's the domain of integration. And then we take the following thing: one over the product, over all the edges in E(Γ), of a certain propagator, which I'll label P_e. And the P_e's are quadrics. So the bottom line is, we get a rational integral. But you see, each of these factors has degree 2, so depending upon the various values of g and D, you can see the possibility for divergence at infinity, and all kinds of complicated things can happen. But at least as an integrand, it makes perfect sense.
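Schematically, this first representation can be written as follows (purely formal, since, as the speaker stresses, the integral typically diverges):

```latex
A_\Gamma \;=\; \int_{\mathbb{R}^{Dg}} \frac{d^{Dg}x}{\prod_{e\in E(\Gamma)}P_e(x)},
\qquad P_e \ \text{a quadric built from}\ Q(x)=x_1^2-\sum_{i=2}^{D}x_i^2 .
```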
Let me take a minute to say this a slightly different way. If we write down the homology of the graph, we have a little exact sequence, say with R^D coefficients. We take H₁(Γ; R^D) sitting inside the direct sum of copies of R^D over the edges. And from there, the boundary map goes to the direct sum over the vertices, where E(Γ) is the edges and V(Γ) the vertices, again with R^D coefficients. But we have to put a little constraint on the target, a little subscript 0, because topologically the edges are just little segments, the boundary is the two endpoints of the segment, and we know the coefficients of the resulting element sum to 0. So I put a 0 there. And here we have various projectors, which I'll denote e^∨: if I take an edge e, I can project this direct sum onto that particular factor. And composing with the Minkowski metric, I get, for each edge, a function on this vector space. And I can then restrict those functions to the fiber over a given point. So now comes an important additional structure: I give myself a point in this big vertex-indexed vector space, which I call p. This is the collection of what's called external momenta. And so I can basically rewrite this integral as an integral over the inverse image of a given external momentum. So the amplitude depends, in other words, on the choice of external momenta: it's an A_Γ(p), and we integrate d^{Dg}x over the fiber over p, against the product of the 1/P_e, where P_e is now this function. Yeah, let's assume that Γ is connected. OK, so that's the first expression for the amplitude. But there are some others. The second expression for A_Γ has a factor in front.
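The linear-algebra picture just described is the short exact sequence

```latex
0 \longrightarrow H_1(\Gamma;\mathbb{R}^D) \longrightarrow
\bigoplus_{e\in E(\Gamma)}\mathbb{R}^D
\xrightarrow{\ \partial\ }
\Bigl(\bigoplus_{v\in V(\Gamma)}\mathbb{R}^D\Bigr)_{0}
\longrightarrow 0,
```

where the subscript 0 means the coefficients sum to zero. An external momentum is a point p in the right-hand term, and the amplitude becomes an integral over the fiber ∂⁻¹(p).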
I think it's (n−1) factorial. I'll write n for the number of edges of Γ; that will be fixed notation. And I'll also write σ, which will play an important role, for a simplex. It will be contained in a projective space P^{n−1} of dimension n−1, which I think of as having homogeneous coordinates t_e labeled by the edges. There are n edges, and so the corresponding projective space has dimension n−1. And σ will simply be the locus where all the t_e are non-negative. Of course, at least one of them has to be non-zero, because it's a projective point. OK. Is this where the real Grassmannian comes in? The real Grassmannian, you thank your lucky stars, doesn't come in. But if it were to come in, it would come in here. Yes. Anyway, the second expression then becomes an integral over R^{Dg} × σ, a sort of product chain. And here we have d^{Dg}x, and then we have ω, where I write ω for the standard integration form on projective space. It's not really a form on projective space, because the homogeneity doesn't work; it's the standard alternating sum of terms ±t_e dt_{e₁} ∧ ⋯ ∧ dt_{e_n} with the dt_e left out. So I put this ω upstairs, and then, to make the homogeneity work, downstairs I take the universal quadric. Those P_e were quadrics, so I take the sum of the P_e multiplied by the homogeneous coordinates t_e, and I raise it to the appropriate power, which is just n. Now, the passage between these various integrals is done by sort of standard tricks. And these standard tricks, depending upon the graph, are probably completely illegal, because they involve exchanging orders of integration in divergent situations. So you have to be very careful about that. But again, since we just want to see the shape of the integrand, we're not going to worry. The third expression involves some extra data, which we'll need to work with.
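Putting the pieces together, the second representation reads (again purely formal):

```latex
A_\Gamma \;=\;(n-1)!\int_{\mathbb{R}^{Dg}\times\sigma}
\frac{d^{Dg}x\;\omega}{\bigl(\sum_{e}t_e P_e\bigr)^{\,n}},
\qquad
\omega=\sum_{i=1}^{n}(-1)^{i-1}t_{e_i}\,
dt_{e_1}\wedge\cdots\wedge\widehat{dt_{e_i}}\wedge\cdots\wedge dt_{e_n}.
```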
And so it has the following shape. A_Γ, again, will be a certain constant, which I've written down here but do not guarantee I've got right, over (n−1) factorial. And now we just have an integral over σ. And here come two polynomials, the so-called first and second Symanzik polynomials. The first Symanzik I call ψ. It gets raised to a power which, if I've got it right, is n − (g+1)D/2, times this integration form ω, divided by the so-called second Symanzik polynomial, which I call φ_Γ, raised to the power n − gD/2. And that's it. So here ψ is the first Symanzik and φ is the second. And I have to tell you what those things are, but let me postpone that for a minute. Notice that expressions 2 and 3 really live in algebraic geometry; an algebraic geometer is comfortable with these, because they're rational forms and we're integrating over certain chains. And so if the answer makes any sense, it should be a period, the kind of thing that one is used to dealing with. The fourth expression is something that a physicist is comfortable with. It's a sort of toy; let me write it down, and you'll see what I mean. There's again a constant in front which I don't guarantee; I seem to have written 1 over (4π²i) to some power. And now I have an integral, but now over σ-twiddle. So, I told you what σ was; σ-twiddle is sort of the affine version. It's just the product of copies of the reals ≥ 0 indexed by the edges; it's the cone over σ. So this is going to be an affine integral, and this is not algebro-geometric. Because here we come with the exponential of these same fellows: now there are no exponents; it is i times the second Symanzik divided by the first Symanzik as the term in the exponential. And then, as my form of integration, I just take the product of the dt_e over all the edges.
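For the record, the third and fourth representations as described (with constants c and c′ that the speaker explicitly does not guarantee) are:

```latex
A_\Gamma=\frac{c}{(n-1)!}\int_{\sigma}
\frac{\psi_\Gamma^{\,n-(g+1)D/2}}{\phi_\Gamma^{\,n-gD/2}}\,\omega,
\qquad
A_\Gamma=c'\int_{\widetilde\sigma}
e^{\,i\,\phi_\Gamma/\psi_\Gamma}\,
\frac{\prod_{e}dt_e}{\psi_\Gamma^{\,D/2}}.
```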
And I have to divide by the first Symanzik, ψ_Γ to the D/2. So the most interesting case is when D is 4, and then D/2 is 2. So this is sort of a toy path integral itself. You see, because what is this σ-twiddle? I have my graph Γ, and σ-twiddle is the space of metrics on Γ: it's just assigning a non-negative number to each edge, with the possibility of degenerating to 0. So this then becomes an integral over a space of metrics. Now, one of the typical versions of path integrals that occur in quantum field theory is integration where the domain of integration is a space of metrics, not on a graph, but rather on an interesting Riemannian manifold. So here there's a kind of toy version of such a thing. But OK. And so we want to do algebraic geometry. So of course, you'd expect us to forget that guy and work with one of the others, either one, two, or three. Actually, no. In fact, the algebraic geometry we want to do involves this guy. So let me go on. Now I have to tell you what these polynomials are. It's kind of easy to say, and I'm short on time, so let me say it quickly. I think of it in terms of configurations. So I have some finite-dimensional vector space H, which I think of as given inside a vector space with a given basis. It really doesn't matter; everything is algebraic, so I just take a field k. And when I do that, then for each edge, I can project off onto the corresponding edge coordinate. And I write e^∨ also for the composition here. So e^∨ then becomes just a linear functional on H. And so (e^∨)² becomes a rank-1 quadratic form. If I want to do it canonically, I can think of (e^∨)² as a map from H to H^∨. And I can look at the sum Σ t_e (e^∨)². And I can cheat a little bit.
I mean, there's a choice. If I look at the determinant of this expression, it's not quite well-defined, because I have to fix a basis. But changing the basis doesn't change the essential thing: I've put in these variables, and I really care about this thing as a polynomial in these variables. So I will call this thing ψ(H). This is the first Symanzik. And it's well-defined up to a scale. And of course, the particular situation we're interested in is where I take H = H₁ of the graph, with k coefficients, sitting inside k^E. And so then this yields what I call ψ_Γ, which is a homogeneous polynomial in the edge variables. Now, the second Symanzik is slightly trickier, but not much. I have H contained in k^E, some labeled vector space. And let me write W for the quotient, and let me fix a section here, called τ. So then, for all w in W, I can look at the vector space, not the vector space H, but H plus τ(w), the line spanned by that guy, added on. And I can look at the polynomial I just constructed. So let me write it this way: H_w = H plus the line spanned by τ(w), and let φ(t_e, w) be by definition ψ(H_w). So it's a polynomial then, which is not of degree g. Remember, in the graph situation, H is a vector space of dimension g. But I've added on one line, so it actually has degree g+1. So let me put it on the next board. So the second Symanzik, φ, depends on the t_e, but it also depends on, I should have called this something else, but in the case we're interested in, it depends on the external momenta. So the quotient W here, in our situation, is the space of external momenta.
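As a sanity check on the definition, here is a small computation, not from the talk: the first Symanzik polynomial of the "theta" graph (two vertices joined by three parallel edges, so g = 2 and n = 3), computed as the determinant of Σ_e t_e (e^∨)² restricted to a cycle basis of H₁. The graph and the variable names are my own choices for illustration.

```python
import sympy as sp

t1, t2, t3 = sp.symbols('t1 t2 t3')
t = [t1, t2, t3]

# Cycle basis of H_1 for the theta graph (2 vertices, 3 parallel edges),
# written in the edge coordinates of k^E: each row is a loop.
loops = sp.Matrix([[1, -1, 0],
                   [0,  1, -1]])

# M = sum_e t_e (e^vee)^2 restricted to H_1, i.e.
# M[i, j] = sum_e t_e * loops[i, e] * loops[j, e]
M = sp.Matrix(2, 2, lambda i, j:
              sum(t[e] * loops[i, e] * loops[j, e] for e in range(3)))

psi = sp.expand(M.det())
print(psi)  # t1*t2 + t1*t3 + t2*t3
```

The result t₁t₂ + t₁t₃ + t₂t₃ agrees with the classical combinatorial description of ψ_Γ as a sum over spanning trees of the product of the variables of the edges not in the tree.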
And so this is homogeneous of degree g+1 in the t_e and of degree 2 in the external momenta. Now, there's a point here; it's not that tricky. For us, we want things to be in R^D; we want to be in spacetime. What I've described here is linear algebra over k; the w here is not in spacetime. So what we have to do is couple ψ, or rather φ, to R^D with the Minkowski metric. I'm not going to go through that in detail, but let me just say a word about how it works. Suppose I have a symmetric matrix that looks like this: a g-by-g block M, then a column vector w, a row vector w-transpose, and a scalar s, just a one-by-one, in the corner; the whole matrix, call it B, has size g+1. Then there is a classical formula for the determinant: det B is s times det M minus w-transpose times the adjoint matrix of M times w, and depending on the parity of the day I do the computation, there either is or is not a minus sign in front of the second term. Or I can write this differently: det B divided by det M equals s minus w-transpose M-inverse w, because I know the adjoint matrix divided by the determinant is the inverse. Is it not quadratic in w? Sorry? Yeah, sorry, I left out a w; it's w-transpose M-inverse w, plus s, something like that. Now notice, you see, what I want to do is couple w to this spacetime. So I have to reinterpret this thing wherever I see w. And as Kristoff points out, this is going to be quadratic in the entries of w.
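The determinant identity can be checked symbolically. This little verification is my own, for a 2-by-2 symmetric block M, and confirms the signs as stated:

```python
import sympy as sp

# A symmetric 2 x 2 block M, a column vector w, and a scalar s.
m11, m12, m22 = sp.symbols('m11 m12 m22')
w1, w2, s = sp.symbols('w1 w2 s')
M = sp.Matrix([[m11, m12], [m12, m22]])
w = sp.Matrix([w1, w2])

# B = [[M, w], [w^T, s]], the bordered (g+1) x (g+1) matrix.
B = sp.Matrix(sp.BlockMatrix([[M, w], [w.T, sp.Matrix([[s]])]]))

# det B = s*det M - w^T adj(M) w, equivalently det B / det M = s - w^T M^{-1} w.
lhs = sp.expand(B.det())
rhs = sp.expand(s * M.det() - (w.T * M.adjugate() * w)[0, 0])
print(sp.simplify(lhs - rhs))  # 0
```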
So wherever I see a quadratic expression in w, I replace it by the Minkowski quadratic form on those two variables. So from this point of view, it's kind of easy to see how to couple w to R^D with the Minkowski metric. And in that way, using that technology, you get your second Symanzik φ(t_e, p), with p in R^D indexed by the vertices. And it's quadratic in p and of degree g+1 in t. So these are the two configuration polynomials, which are classical and play a central role in the whole game. OK, so now the situation is that I started out there with a sort of generating series indexed by graphs. And this generating series comes from this sort of metaphorical object, which is the path integral. But that whole process is extremely unconvincing to mathematicians. It literally makes no sense at all. And so in my abstract, I talked about a sea of physics. So that's the sea of physics. And the question is how to get across that sea without indulging in fantasies that are difficult for a mathematician to understand. And I don't know the answer, but there is a surprising game that can be played. And so I want to explain that. So this is the basic setup. Now I want to move to the geometry. And the idea is going to be that we start with our graph Γ, but we interpret Γ as being the dual graph of a stable rational curve. I've got to be a little careful. I need to assume, maybe, that each vertex has at least three edges. So there's a small constraint on Γ to get stability. But that's, in fact, not a big deal; you can move beyond that. So let me remind you how to play this familiar game. For each vertex in V(Γ), we associate a Riemann sphere. So we take a P¹. And if we have an edge whose boundary is, let's say, v and w, then we glue P¹_v and P¹_w at a point.
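Continuing the theta-graph example from before (again my own illustration, not from the talk): adjoining to H₁ the line spanned by a section τ(w) of the one-dimensional quotient W and taking the same determinant gives the second Symanzik; coupling then replaces w² by the Minkowski square p² of the external momentum.

```python
import sympy as sp

t1, t2, t3, w = sp.symbols('t1 t2 t3 w')
t = [t1, t2, t3]

# Theta graph: 2 vertices, 3 parallel edges.  Cycle basis of H_1 in k^E,
# plus the section tau(w) = w*(1, 0, 0) lifting a generator of the
# 1-dimensional quotient W = k^E / H_1 (all edges run v1 -> v2).
rows = sp.Matrix([[1, -1, 0],
                  [0,  1, -1],
                  [w,  0,  0]])

# psi(H_w) = det of sum_e t_e (e^vee)^2 restricted to H_w
M = sp.Matrix(3, 3, lambda i, j:
              sum(t[e] * rows[i, e] * rows[j, e] for e in range(3)))
phi = sp.factor(M.det())
print(phi)  # t1*t2*t3*w**2
```

After the coupling, this is the familiar second Symanzik t₁t₂t₃·p² of the two-loop sunset graph, homogeneous of degree g+1 = 3 in the t's and quadratic in the momentum.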
Now we have to be careful. I'm not claiming that there's anything unique here; there are moduli. If a given vertex has four or more edges attached to it, then we will have four or more attachment points on the corresponding P¹, and so there will be moduli. So I don't claim that this is unique, but just do it somehow. And so this gluing process yields a stable rational curve, which I call C₀. So this yields C₀. Is everyone familiar with that game? So it's stable if each vertex has at least three edges? Yeah, yeah. It doesn't have to, I mean, this is not a real difficult issue; we can deal with semi-stable situations as well. So we then want to look at the versal deformation of this C₀. And as I say, C₀ itself can have moduli. So the picture that we get is something like this. Let's see, can everyone see? I know there's a shadow effect. So the picture looks something like this. And I want to think in the analytic category. I'm drawing it because I want to do topology; an algebraic geometer would tend to do formal things, but I want to work analytically. So we have a family, then, of curves over a space S. And S contains a closed subvariety, which I'll call T. And then I can pull back to get C_T. So we have a thing like this. So S is, I don't know, essentially an open set in C^{3g−3}. And eventually, I'm going to add more parameters. So I'm drawing an S plus some more parameters, but for the moment, you can just think of the 3g−3. The extra parameters we're going to need to deal with marked curves. So this is going to be a family. And g is the genus of the graph; g is always the genus of the graph, which is the same as the genus of the curve. I should have said that: in fact, the dimension of H¹(C₀, O_{C₀}) is g. So this is the so-called versal deformation.
And the T, there will be divisors: T will be an intersection of divisors D_e. Sorry? What is capital N? I'm going to tell you in a while. That capital N is going to account for the fact that we're going to have to look at marked curves; these will be extra parameters for markings. But for the moment, we're not doing that. We're just understanding the geometry here. So the point is that T is itself an intersection of divisors. These are divisors corresponding to the crossing points. So we can think of S as the versal deformation, and remember, there's one crossing point for each edge. If we fix a crossing point, and we look at deformations of the curve where that crossing point stays a crossing, so to speak, while other crossing points can open up, that defines a divisor. So D_e parametrizes deformations with that crossing point, the one corresponding to e, fixed. Other crossing points can open up, but this one stays. Then T is the intersection over all these divisors. Each is a divisor because I'm just putting one constraint on my deformation, OK? And all of this is unobstructed, by classical theory, because we're dealing with curves. OK, so now let me draw a picture. So I'll draw a genus-two picture, and my artistry is not very good, I apologize. Then I draw the same thing again here. So imagine two curiously shaped objects kissing each other in three places, OK? So this is C₀. And what is the graph of C₀? Well, the graph should have two vertices and three edges, right? So that was easy to draw: Γ is just this, two vertices and three edges. But when I look at the versal deformation, it becomes the usual genus-two picture here.
And so what have I done? I've squeezed some vanishing cycles, like that one. So these are vanishing cycles. And so what I've drawn here is C₀. And let's call this one C_{s₀}, for some base point s₀. So I'll take a base point in here which is away from the divisors. So s₀ is not in any of the divisors, and it corresponds to a curve with no singularities, OK? And then I degenerate. So that's a classical, well-understood picture. And some remarks. First of all, we have a specialization map, which we call sp, from the homology of the general fellow, H₁ of C_{s₀}, the smooth curve, say with rational coefficients, to the homology of the singular one. And it's surjective. First of all, the existence of such a map is classical. It amounts to the statement that in the fibers here, if I take my singular fellow, I can always find a tubular neighborhood which deformation retracts onto it. And then I can imagine my smooth guy living in that neighborhood. So I can map the homology of the smooth guy to the homology of the neighborhood, and then, via the deformation retract, it maps to the homology of the special fiber. So that's a classical game. And the fact that it's surjective, you can just see: you're looking at closed paths down here, and they lift to paths up here which end on the vanishing cycles; but then I can just go around the vanishing cycle a little bit. It just amounts to saying the vanishing cycles are connected. So it's easy to see that it's a surjection. Now, that's the first fact, that it's a surjection. The second fact: I write A for the kernel of the specialization map. That's the space spanned by the vanishing cycles. Notice the vanishing cycles are not linearly independent, because when we have two irreducible components, we get a relation between the vanishing cycles. So the vanishing cycles are not linearly independent; here, there are three of them. But A is the space spanned by those.
And the assertion is that A is a maximal isotropic subspace. Now there, you have to think a little bit. You have to convince yourself, first of all, that it's isotropic. Because if I get close enough to the singular fiber, then the vanishing cycles are necessarily disjoint from each other, because they live in little neighborhoods of the singular points, and the points are separate. So the vanishing cycles are clearly disjoint from each other, which means that the pairing between any two of them will always be 0. And the fact that it's maximal isotropic simply follows because the specialization is surjective: H₁ of the singular curve has dimension g, H₁ of the smooth curve has dimension 2g, so A, the kernel, has dimension g. But you see, on that first H₁, which form is this? It's just the intersection form, the alternating form on H₁ of the curve. It is physically just intersection, yes, with orientation. So, I still have enough time, I think. Yeah. So I now want to talk about the Picard-Lefschetz transformation. So fact 3 is Picard-Lefschetz. And Picard-Lefschetz goes like this. I write a_e for the vanishing cycle associated to an edge e in E(Γ); remember that the edges index the bad points here, the singular points. And for each of those, I have a circle that is contracting to it. So that gives me the vanishing cycle, which I call a_e. Then Picard-Lefschetz says that if I look at the effect on the homology of winding around D_e, the divisor associated to that particular e, then a general one-cycle b goes to b plus the intersection number of b with a_e, times a_e. There's an issue of orientation, but let me say plus. You've got to get the sign right. Is it minus? Luc says minus. There is also a convention for π₁ of the circle; I mean, we have to figure out which way we're winding here. It's not an important point.
So I'll just say plus or minus. OK, so this is a classical, familiar fact. So if I call this transformation L_e, with L_e(b) = b ± ⟨b, a_e⟩a_e, then we set N_e = L_e minus the identity, which is also the log of L_e. And it just sends b to the intersection number ⟨b, a_e⟩ times a_e. So this is all, again, familiar stuff. And now the nilpotent orbit. Can you use the blackboard on the right? One more blackboard? Yeah, but somehow I'm into the logic of putting it on a big blackboard here. So let's see if I can do it without covering anything up; I'll bring this one down. Then the nilpotent orbit is just the collection of all endomorphisms of the form N = Σ_e t_e N_e, a sum over the edges with non-negative real constants t_e. So maybe call this N; it's just the collection of all these things. And it's easy to see that, in fact, such an N is nilpotent. Perhaps not quite obvious, but you have to think about it only a little. Each of the N_e is, I mean, not just nilpotent; it's, in fact, square zero. And you see that physically, again, it's this issue that because the vanishing cycles are all physically disjoint, the product of different N_e's is zero. So N itself is actually square zero, yeah. OK, that's the kind of thing that's difficult to think about when you're on your feet. These are operators on which space? Yeah, this is exactly the point. I'm glad you said that, because this is exactly the point I was going to make. These are nilpotent as endomorphisms of H₁ of the smooth curve C_{s₀}. But there's another way you can think about these things, which is as follows. You can write H₁ of the graph; that is the same as H₁ of the singular curve C₀.
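Here is a small numerical illustration of these remarks, my own and not from the talk: on H₁ of a genus-2 surface with its standard symplectic basis, take two disjoint vanishing cycles and form N_e(b) = ⟨b, a_e⟩a_e. Each N_e is square zero, products of distinct N_e's vanish, and hence every element of the nilpotent orbit is square zero.

```python
import numpy as np

# Genus-2 intersection form on H_1, basis (alpha1, beta1, alpha2, beta2).
J = np.array([[0, 1, 0, 0],
              [-1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, -1, 0]])

def pl_log(a):
    """Matrix of N: b -> <b, a> a, with <b, a> = b^T J a, i.e. N = a a^T J^T."""
    a = a.reshape(-1, 1)
    return a @ (J @ a).T

a1 = np.array([1, 0, 0, 0])   # two disjoint vanishing cycles: <a1, a2> = 0
a2 = np.array([0, 0, 1, 0])
N1, N2 = pl_log(a1), pl_log(a2)

assert not np.any(N1 @ N1)                        # each N_e is square zero
assert not np.any(N1 @ N2) and not np.any(N2 @ N1)  # distinct N_e's multiply to zero
assert not np.any((N1 + 2 * N2) @ (N1 + 2 * N2))  # so the whole orbit is square zero
print("ok")
```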
And that's isomorphic to H₁ of the smooth curve modulo this A, which, remember, was the maximal isotropic subspace spanned by the vanishing cycles. And any one of these N_e kills all of A, so it gives a map from this quotient. And of course, the image lies in A, which is isomorphic to the dual of H₁(C_{s₀}) modulo A; so this is then isomorphic to H₁(Γ)^∨. I don't claim these identifications all spring to mind, but there they are. So for each edge, then, we get a map from this g-dimensional vector space to its dual. That is, we get a quadratic form. And the nice fact, which is not hard, but it's a little exercise, so I call it a proposition, is that this N_e is equal to, and I think I screwed up by not giving it a name, M_e. Remember, I had H₁(Γ) inside, let's say, R^{E(Γ)}, and for each edge I could project to R, and this gave me a functional that I called e^∨; and M_e was simply the map associated to the quadratic form (e^∨)². And the proposition is that this symmetric matrix, or quadratic form, is the same as the one that comes from the geometry. It's not hard, but it takes a little effort. OK, so with that in mind, we want to relate the first and second Symanzik polynomials, which came out of just an abstract discussion of the graph, the linear algebra associated to the graph, to limits of heights on these families of curves. Now, to talk about heights, we need to, oh my god, we're out of time. So I'm going to go five minutes over; OK, the chairman can start his watch. So I need to talk about heights. So let me at least say enough to make the statement. So I consider now, let me draw the picture. So here is my space S. Here is the bad fiber, and here is C₀, the singular curve. And so then here is a smooth curve; this is the point s₀. Say again? Around s₀, it's disconnected? No, no, no. It's connected. So now I want to add parameters.
And when I add parameters, that will enable me to enrich the picture by adding some sections. So I will add some sections here. Call them μ_i. And then I control very carefully how the sections meet the fiber at infinity. So what data do I have here? Suppose that this section meets this component. Well, remember, the components are indexed by the vertices. So say this is the v component, for v a vertex of Γ. Now, there are two things I want to deduce from this collection of sections, which I think of as two families of zero cycles. I want to deduce external momenta at infinity. Now, external momenta are linear combinations of the vertices with coefficients in R^D. So what I have to do in the first instance is couple these sections to the vector space R^D. Now, what does that mean? By itself, it doesn't make any sense. But what I'm interested in is the height. So I write down a sum Σ r_i μ_i, where the μ_i are the sections, and I'll call it, let's say, a. And I'll take another one, let's say Σ r′_j μ_j, and I'll call it a′. And I look at the height pairing of a and a′. And because I want these things to yield external momenta, I have a constraint that the sum of the r_i should be 0, and similarly, the sum of the r′_j equals 0. Which is perfect, because to talk about a height, I need to talk about zero cycles of degree 0. So I don't have time to explain this idea of coupling to a vector space, but it's not hard. Once you know how to define heights, it's easy to see how to couple them to a vector space. And then the theorem. What sort of height are you talking about?
Sorry, is it some sort of arithmetic thing? Yes, I mean, it is the... no, no, no. This is the height: if I have two zero cycles of degree 0 which are disjoint, it is always defined. Essentially, you can compute it by taking differential forms with log poles along a and integrating over chains here. So this is the classical height. I'm sorry, I managed my time badly. But the theorem is going to be that, if we look back (and if I had preserved it, which I didn't), the exponential of i times phi over psi is the limit as a certain parameter, alpha naught, which I didn't really have time to explain, goes to 0. Essentially this amounts to saying: look at the structure of the base S. The base S we can think of as a product of copies of G_m, so like punctured disks, and if we imagine the parameters as all going to 0, that is what this alpha naught is recording. Then this exponential here is the limit of the exponential of i times the height of a and a', where a and a' are both zero cycles corresponding to the given external momentum. So this is a function of the external momenta; a and a' are zero cycles corresponding to a given p in the sense that I explained: they cross the right vertices with the right values of the spacetime momenta at those points. So the bottom line is that the term that occurs in the amplitude is a limit of heights. Now... say again? Alpha zero? Yeah. So alpha zero is measuring, I don't really have time to go into detail, but alpha zero is measuring how we are approaching T. Remember we have S and we have T, and I'm thinking of S minus T as being something like a product of punctured disks. Alpha zero is measuring how fast we are approaching the punctures. But the divisor is a divisor? Yeah, it's a divisor with normal crossings, and we are approaching all the parameters simultaneously at a given speed. I'm sorry, this is on the web and on the arXiv, so you can get the details. But I think I'd better stop.
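In symbols (my notation, reconstructing what was on the board): with $\psi_\Gamma$ and $\phi_\Gamma$ the first and second Symanzik polynomials, the statement is

```latex
\[
  \exp\!\left( i\,\frac{\phi_\Gamma(p,\alpha)}{\psi_\Gamma(\alpha)} \right)
  \;=\; \lim_{\alpha_0 \to 0}\, \exp\bigl( i\, h(a,a') \bigr),
\]
```

where $h$ is the height pairing and $a$, $a'$ are the degree-zero cycles built from the sections carrying the external momenta $p$, the limit being taken as all coordinates on the base approach the punctures at the common speed measured by $\alpha_0$.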
So when you have these several sections, they cut each component in certain points. How are those related to what you have? Yeah, exactly. Let me just say a word about that. OK, I'll put it over here. Remember, everything is a function of the external momenta, which live in the degree-0 part of R^d coupled to the vertices, and V indexes the P^1_v's, the components. So for each v in V, I give myself a section; call it mu_v. And this section meets P_v, but that's it: it just meets P_v. So it's this guy. But you have drawn several at once? Yes, and that's a good point; again, I'm sorry, I ran out of time. There's a technical point about the height: we need to assume that the zero cycles in question are disjoint. So what I would like to write is the height of a with itself, but that doesn't make any sense. So what I do is take an a' which also meets there, with the same data. So that's the idea. OK? Yeah. A very naive question: what is the relation between the height and the Green's function? Because normally the way I would think about it is that I have points on my Riemann surface, between two points we have the Green's function, and I would take the degeneration limit the way you describe it. Roughly speaking, they are the same. They are the same: the height is the Green's function. In fact, I like to think of it in terms of heights because the philosophy here, what the mathematician wants to inject, is these Hodge structures. And heights are typical examples of real-valued functions associated to variations of Hodge structure. So this should be a general game: when we have a degenerating variation of Hodge structure, we should be able to get interesting amplitudes as integrals over the nilpotent orbit associated to this variation. So that's the idea. But you're right.
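In formulas (my notation; this is the standard relation rather than something written out in the talk): with sections $\mu_i$ and the degree-zero constraints above,

```latex
\[
  a=\sum_i r_i\,\mu_i(s),\qquad a'=\sum_j r'_j\,\mu_j(s),\qquad
  \sum_i r_i=\sum_j r'_j=0,
\]
\[
  h(a,a')\;=\;\sum_{i,j} r_i\, r'_j\;\, g_s\bigl(\mu_i(s),\mu_j(s)\bigr),
\]
```

where $g_s$ is the Green's function on the fiber $C_s$; the degree-zero constraints make the sum independent of the additive normalization of $g_s$.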
I mean, that's why I say that you guys knew this. But you maybe didn't think of it from the point of view of variations of Hodge structure. And you always need one parameter in the physics? Yes. You don't want... yeah, actually Professor Kato can be more precise about this than I can, but you don't want to let the various edges go at different speeds, because if you choose the different speeds badly, the limit can be not what you want. But that's interesting, because that's precisely what string theory is telling you: there is always an alpha prime. Yeah, sorry, in fact I should have called it alpha prime. It is measuring the approach. We're seeing that string theory has only one dimensionful parameter, which is what is needed, in a sense. In a sense, I mean, the fact that you need one parameter is very striking. I'm still working with the box, I mean; I'm not sure. So in this case, instead of this big-dimensional S, you can use the one-dimensional situation. Yeah. So you have this map from gamma to... yeah, very specific. Well, I mean, it's rank one. In the one-dimensional case, yes, you get the curve with semi-stable reduction, and from this graph you get just what Grothendieck called the monodromy pairing. Here you work with Q coefficients; one can look more closely at the integral structure. Then you have an isogeny, and the cokernel of this map is exactly the group of connected components of the special fiber of the Néron model. So I wonder if this entered into your picture. Not yet. Not yet, but... And also, I should say, coming to heights: Grothendieck conjectured, more generally, for an abelian variety A and its dual A', that we have the pairing between those two component groups, for an abelian variety with semi-stable reduction.
So this is the pairing, and Grothendieck conjectured the pairing is perfect. In fact, this was proven recently by, I think, a student of Kato. But previously there was work of Bosch, and Bosch proved it in some cases, precisely using the height pairing. So I wonder how this fits. Well, let me just say a little bit. Those papers have appeared, I think. But yeah, the question is whether this kind of story has any significance or relevance. We have to ask the physicists. From my point of view, I notice just one remark that is suggested by what you said: this is absolutely the most degenerate situation; we are going to the worst place. Why should physics be preoccupied with the worst place? Is there any physics to be found by just degenerating a little bit? So there's lots of possibility for interplay between the math and the physics. But just to come back to the one-dimensional case, if I can. Well, here we need the higher-dimensional situation, because ultimately our integral is going to be over the full nilpotent orbit, not just over one guy. But why you need one dimensionful parameter, that is quite fascinating. Two more questions, two more questions. Here. So, was this all massless? Yes, yes. I'm sorry, I should have said this is all massless. But it's a nice question: what happens if we put masses in? I haven't thought about it. And second, have you done, or can you make, a version of this for open strings, where you have oriented fat graphs and you replace vertices by disks and edges by strips? Edges by strips... well, that's open strings. I mean, mirror symmetry. Yeah, yeah. I could repeat it, but I'm not sure it would be correct. Not by me, in any case. Yeah, and not by my group. Well, you said it's a good thing for the real Grassmannian, didn't you, because it gets rid of all these integrals entirely? What was the recent report by Connes saying? Well, Connes said a lot of things.
Let me, I mean, let's just say that the chain of integration is replaced by irreducible components of the real Grassmannian. End of story. For me, I mean, I can't go further than that. Is it just because of that?