Thank you very much for the invitation. It's a pleasure to be here. So I want to follow up on what Marcus was doing this morning. Basically, in the first half of my talk I'll give a little bit more detail on some aspects of resurgence and how it connects to asymptotics. And then in the second half I want to use these asymptotics in examples in string theory. And this is work that's been going on for a long time with a lot of people. Anyway, as Marcus was telling us this morning, perturbative series are often asymptotic. And if we want to compute some quantity F, which in these examples was associated to energies in quantum mechanics or solutions of ordinary differential equations, we can try a perturbative expansion around z equals infinity. So the first question we'd like to answer is this: we know that asymptoticness means that these coefficients at leading order grow factorially fast, so can we precisely characterize this asymptotic growth? Meaning, can we say anything about the subleading contributions, the ones that grow slower than factorially? That's one aspect of understanding resurgence and asymptotics. The other thing, of course, is that by definition an asymptotic series has zero radius of convergence. So I basically can't compute anything; there are associated singularities in the complex plane, as has been discussed. But at the end of the day, if I'm computing some physical observable, I would like to have a number and not an infinity. So how can we make sense out of perturbation theory? That's the second question we'd like to answer. How can we do this? OK, we've already seen in most of the talks today that the object that starts this business is the Borel transform. Basically, what the Borel transform does is remove the factorial. These f_g have leading factorial growth, and, as we'll make more explicit in the following, they have subleading exponential growth.
So the Borel transform, by stripping off the factorial, leaves us with a series which has exponential growth. And this can be analytically continued throughout the complex plane. And then the number should come out of Borel resummation, which is basically the inverse Borel transform; it's basically a Laplace transform. And that integration over a ray, which is written there, is very nice as long as we don't cross singularities of the Borel transform. And there will always be singularities of the Borel transform, because the starting series was asymptotic. Basically, asymptotic implies singularities in the Borel plane. So how can we go about this? The first question we can ask is: what class of singularities can we find in Borel transforms? There's a very broad definition of what we can find, which we will not use, and then there's a simple one. The very broad one is the definition of a resurgent function, which basically says that a formal asymptotic series is resurgent if its Borel transform has endless analytic continuation. This is too broad if we want to make formulae explicit. A slightly simpler definition, which in fact has the name "simple", is to consider the class of formal power series whose Borel singularities are restricted to simple poles and logarithmic branch points. That's what we shall consider. In fact, in most of the formulae I will show, I will not even put simple poles; I'll just put logarithmic branch points. So here's an example. Omega is some simple singularity, S is a complex number, and phi_omega is some other sector. What do I mean by some other sector? It means that around omega, this expression I'm writing here in blue is what happens to the Borel transform. We have a logarithmic branch point, which is the log that's there. And in front of it, I may have some holomorphic function.
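[Editor's note: as a toy illustration of this Borel resummation pipeline, not from the talk itself, one can take the Euler-type series sum over n of (-1)^n n! x^(n+1). Its Borel transform is the convergent geometric series, which resums to 1/(1+t), and the Laplace integral along the positive real axis can be done numerically; the known closed form e^(1/x) E1(1/x) provides a check.]

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import exp1

def borel_sum_euler(x):
    """Borel-resum the divergent series sum_{n>=0} (-1)^n n! x^{n+1}.

    The Borel transform of this series is the convergent geometric
    series sum (-1)^n t^n = 1/(1+t); the Laplace integral below is
    the Borel resummation along the positive real axis (no Stokes
    line there, since the only singularity sits at t = -1).
    """
    integrand = lambda t: np.exp(-t) * x / (1.0 + x * t)
    value, _ = quad(integrand, 0.0, np.inf)
    return value

x = 0.2
numeric = borel_sum_euler(x)
exact = np.exp(1.0 / x) * exp1(1.0 / x)  # known closed form of the Borel sum
```

The truncated series at x = 0.2 oscillates around 0.17 before diverging; the Borel sum picks out that unique value.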
So I'll understand this holomorphic function as the Borel transform of some other sector, and I'll denote that other sector by phi_omega. If we do this, we can now be very precise about what happens when you cross a Stokes line. The Stokes line, of course, is the line where, as I rotate that theta in the Borel resummation, I cross a singularity. Because now I know how to classify singularities for this class of functions. So how can we do that? Here's the picture. I have a line with direction theta, along which I have many singularities. And I'm avoiding them by doing the Borel resummation slightly to the left, which is theta plus, or slightly to the right, which is theta minus. What's the difference? Well, the difference is now very easy to compute, if I assume that near each singularity what I have is just this structure. Then all I have to do is plug in and do the calculation. It's not too hard to see that this implies a discontinuity of F, the energy in quantum mechanics or the solution to the ordinary differential equation, whatever, from the examples from this morning, that I want to compute. And in this discontinuity, look at the term here, the exponential in z. Remember, we're doing an expansion around z equals infinity, so that's the usual non-analytic contribution around z equals infinity. Then I have these complex numbers, which, if you want, you can call the Borel residues; if I were to put in the simple pole, they would appear with the simple pole. And then I have these other sectors. So you see that these sectors must be included in the full solution. The perturbative series is not enough. F was the perturbative series I was starting off with. That doesn't have all the information, because as I start going around in the Borel plane, these guys will show up. And that's essentially what leads to trans-series and to resurgence. That's the basic idea.
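[Editor's note: this lateral-resummation discontinuity can also be seen numerically in a toy model of my own, not from the talk. For the series with positive coefficients sum n! x^(n+1), the Borel transform 1/(1-t) has a pole on the positive axis, so theta = 0 is a Stokes line. Tilting the Laplace contour slightly above or below the axis gives two different answers, and their difference is exactly the exponentially small non-analytic term 2*pi*i*e^(-1/x).]

```python
import mpmath as mp

x = mp.mpf('0.3')

def lateral_sum(theta):
    """Laplace integral of the Borel transform 1/(1-t) along the ray
    t = s*exp(i*theta), which avoids the pole at t = 1 when theta != 0."""
    phase = mp.exp(1j * theta)
    f = lambda s: mp.exp(-phase * s / x) / (1 - phase * s) * phase
    # split the contour near s = 1, where the integrand is sharply peaked
    return mp.quad(f, [0, 1, 2, mp.inf])

s_plus = lateral_sum(+0.1)   # contour tilted just above the Stokes line
s_minus = lateral_sum(-0.1)  # contour tilted just below
jump = s_plus - s_minus      # discontinuity across theta = 0
expected = 2j * mp.pi * mp.exp(-1 / x)  # residue: the 'one-instanton' exponential
```

The jump is purely imaginary and exponentially small in 1/x, exactly the kind of term the perturbative series alone cannot see.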
And of course, in this context, that's what I want to discuss now: what kind of objects are important in resurgence, and how they allow us to study the asymptotics, to know what's behind that factorial growth. That will be the first half. And then in the second half we'll discuss how to apply these things in string theory, and we'll see how far we can go. Some basics. The idea behind defining a general trans-series is very basic. Analytic functions are described by power series. If I want to describe general non-analytic functions, I just augment power series with non-analytic terms. The idea is very, very simple, but it can lead to extremely complicated objects. It's not too hard to see: now I'm doing things around x equals 0, but if I start iterating the standard non-analytic term, I can get arbitrarily weird things, which are extremely suppressed and hard to see. Or I can put in logs, or I can combine everything, and so on. Interestingly enough, although all of this is allowed, in the examples that we've studied so far in string theory we just find the standard exponential of minus 1 over x and a few logs. It doesn't mean that the other objects are not there; at least we have not seen them yet. So what's the idea? We put everybody together. This is what I would call a one-parameter trans-series. So f_0 there would be the perturbative series I started off with. And then I can have instanton sectors with instanton action A, and some asymptotic series around each sector. I'll denote the asymptotic series just by capital Phi. And this is a double perturbative expansion, since it's perturbative not only in the original coupling but also in this non-analytic contribution. Sigma here is a trans-series parameter, which is an instanton counting parameter, but, perhaps more interestingly for what we'll discuss later on, it's sort of parameterizing a choice of boundary conditions.
So if this is an ansatz for a solution to a nonlinear first-order differential equation, you can think of the trans-series as the general solution, and particular solutions, where I specify boundary conditions, are particular choices of sigma. Now, this trans-series yields very general solutions to nonlinear systems, because I just plug things into, say, this nonlinear differential equation and I can compute all these coefficients. I basically get a hierarchy of recursive equations that tell me what all these guys are. And resurgence is the statement that, at different instanton number and at different loop number, loop level, whatever you want to call it, things relate to each other. That's why there's this slogan that you can obtain non-perturbative data out of perturbative data: it's this relation between all these coefficients. All right, the only technical thing I still want to discuss is this concept of the alien derivative, and why the discontinuity is not good, or at least not good enough. Remember, when I cross the Stokes line, when I have simple singularities, it's easy to uncover an operator which is the discontinuity across the Stokes line. Well, we can do better, because there's a more interesting object than the discontinuity, which has the property of being a derivation, the so-called alien derivation, in the terminology of Écalle. So how do we improve the discontinuity to get a derivation? Basically, we just rewrite the discontinuity, which was the difference between the left and right resummations, as an automorphism, in this way. So it's an automorphism relating the left and right resummations. I'll call it the Stokes automorphism, and denote it with this funny notation, which is standard in the literature. So how can I get a derivation out of it? Just by taking the log, right? Automorphisms are exponentials of derivations, so I can define the so-called pointed alien derivative.
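[Editor's note: to make the "hierarchy of recursive equations" concrete, here is a minimal toy of mine, not from the talk, using Euler's equation f'(z) + f(z) = 1/z. Plugging the formal ansatz f(z) = sum a_n z^(-n-1) around z = infinity and matching powers of 1/z gives the recursion a_n = n*a_{n-1}, so the coefficients grow factorially; the full trans-series is f(z) + sigma*e^(-z), with sigma playing exactly the role of the boundary-condition parameter described above.]

```python
import math

# Coefficient recursion from plugging f(z) = sum_n a_n z^{-n-1}
# into Euler's equation f'(z) + f(z) = 1/z and matching powers of 1/z:
# the 1/z source fixes a_0 = 1, and each higher power gives a_n = n*a_{n-1}.
a = [1.0]
for n in range(1, 12):
    a.append(n * a[-1])

# The coefficients are exactly n!: the hallmark factorial divergence.
print(a[:5])
```

The homogeneous solution e^(-z) is invisible to this power-series recursion, which is precisely why it must be added by hand as the one-instanton sector of the trans-series.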
It again has a funny notation and a funny name, in this way. Notice that it's improving the discontinuity with iterations of the discontinuity over there. And in fact there's another, direct definition of this operator as a specific weighted average: if I want to know the alien derivative at some point, I weight all possible contours that take me up to there, as we're doing the analytic continuation of the Borel transform. But that's too technical, and it doesn't really matter for what I want to tell you. Automorphisms are exponentials of derivations. — I'm sorry? — Automorphisms are exponentials of derivations. I will be more specific when I write down the trans-series. For the moment, just think about something that takes you from the left resummation to the right resummation. So it basically takes the resummation of a formal power series on one side to the resummation of the formal power series on the other side. But it will act on the trans-series parameter; I'll show you this in two slides, it will be clearer. So along the Stokes line, we can decompose this object over all the possible singularities and define the alien derivative at each singularity, the standard alien derivative, just without the non-analytic exponential term. This will also appear in the next slide. All right, so what can we do with these things? A key property of this object is that it commutes with the standard derivative. So if our trans-series is an ansatz for the solution of some first-order nonlinear ODE, then these two objects, this one being the derivative with respect to the trans-series parameter and this one the alien derivative, which has that commutation property, of course satisfy the same linear ordinary differential equation, which means they must be proportional to each other.
And this is known as the bridge equation, specifying a bridge between alien derivatives and standard derivatives. So now I know how to compute these guys if I know how to take standard derivatives. It doesn't seem too hard, and in fact that's what happens. So along the Stokes line, where I have singularities at k times A, regularity and homogeneity say that the proportionality factor is actually an object of this form: it has to have a certain power of sigma times some numbers. When I write this down for the components of the trans-series — let me show them again; the components of the trans-series are the power series which are there. — Can you recall where sigma comes from? — Yes. Sigma, I just introduced it there as an object which is counting, if you want, the powers of the non-analytic term. And as I tried to motivate, it's parameterizing a choice of boundary conditions for whatever ordinary differential equation you may have. The alien derivative acting on each of the components is just a number times the (n+k)th sector. So you see, when I'm acting on the nth sector, at the kth singularity, I get the (n+k)th sector. And the only things I don't know here are these numbers S. They're called analytic invariants; that's their name. Let me rewrite this formula, and this is sort of the punchline of resurgence. That's the formula again. Basically, what this is saying is that I've now computed all alien derivatives; equivalently, I've characterized all Borel singularities. This is the structure I have. That object is a derivation in the sense that it satisfies Leibniz's rule. It's a derivation acting on formal power series when they're multiplied with the standard product, or, if you want, a derivation acting on the Borel transforms when they're convoluted. So it satisfies Leibniz's rule, either with the standard product or with the convolution product. And the statement is this. Let me read it for you.
At the kth singularity of the Borel transform of the n-instanton sector phi_n, one finds the Borel transform of the (n+k)-instanton sector phi_{n+k}. So I've computed everything. The information is encoded in terms of the analytic invariants, or the Borel residues; they're not equal, but they're related to each other, and there's some formula that takes you from one to the other. And here's the picture. This is basically all you need to keep in mind from everything I've just told you. You can think of the trans-series as a sort of chain: here's the perturbative sector, the one-instanton sector, and so on. And these alien derivatives, these objects here, induce specific motions on the chain. There's basically only one way to move forward, but there are many ways to move backwards. And it's the combinatorics of the difference — how can I move forward, and in how many different ways can I move backwards — that will allow me to write down general formulae for the asymptotics. In a punchline, this is all you need to retain from the previous, slightly technical slides. There's a bonus as well, and now I hope it will answer your question about what the automorphism was: this formalism gives me a full description of Stokes phenomenon in one expression. The Stokes automorphism, remember, by definition was the exponential of the alien derivative. But now the bridge equation tells me that I can rewrite the alien derivative as a regular derivative. And here's what happens when theta is 0. When I move forward, I only have one possibility; this is the only allowed forward motion in the previous picture. And this creates an automorphism which is a translation along the sigma direction. And this is a clear illustration of Stokes phenomenon, right? If I had sigma equals 0, all I had was my power series, and I could not see exponentially suppressed stuff.
But upon Stokes phenomenon, I must grab them and take them along for the ride, because things which are small in some regions of the complex plane can be big elsewhere, and they may change the physics of the problem we're studying. But that's not the only Stokes phenomenon there is. There's also one along pi. Along pi, in fact, I have a lot of singularities, and when I play this game the automorphism is more complicated: it's generated by the one-parameter flow of this vector field here. But you can write it down. Anyway, Stokes phenomenon comes for free in this business. The other question that I had was: how can I resum things? If these objects are asymptotic, how can I get numbers out? Here's resummation in a nutshell. So in string theory, what I will want to compute is the free energy or the partition function. It's known that the free energy has a genus expansion that looks like that: g_s is the string coupling, t is some modulus associated with whatever geometry I'm looking at. And the sum is essentially over the different genera of the Riemann surfaces that define the perturbative expansion of string theory. These coefficients have large-order behavior, known since the early 90s, of (2g) factorial. So the topological genus expansion is asymptotic, and we will need to complete it somehow by including instanton sectors in a trans-series. The building blocks of the trans-series are new asymptotic series associated to instanton sectors, which have the usual structure; they look like that. If we want to get numbers out, what's the game to play? It goes in a couple of steps. The first step is Borel resummation, where I must make sure that theta is not hitting a Stokes line. In practice, what you actually do is Borel–Padé. What do I mean by this? We don't know all these coefficients. We can generate them iteratively, if you want, but we don't have closed-form expressions for them.
So if we don't know all of them, we cannot exactly pinpoint what the Borel transform is. But we can approximate it. Of course, we can approximate it in different ways. The most interesting way to approximate it here is with Padé approximants, because they're rational functions, and they will give us a sort of pictorial view of where these singularities — supposedly logarithmic singularities, in the simple case — will be. I'm sorry, the iPad just turns off if I'm not doing anything on it. Here we are. We want to see the singularities of this function, and the best way to do it is to approximate it by a rational function, by Padé approximants. In practice, this integral will also be evaluated numerically, so it's not going to go all the way to infinity. But anyway, the approximations here are under control, because they're well-known numerical approximations. And once we have all these objects, we need to assemble them back into the trans-series. Here's the example of the trans-series with instanton action A; there's the string coupling. All these guys have been numerically evaluated, but I still need to say two things: first, what is sigma, and what happens when theta crosses a Stokes line? Here's the picture. The trans-series will allow us to reach arbitrary coupling, even strong coupling if you want. And it will allow us to venture into the complex plane, if we properly incorporate Stokes phenomenon. This has been done in a string theory context recently. This is the same picture I showed you before. What happens when you cross a Stokes line? We know that sigma is going to jump by the Stokes constant. So we can turn on or off an infinite set of multi-instanton corrections. This is very important in some of these examples, because these exponentially suppressed contributions can become exponentially enhanced somewhere else.
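[Editor's note: here is a minimal sketch of the Borel–Padé step on a toy series of mine, not the talk's string-theory data. From a handful of Borel coefficients of a series whose Borel transform has a branch point at t = -1, the poles of the Padé approximant line up along the would-be cut, and the pole nearest the origin approximates the location of the singularity, i.e. the instanton action.]

```python
import numpy as np
from scipy.interpolate import pade
from scipy.special import binom

# Borel coefficients of a toy series whose Borel transform is
# (1+t)^(-1/2): the branch point at t = -1 plays the role of the action.
coeffs = [binom(-0.5, k) for k in range(12)]

p, q = pade(coeffs, 5)   # [6/5] Pade approximant of the Borel transform
poles = q.roots          # Pade poles accumulate along the cut t <= -1
nearest = poles[np.argmin(np.abs(poles))]  # approximates the branch point

# the rational approximant is also numerically accurate inside the disk
value_at_half = p(0.5) / q(0.5)
```

The "pictorial view" mentioned in the talk is exactly a scatter plot of `poles`: a branch cut shows up as a string of Padé poles, whose endpoint estimates the instanton action.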
So when you're describing the free energy or the partition function of some string model, it could be that you have regions of the complex plane of the coupling where the dominance is no longer associated to the perturbative series you started off with, but to instantons. And in fact, that happens in some of those examples. So that's the idea of resummation. Now, let me tell you about the relation of these ideas to asymptotics, because somehow that's how we're going to test that what we're doing is valid. We come up with suggestions of trans-series to describe the free energy or the partition function of some string theory, and we want to make sure: is this suggested trans-series good or bad? We can test it by resorting to asymptotics. The idea is very simple; it's basically Cauchy's theorem. Cauchy's theorem tells me that I can rewrite some function F which has some branch cuts — here I'm putting one along some direction theta — as its discontinuity along theta plus some contribution at infinity. In most situations, in fact in all that we have been addressing, the contribution at infinity vanishes; it does not contribute to the problem. So we have a connection: if I just plug in here the perturbative expansion for the free energy, I know that the discontinuity is given to me by this computation of all these alien derivatives, and it's given in terms of the other sectors, the instanton sectors. I have a connection between perturbative and non-perturbative data. Let me show you some formulae and some pictures. Here's the perturbative sector. I can compute exactly the discontinuity along 0, because I know all the alien derivatives; it's that quantity. And if I just plug into the Cauchy dispersion relation, I get a closed-form expression. Let's look at the first few terms, again in the spirit of what we've been doing this afternoon. I have the factorial growth.
I have the subleading exponential growth controlled by the instanton action. I have the proportionality factor given by the Stokes constant. And then I get the one-instanton contributions — one loop, two loops, and so on — which come with powers of 1 over g. Then I get further exponentially suppressed stuff: here, with twice the action, this is the two-instanton contribution, and the loops around the two-instanton sector, and so on. So, in a picture, what this is saying is that if I know the instanton sectors, I can predict the large-order growth of my perturbative series. This is kind of obvious. But what we'd like to do is sort of the reverse. I can predict the large-order behavior if I know what's going on with the multi-instanton sectors; but I can also, just by looking at this series of perturbative coefficients, decompose it into its powers of 1 over g, exponential factors of integer multiples of the action, and so on, and try to extract information about all those non-perturbative sectors. Of course, there's a structural need for the Stokes constants: I need to know what they are. In some cases we can only access them numerically; we don't have an analytic, first-principles way of computing them. In some others, we do. The advantage here is that this picture doesn't hold only for the perturbative sector; it holds for arbitrary multi-instanton sectors. Here's an example. Remember this picture I had of the trans-series before, where there were allowed motions going back and forth. Here are the forward ones; they're going to be easy. Here are the backward ones. Remember, there were lots of ways to go back, so the combinatorics is a little more intricate. And in fact, when I write the formula for that, I'm not going to show you all the combinatorics. So the forward ones have a similar expression to what the perturbative sector had, but for the backward ones I'm not showing all of them; there are three dots there.
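[Editor's note: the "reverse direction" can be sketched numerically with a toy of mine, not the talk's data. Generate coefficients with the growth Gamma(2g) A^(-2g) (1 + O(1/g)), where the 1/g correction is a made-up stand-in for the one-instanton one-loop term, and recover the instanton action from ratios of consecutive coefficients, which converge to A^2 once the factorial is stripped off.]

```python
import math

A_true = 2.0
# Toy 'perturbative' coefficients with (2g)! growth, action A_true,
# and a hypothetical 1/g subleading correction standing in for the
# one-instanton loop expansion.
F = {g: math.gamma(2 * g) * A_true ** (-2 * g) * (1 + 0.5 / g)
     for g in range(1, 40)}

# Ratio test: F_{g+1}/F_g ~ 2g(2g+1)/A^2 at large g, so invert for A^2.
estimates = [2 * g * (2 * g + 1) * F[g] / F[g + 1] for g in range(1, 39)]
A_extracted = math.sqrt(estimates[-1])
```

In practice one accelerates the convergence of `estimates` (e.g. with Richardson extrapolation) before reading off the action; even the raw ratio already pins down A to a few decimal places here.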
They're given by sums over partitions, which is a little harder to pinpoint. And all the Stokes constants will appear now. We can go to more complicated problems where there's more than one instanton action. Instead of having a chain with allowed motions, I now have a lattice with more complicated allowed motions. Basically, this is the lattice for a case of two instanton actions. The first quadrant I cannot access with these motions dictated by the alien derivative, but the second, third, and fourth I can get to. Here I'm just showing some elementary motions, but there are more, because I can iterate them, and the combinatorics is going to be even more complicated. Let me show you some formulae, just for the flavor of it. Here's the case where I have two instanton actions, A and minus A. And immediately, at the perturbative level, I have discontinuities along zero and pi. I'll have nasty formulas; let's not mind the big formula, let's just look at the leading terms. The interesting thing here is that in many examples these terms here, the F terms, the 1|0 and 0|1 instanton sectors at one loop, have some symmetry, so they basically relate to each other. And we can decide, if you just put it on the computer, whether what I'm looking at is a problem that only has one instanton action, or perhaps two instanton actions organized like this. The difference, of course, is that if this term is not there, I just see smooth growth coming out of the large-order tests; but if both of them are there, the coefficients are going to oscillate. So these things are very clean to see in examples. And they lead to pictures which look like that: basically, now I have a green and a brown contribution to the large-order behavior. But if I want, I can also imagine these arrows going in the reverse direction, and extract data for these sectors.
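[Editor's note: the oscillation diagnostic mentioned above can be mocked up easily; this is my sketch with made-up actions, not the talk's example. With a single positive action the large-order coefficients have fixed sign, while with a pair of complex-conjugate actions the leading growth is Gamma(2g) times 2 Re(A^(-2g)), whose phase makes the signs oscillate as g grows.]

```python
import math

A = complex(1.0, 0.5)  # hypothetical pair of conjugate actions A, A-bar

# One real action: fixed-sign coefficients.
single = [math.gamma(2 * g) * abs(A) ** (-2 * g) for g in range(1, 16)]
# Conjugate pair of actions: the cosine of the phase flips signs.
pair = [math.gamma(2 * g) * 2 * (A ** (-2 * g)).real for g in range(1, 16)]

def sign_changes(seq):
    """Count sign flips between consecutive coefficients."""
    return sum(1 for a, b in zip(seq, seq[1:]) if a * b < 0)

print(sign_changes(single), sign_changes(pair))
```

This is the "very clean to see" signal: a quick sign-change count on the generated coefficients already distinguishes the one-action from the two-action scenario.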
And we have formulae for generic cases that I'm not even going to show you, but you can do all of these cases. Now, what's going to happen is that in the asymptotics of multi-instantons there are going to be novelties. It's not only that we're going to have g-factorial growth; there are some phenomena, which I will not discuss, associated to resonance, that give rise to g-factorial times log g growth. These are actually dominant compared to that, so they're really easy to see on the computer. And we can try to see if these asymptotics hold in examples, and they've been found in many examples. Here I'm just listing them — this is not chronological, this is just from easier to harder, at least in my humble opinion — and it has been addressed in lots of work. What I'll be focusing on, in the string theory example, is going to be topological strings on local P2. I will not say anything about all the other examples. And we're going to see how that works. Generic cases might be harder; this would be an example of a very complicated situation whose asymptotics, hopefully, I'll never have to work out. All right, so that concludes the first half. We have this formalism of the alien derivative, which is a little bit technical, with which we can compute all the required information on the discontinuities of my trans-series. And I can write down quite general formulae for the asymptotics, which I can use to check whether the trans-series for string theory, or whatever other problem, is working fine — and then eventually try to resum it and obtain some information at non-perturbative values of the string coupling. That's what I'll try to show you now in the second half. The example I want to discuss is topological string theory on local P2. Let me just make some generic comments before starting with the example of local P2. I will be dealing with the B-model on some local Calabi–Yau.
In most cases where this story applies, the geometry is mirror to some toric Calabi–Yau, such that the information is encoded in the mirror curve. Basically, by computing periods of the mirror curve, I will get most of the information I need. As I said a couple of slides ago, this is what the free energy of the string theory will look like. Here's the string coupling; here are some moduli associated to the geometry — in this case, the complex structure moduli. How do I compute these guys? At the end of the day, I will need perturbative data to start off with, and then try to check its asymptotics, see how I can resum it, and so on. Here's the idea; I'm not going to show how this comes about. The point is that these quantities are not actually holomorphic in the complex structure moduli. There's a holomorphic anomaly. If you want an idea of where it comes from: if I try to compute the anti-holomorphic derivative, it's given by an integration over the moduli space of genus-g surfaces, and while the integrand is a total derivative, this space has a boundary, which makes the result non-zero. There's a very simple picture of this. — What do you mean by non-trivial information on the Calabi–Yau? — Non-trivial information, where's that? Oh no, I just mean that if you want to compute these objects, you will be well served by just looking at the mirror curve. — Of what, of the Riemann surface? — No, no, no, please — okay, there are two Riemann surfaces here. The first thing is that if I have a Calabi–Yau which is mirror to some toric threefold, there's a curve, which is the mirror curve. Then there's another thing: I'm doing a genus expansion to compute the perturbative expansion of the string theory, and there's a genus-g surface associated with the order of perturbation theory. Don't confuse the two surfaces.
So I was saying, the reason why the derivative is not zero is because this moduli space has a boundary. And the boundary is, of course — here's an example — where there's a degeneration: it corresponds to the moduli where the surface degenerates to lower genus. There's one way to have a degeneration which still gives a connected surface, and here's an example where the degeneration gives a disconnected one. And this leads to the holomorphic anomaly equations. Basically, the connected degeneration gives you an operator acting on genus g minus one, and for the disconnected one there are several different ways in which you can split the genus. The exact way to write this formula is written here. The anti-holomorphic dependence is included in an object known as the propagator; if you want, it's a potential for the Yukawa couplings of the problem. And these derivatives here are with respect to z, the complex structure moduli; they can be either ordinary derivatives or covariant derivatives. This equation has been known for a long time — this paper, I think, is from '92, BCOV — and it has been solved perturbatively to very high orders in this example. That's the example we're going to be focusing on: the canonical bundle over P2, known colloquially as local P2. I'm actually not sure to how many orders these authors went, but probably the same as we did, which was about genus 112. So this is actually an effective way to generate perturbative data: you can get lots of information and try to play with it. Now, if I look at the perturbative series, it has dependence on holomorphic and anti-holomorphic coordinates. What's its large-order behavior? This is the first question you will ask if you generate data up to genus 112. How is it growing? There are two possibilities here. There will be an instanton action controlling this growth, and the instanton action might be holomorphic — I'm sorry, this is not what I meant, I meant the other way.
The instanton action might have an anti-holomorphic contribution to explain the growth of these guys, or it might be purely holomorphic and explain the growth of these guys. And by the way, by the upper arrow, I'm just referring to the cases where there are large N dualities to matrix models, where I only have to deal with holomorphic quantities. How can I know how this works? So, one way — What does this arrow mean? — This arrow here just means that in some examples you have large N dualities to matrix models, where the anti-holomorphic dependence goes away. That's what the upper arrow means. These arrows going up just ask what is controlling the large-order behavior: is it an instanton action which has purely holomorphic dependence, or is it an instanton action that will have — this is a question mark. At this stage I'm just telling you an example of a question that you would want to ask; in two slides I'll answer it. And the question is: how am I going to be able to answer this? Our suggestion is to rewrite the holomorphic anomaly equations for the partition function instead. You see, the holomorphic anomaly equations are basically recursive equations: genus g here, stuff at genus g minus one, genus g minus h. Notice the endpoints of the sum here: they don't allow h to be zero or g. How would I put a trans-series ansatz in here? It's not obvious. But if I could somehow rewrite them for the partition function, I could then solve naturally with a trans-series ansatz. This is actually very simple to do. — Why were there, in the previous slide, covariant derivatives on the genus g minus one free energy and ordinary derivatives on the others? — It doesn't matter; if you don't know this story, you don't need to worry about it. All you need for the following is to know that this is a recursive formula: if I know the previous guys, I can generate the next guy.
So it's a way to generate the perturbative expansion. The details are not important for the purpose of this talk at this stage. What I want to know is whether I can find analogue expressions to this one: instead of giving me recursive expressions to generate data in the perturbative case, recursive expressions to generate data for the non-perturbative sectors in the trans-series, basically for these guys here. It's a Cauchy problem that you have here. Sorry. In the holomorphic anomaly, it's a recursive Cauchy problem. You have to set the genus-zero and genus-one data, yes, and then you can do everything from that. Data at second order? Second order? No, first order. But the right-hand side. Yes, but what you're going to compute is the dependence of these guys on the S's. I will show you. But the right-hand side, there are two derivatives. Two derivatives, I agree. So you'd expect it to be second order. No, it depends which variable you count. Okay. Yes, yes, okay. The one derivative which you're using is in z-bar. Yes, yes. And the derivatives which you're using here are in z. It's like in the heat kernel. No, no, the only derivative in z-bar is this one here; these ones are derivatives in z. That's what I just said. Okay. In one variable there is only one derivative. That's what I said. And in the other one, two. So it's a, what, partial differential equation of something like that type. It's an equation; so you should worry if you want to adopt this. Let's rewrite it for Z. Here it is. Here's an example where the complex structure moduli space, as I mentioned, has dimension one. So basically there's a single z and a single S. That's the equation for Z. It is heat-equation-like, as you were saying, but that's not enough. If you just put the heat-equation piece, you don't get the exact holomorphic anomaly equations.
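Schematically, in the one-modulus case the equation for Z being described takes a heat-equation-like form (a sketch only; the precise coefficients, and the correction terms fixed by genus zero and genus one, are exactly what make it reproduce the holomorphic anomaly equations):

```latex
\partial_S Z \;=\; \frac{g_s^2}{2}\,\partial_z^2 Z
\;+\; \big(\text{corrections fixed by } F_0 \text{ and } F_1\big)\, Z ,
```

where S is the propagator carrying the anti-holomorphic dependence and z the complex structure modulus, so there is one derivative in one variable and two in the other.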
So you need to have corrections, which I quote here as "initial data" if you want, but it's quote unquote. These are corrections associated to genus-zero and genus-one contributions. And basically what they are doing is making sure that the limits in the sum are correct. If you just put the heat equation, h will start at zero and it will go all the way up to g, which is not what you need. In fact, if it goes all the way up to g, you no longer have a recursive equation here. But that's just a minor detail. That's the thing that you get. And now you have an equation for Z. So you can try to plug in Z as, well, let me say something before. Sorry, it was right here. If I just put Z as the exponential of the F's, I get back the holomorphic anomaly equations. Now what I want to do is to put Z as the exponential of the trans-series and see what this will create for me. And here's what it does. It will lead again to a recursion, now for the non-perturbative components, for these guys here. And the first equation in the recursion is this one. That's the question I asked first: will the instanton action be holomorphic or not? Here it's telling me that it is holomorphic. And that's good, because that means that I can compute A as an appropriate combination of the periods of the geometry, periods of the mirror. And this is in parallel with what had been conjectured in a slightly different example by these people, that this should always be the case. It's very nice to see that in fact it is. And the equations that we get, what we dub the non-perturbative holomorphic anomaly equations, are written here for this case of moduli space of dimension one. So I still have the derivative there with respect to the anti-holomorphic dependence. And now, if you want, it's quote unquote "covariantized", because the derivative of the instanton action appears. Then I have a term. I'm not telling you what this curly D is. In fact there are several curly D's.
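The trans-series ansatz being plugged in can be written schematically as follows (normalizations and the powers of the string coupling in the fluctuations are convention-dependent):

```latex
Z \;=\; \exp\!\Big( \sum_{n \ge 0} \sigma^n \, e^{-n A / g_s}\, \Phi^{(n)}(g_s) \Big),
\qquad
\Phi^{(0)}(g_s) \;=\; \sum_{g \ge 0} F_g \, g_s^{2g-2} ,
```

and the first equation in the resulting recursion, at order sigma, is the statement that the anti-holomorphic derivative annihilates the instanton action A, i.e. A is holomorphic and hence computable from periods.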
One of them is the second derivative that appeared earlier. And then there's sort of the quadratic piece. Now it's not only the derivatives of these guys multiplied; it also has contributions from the derivative of the instanton action. Let me define this: what are the non-perturbative trans-series components? I may be lost, I'm sorry; what is non-perturbative? It's anything where n is different from zero. So it will be the one-, two-, three-instanton contributions. And when n is zero, it's just the perturbative one. That's the nomenclature here. So that's the answer: I can do this. How long do I have? Okay, let me do just a very short interlude: these things can be generalized to the refined holomorphic anomaly. Basically, I don't want to say a lot about this. It's just the case where, instead of having just the string coupling that you have in the standard topological string, you end up with two parameters, which compute for you the Nekrasov partition functions in Omega backgrounds. There are also some holomorphic anomaly equations for this. And... Why is it an anomaly you're considering? Why are you saying it is an anomaly? Because naively you would have expected, as I said, oh well, sorry, naively you could have expected that this derivative would be zero. And because it's not, you call it an anomaly. If you want, it's just nomenclature. The same procedure can be run here. I'm just going to flash two slides; I don't want to talk too much about this and will go straight to the examples of local P2. It's not too hard to see that you can also write the master equation for the refined holomorphic anomaly, which, when you choose epsilon one and epsilon two in the standard topological string limit, gets you back the standard master equation. And if you take the so-called NS limit, it also gets you another master equation. And this allows you to compute non-perturbative versions of the refined recursion.
Here, in the case of the NS limit, you see that the derivative with respect to the weight has appeared, but I get a different set of operators there. But that's just a comment; I don't really want to go into that. What I want to continue with is to say that, the same way that I could have solved the perturbative topological string and computed solutions, these are polynomials in this variable S of degree 3g minus 3, for the non-perturbative topological string I can crank the wheel in this recursion and generate data as well. Now, because of that term that appears next to the derivative in S, the derivative of A, there are going to be exponential terms there. This doesn't happen in the refined NS limit. But again, that's just a comment. What we would like to know now is whether the solutions that I get, these one-instanton, two-instanton terms and so on, are correct; whether this guess, that this non-perturbative version of the holomorphic anomaly is the correct way to generate data, is the correct guess or not. And so we're going to check this in the example of local P2, and basically I'm just going to show you plots. Excuse me, may I ask a question? Sure. I thought that the statement that you guys were making, which I learned from Marcos, was that the limit when one epsilon goes to zero gives the same answer as epsilon one plus epsilon two going to zero, plus non-perturbative terms. So basically, the case where one epsilon is zero contains both perturbative and non-perturbative answers. Am I right? That's not the claim I'm making. I'm not sure which one. So the only claim that I'll be making in the following concerns taking their sum to zero. That's all I'll say about it. And then there is another one. There's the other one, which, yeah, I can show. I thought that was it. There's this one here. Yeah, in some clever way it was reconstructed; those non-perturbative corrections are already known. That is a different story; it's a different story.
I don't know if you want to comment about that, Marcos, but it's a different story from the one I'm going to talk about today. Do you want to say something? Well, I mean, what we do is to give a non-perturbative definition. Ricardo is talking about constructing trans-series. Then there is another issue of how these two things compare, and this we also started. I will talk about the comparison. Does it come out from the [unclear]? This is more general, I'd say; obviously, the general form of the trans-series. I will comment about the comparison, but I will not say anything about his work. So let's see how that works in the case of local P2. So here's a check of instanton actions. They're going to be associated to periods in the geometry. And what are those periods going to be? So it turns out, I'm not going to give you all the details, it turns out that the good coordinate to use is not z. It's going to be this coordinate psi, which, if you want, is the cubic root of z. And there are going to be three conifold points, basically at cubic roots of unity in this variable psi. And via Picard-Fuchs and so on, you can compute instanton actions out of these periods. And basically, here's the solution for the instanton actions around the conifold points. They're given by a massive formula, but whatever, it is what it is. And there are three of them. And here's a test. Let me try to tell you what I mean by a test. Remember this formula I showed you here? So I want to test, say, the instanton action here. I know this sequence up to genus, I don't know, 112, whatever we computed. And I'm going to work out, out of this sequence, whether I can extract A, and how much the A that I extract out of this sequence agrees or disagrees with my prediction coming from computing a period. And here's what it is.
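The kind of large-order test just described can be sketched numerically. Below is a minimal toy version (the action A and the correction coefficient c are made-up numbers, not local-P2 data): generate a sequence with the expected factorial growth, estimate the square of the action from consecutive ratios, and accelerate convergence with Richardson extrapolation.

```python
import math

# Toy large-order test: recover an instanton action A from growth of the form
# F_g ~ Gamma(2g - 1) / A^(2g - 1) * (1 + subleading).  A, c are illustrative.
A = 2.5          # hypothetical instanton action to be recovered
c = 0.7          # hypothetical subleading (one-loop-type) coefficient

F = {g: math.gamma(2*g - 1) / A**(2*g - 1) * (1 + c / (2*g))
     for g in range(1, 60)}

def A2_estimate(g):
    # from F_{g+1}/F_g ~ 2g(2g - 1)/A^2 at leading order
    return 2*g * (2*g - 1) * F[g] / F[g + 1]

seq = [A2_estimate(g) for g in range(2, 50)]

def richardson(s, order):
    # standard Richardson extrapolation, accelerating power-law convergence
    s = list(s)
    for k in range(1, order + 1):
        s = [((n + k) * s[n + 1] - n * s[n]) / k for n in range(len(s) - 1)]
    return s

best = richardson(seq, 3)[-1] ** 0.5
print(best)   # converges to the input action A = 2.5
```

In practice one runs exactly this kind of estimator on the actual F_g data and compares the extracted A against the period prediction.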
So basically, in green is what comes out of testing the sequence for the instanton action associated to the first conifold point, basically that one. Here, Tc is the flat coordinate around the conifold, basically for this function. And in blue is what that function is, and it's spot on. And then there's a region of psi where the 4 pi squared i instanton action kicks in and takes dominance. The picture may seem pretty confusing. There are, in fact, three instanton actions associated to these three conifold points. And here, I'm just plotting for you the branch points and cuts of each one of conifold 1, conifold 2, conifold 3. But that's not the whole story. Around the large-radius point there's also an action, which we can call the scalar action or the large-radius action, whatever you want to call it, which is given basically by the mirror map, via a Meijer G-function. That will also contribute. Let me show you how they contribute. In this plot, I am showing how the conifold and the scalar actions play against each other. So what am I plotting here? Out of the perturbative series, I construct a Padé approximant to the Borel transform. The Padé approximant, remember, is a rational function. And if I know its singularities, I will sort of know where the cuts of these simple singularities will be located. And in black, the dots that you see all over are precisely the singularities of the Padé approximant to the Borel transform. Then the colors are the analytic results. In red, I'm telling you, as I vary psi, notice that psi, the complex structure modulus, is changing. I'm bringing it from 2.5 all the way down to 0.625. I'm plotting in red the trajectory that the one-instanton action traces. In green, the two-instanton. And in purple, the scalar action. And you can see that. Let's follow conifold one.
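The Padé diagnostic described above is easy to reproduce on a toy Borel transform with a known logarithmic branch point (the value of A below is arbitrary, purely for illustration): the poles of a Padé approximant line up along the would-be cut, with the leading pole marking the singularity, i.e. the instanton action.

```python
import numpy as np
from scipy.interpolate import pade

# Toy Borel transform with a log branch point at t = A:
#   B(t) = -log(1 - t/A) = sum_{k>=1} t^k / (k A^k).
# A is an arbitrary illustrative value, not a local-P2 action.
A = 1.3
N = 20
coeffs = [0.0] + [1.0 / (k * A**k) for k in range(1, N + 1)]

# near-diagonal Pade approximant of the truncated series
p, q = pade(coeffs, N // 2)

# the poles of p/q mimic the branch cut [A, infinity)
poles = np.roots(q.coeffs)
real_poles = np.sort(poles[np.abs(poles.imag) < 1e-6].real)
print(real_poles[0])   # the leading pole sits just above t = A
```

Tracking how these poles move as a parameter (here it would be psi) is varied gives precisely the kind of trajectory plot described in the talk.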
The dots, the singularities of the Padé approximant, are just always there, spot on. And you can follow with your eyes the same thing for conifold two. The scalar action is right there, and you see that it's contributing. And then there's a region where it's gone. So basically what happens here is that we have an illustration of what is known as higher-order Stokes phenomenon: sometimes the Stokes constants themselves can jump and can take stuff out of contributing to the large order. It's actually very neat that these things happen also in this example. So we have some strong checks of all these instanton actions. Now we want to know, remember that the final thing that I did in the previous section was to compute one-instanton and two-instanton contributions. Let's check if this is also good or not. Here's a check, around conifold one, of the one-instanton sector there. The one-instanton sector should look something like that. And this is testable at large order by using this sequence, where I already know the perturbative data. Of course, g never goes to infinity; I'm limited by whatever the computer can give you. And these are the previous guys that I've already checked, and I'm going to check the next guy. And I'm going to do this at genus 0, 1, 2, 3. That's the next three figures that are going to appear: genus 0, 1, 2, 3. And I'm going to do this at three different points in moduli space. There's a point there which is psi equals 2, and there's two others; I don't remember their values, they're just complex numbers. And so these three points are the three columns that I'm going to show you here. And you can already see that there are some dots that are just spot on, on top of the lines. The dots, basically I have blue dots and green dots, they just give you the real and imaginary contributions to that guy. The dots are coming out of the sequence, and I'm testing that function with the lines.
I think it's pretty convincing; the data is telling you that this was the correct way to compute the one-instanton terms. We can do the same for the two-instanton terms. Now the function is more complicated. And by the way, there are also two contributions. I don't want to get too much into that, but at the two-instanton level there are basically two terms contributing with the same weight to large order. There's the second instanton of conifold 1, and there's a mixed contribution of conifold 1 and conifold 2. No, I'm sorry, I didn't mean conifold 2. I mean conifold 1 appearing with two different powers of the instanton action, the plus and the minus. This is the picture that I showed you before, where there were the green arrows and the brown arrows; they both contribute. There's another function appearing there. These two terms are testable out of this sequence, which, if you want, is the large order of the one-instanton sector that we have already checked. And these terms appear here and here. And in the next figure, at a fixed point of moduli space and varying propagator, I'm showing you some tests. This is the test of the two-instanton conifold, and this is the plus-minus conifold. I think, again, the tests are very convincing, that also the two-instanton contribution is coming out straight out of this. So we know that this method to generate the non-perturbative completion of topological strings, using this non-perturbative version of the holomorphic anomaly equations, seems to be generating a trans-series which completely agrees with what is expected from asymptotics. So now the second question is: can we use it to resum, and to actually compare against something? Let me, I think maybe this has been a little bit dense, go back to one of the first slides, where I was telling you how to do resummation. So basically, there are several steps I have.
I start off with the perturbative guy, then I try to do this Borel-Padé approximation, but I have to do Borel-Padé of all the instanton sectors, or as many as I can numerically, and put them back in the trans-series. That's what we're going to do. Let me find just some index to go back here. What am I comparing here? So first of all, there's this massive amount of work by Marcos and collaborators on giving non-perturbative definitions for string theory on local P2 and other backgrounds. These definitions come out of quantizing the mirror curve. So basically, they start with the mirror curve, they construct some, if you want, Schrödinger-like operators, and things go from there. I'm not going to have a lot to say about that. What I want to talk about is the comparison of the results that they have obtained with the suggestion that the trans-series, with the specific semi-classical decoding that Marcos already talked about in the morning, is capturing the same phenomenon. And here we see, in black, the black line is what comes out of doing the resummation of the perturbative series alone, of the F's, using Borel-Padé. And in red are the exact results. And you say, great, we're done, there's nothing else to be said. That's not true. I mean, it looks really nice on the plot, but if you look at the numbers, there is a consistent mismatch in all of them. So the perturbative series is not the whole story. So what can we do to check whether that mismatch is actually captured by the trans-series or not? We can check the difference between this mismatch and what the trans-series tells you the one-instanton contribution should be. This one-instanton contribution, if you want, is the next guy in the trans-series that I'm expecting to give the correction. Here's the comparison. By the way, this is taken from different papers. So now I have here lambda, and, I'm sorry, N and h-bar. But h-bar is basically the string coupling.
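The resummation pipeline described above, Borel transform, Padé approximant, Laplace transform back, can be sketched on a toy factorially divergent series whose exact Borel sum is known in closed form (the series below is an illustration, not topological-string data):

```python
import math
import numpy as np
from scipy.interpolate import pade
from scipy.integrate import quad

# Toy divergent series: F(g) ~ sum_n n! b_n g^n, where b_n are the Taylor
# coefficients of (1 + t)^(-1/2).  Its Borel sum is known exactly:
#   F(g) = int_0^inf e^(-t) (1 + g t)^(-1/2) dt.
N = 24
b = [(-1)**n * math.comb(2*n, n) / 4.0**n for n in range(N + 1)]
series = [math.gamma(n + 1) * bn for n, bn in enumerate(b)]

# Step 1: Borel transform -- strip the n! growth
borel = [cn / math.gamma(n + 1) for n, cn in enumerate(series)]

# Step 2: Pade approximant of the truncated Borel transform
p, q = pade(borel, N // 2)

# Step 3: Laplace transform back along the positive real axis
# (no Borel singularities there for this toy example)
def borel_pade_sum(g):
    val, _ = quad(lambda t: math.exp(-t) * p(g * t) / q(g * t), 0, np.inf)
    return val

g = 0.3
exact, _ = quad(lambda t: math.exp(-t) / math.sqrt(1 + g * t), 0, np.inf)
print(borel_pade_sum(g), exact)
```

In the topological-string setting the same three steps are applied sector by sector, to the perturbative series and to each instanton sector, before reassembling the trans-series.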
And if you want, N is lambda, which is the flat coordinate. I am sorry, this is a different paper, so we use different notation, but the idea should be clear. What we're doing here is comparing this difference. And if the difference is good, it has a dot. The different colors in the dots just mean how many numerical digits I am matching. And if the comparison is bad, we have a cross. And you say, OK, this is not so good. It's good somewhere; it's not good somewhere else. But you can ask, what's the boundary that separates good from bad? And that curve is precisely the Stokes line for the second-instanton sector that I talked about previously. And so you'd say, OK, I only expect things to match if I take Stokes phenomenon into account. What happens upon Stokes phenomenon? It's not a big surprise: once I do the Stokes jump of the trans-series coefficient, things match on the other side. So in fact, not only is this method to construct a trans-series validated by asymptotics, but it very nicely matches against a large body of work that Marcos and collaborators have achieved. So this is very nice. You can ask if you can go further. Here's an idea of something which would be nice. We haven't done this, but it would be nice to look at the case of string theory on local P1 cross P1. Because now there are two distinct non-perturbative definitions in the literature. One is the one that we've also used for local P2, of Marcos and collaborators, quantization of the mirror curve. The partition functions, here is the partition function, came out of this expansion here of the Fredholm determinant, where this rho is just the inverse of the quantized mirror curve. But there's another definition, based on large-N duality, that the partition function of string theory in that background should be given by Chern-Simons theory on a lens space, which localizes to a two-cut matrix model. So there are several interesting questions here.
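For concreteness, the Stokes jump invoked here is the standard one-parameter trans-series statement (the normalization of the Stokes constant is convention-dependent):

```latex
\sigma \;\longrightarrow\; \sigma + S_1
\qquad \text{upon crossing the Stokes line},
```

so that on the far side of the line the resummation must include the correspondingly shifted instanton contribution; that is what "things match after the Stokes jump" means.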
One is to ask how much these definitions differ. That's an interesting question in itself, but in some sense it doesn't have a lot to do with resurgence. The other question is, what happens if I construct a trans-series? By this method of the non-perturbative holomorphic anomaly equations, I would expect to construct a single trans-series. So can the same trans-series match both results? The naive expectation, again we haven't done this, so I cannot give you a definite answer, but the naive expectation would be that you could think of the trans-series as sort of a general solution to the differential equation, and the non-perturbative definitions as different choices of boundary conditions. So by choosing different trans-series parameters, I am matching against one or the other of these definitions. This is only possible, of course, if the amount by which these definitions differ is exponentially small terms, where the exponential is controlled by the instanton action. And then the choice of different trans-series parameters would amount to different semi-classical decodings, via the trans-series, of these two definitions. In the spirit of, oops, there goes the iPad again, in the spirit of what Marcos was saying. All right, let me just say one minute on what happens in the A-model. The A-model gives you Gromov-Witten invariants. Gromov-Witten invariants should see some trace of this factorial growth in the genus. And it so happens that you only see this trace if the genus and the degree grow at the same time. I'm not going to say anything else, because I think I'm over time. So I hope I've convinced you that these things are useful in physics and, more specifically here, that was what I was asked to do, in string theory. And it seems very clear that the observables can be described by resurgent functions. So we can construct them non-perturbatively starting out of perturbative data. But of course there are many things that still can be done.
I mean, besides applying this to many other examples, I will just give you a comment on one thing: I didn't say anything about Stokes constants. I didn't say how to compute them. And in fact, this is a pressing question, because in most examples we can only access them numerically. I mentioned this in passing. It would be very nice to have some first-principles approach to computing them, but I would say that that's one of the big open problems at this stage. Thank you very much. Are there any questions? In string theory applications, are there some cases where you can analytically compute the large-order behavior of the series that you generated? Yes, yes, there's string theory on the conifold. This is a very simple example. You can compute the function exactly, and then you can also know all the terms in the large-order expansion and compute all these things analytically. That's a nice example. I have a question about your assumption at the beginning that the resurgent function was simple, so you had logarithmic singularities and simple poles. Does anything you said deteriorate when you have fractional powers or other singularities? Right, so one thing that I should say is that this class is not as restricted as it seems, because what happens is that you can consider, if I just write down for you the perturbative series, say with some coupling g_s, I can say, well, why don't I, instead of looking at this series, look at g_s to the power of five-halves times that series? And then it turns out that if you do that, you get a Borel transform which is not going to have logs. It's going to have square-root branch cuts or whatever, but it's going to be related by fractional derivatives to the one with logs. So somehow, instead of thinking that there's only one way to reach the simple case, there are many ways; basically it's a representative of a lot of different possible Borel transforms. So it's a rather large class. I'm sorry, I'm not.
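The fractional-power remark can be made slightly more explicit (a sketch only, in one common convention for the Borel transform): for a series with coefficients c_k,

```latex
\mathcal{B}\big[g_s^{\alpha}\,\Phi\big](t) \;=\; \sum_{k} \frac{c_k}{\Gamma(k+\alpha)}\, t^{\,k+\alpha-1},
\qquad
\Phi(g_s) = \sum_k c_k\, g_s^k ,
```

which differs from the Borel transform of Phi itself by a fractional integral or derivative of order alpha; logarithmic singularities then trade places with algebraic (e.g. square-root) branch points, without losing information.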
What could you say about the applicability of these ideas to renormalons? Well, for renormalons, I really wouldn't know what to do, because you need a semi-classical description. That's something which is required in all of this. There's no semi-classical description there. Right, so that's why I'm saying I wouldn't know what to do. But if you're given a series, if you're given a series whose coefficients have this factorial behavior? Yes. How would you know? Then you can extract some information about what that semi-classical description would be out of the sequence, by using the sort of tests that I illustrated for the topological string case. So basically, you can extract the instanton action, what would be, let's say, the renormalon action; you could extract what would be the one loop around the one-renormalon sector. You could extract these numbers, but at least I wouldn't know what to compare them against, because I still lack some semi-classical theory telling me what renormalons are. But certainly, out of the sequence I could extract some data. But there's another problem, which is that it's very hard to generate data with Feynman diagrams. In these cases it's much easier to generate, because we have recursion relations that allow us to generate a lot of data. But there's another thing which could happen. You might have some corrections which are not visible loop by loop. You could have. Just to give you an example: you can have some theory which, at the classical level, has some symmetry like chiral symmetry, and you know loop by loop that the chiral symmetry is broken. There is no way to see that with your large-order corrections. So, in some sense... Right. Well, look, I haven't done this example, so I don't know how much you could extract out of looking at these sequences. I know some things you could extract; I'm not claiming you could extract everything.
But could I just summarize what you were saying: these techniques work fine as long as the non-perturbative corrections come only from instantons? Well, not necessarily. Some of these objects, we wouldn't have an instanton description for them either. At least I would enlarge this claim to instanton-like objects. There are some objects that we find here, I didn't talk about them, that we know are there, but that we don't have a semi-classical description for. They're not renormalons, because these are theories without renormalons, but we don't know what they are. So I wouldn't say that instantons are the only thing you can see here. You can see more than that. But in the case of the instantons, the question which somehow I didn't understand from the presentation: there's always an ambiguity if you have a choice of resummation contour, with various lateral choices. So here it's translated to this choice of the trans-series parameter, if you want. And one way that you can fix this is that, say in the case of topological strings, we have the non-perturbative definition that Marcos gave us. We test it at a specific point to fix the trans-series parameter, and then we just go throughout moduli space; if you want, we do one measurement, and we use that measurement to then check that everything is consistent everywhere else. Thank you very much.