This talk is about the landscape of vacua of string theory, and Mike laid out a founding vision for that field. He's contributed to really most of the key technical developments over the years. I'm going to explain how we found incarnations of Mike's ideas: ensembles of flux vacua. I first met Mike, as far as I recall, at Les Houches in 2001. I was a first-year graduate student, and I was a little bit worried that I wouldn't be admitted to the school. But I went into Steve Shenker's office and asked him about it, and he said, don't worry, I'll write to my old buddy Mike Douglas and he'll take care of it. And it got taken care of somehow. This was a really formative experience for me. It was my first opportunity to meet this portion of the European string theory community and others from further afield. Also many of you here; some of the current organizers, in fact, were organizing then. And during this meeting, I remember Mike giving me an extremely patient one-on-one explanation of the derived category description of D-branes, which he was thinking about at the time, and which was very far above my pay grade then as now, but I remember how kindly and carefully he explained it over really a long period. So that made quite an impression on me. Now, his lectures at this school were outrageously ambitious, because what he tried to do was, as it says in the write-up, to anticipate and lay the groundwork for the Third Superstring Revolution. And what he said was that the Third Superstring Revolution was likely to involve vacua and vacuum selection. He said many other things besides, but that was what I took away from it. And two years later, he wrote this paper, which again set out a vision for the field. If you can read it, it says: we initiate the study of ensembles of effective Lagrangians. The idea being that one should try to understand the statistics of vacua in string theory and learn how to make predictions from that.
The next year, with Frederik Denef, he got a lot more specific and gave results for the distribution and number of vacua in flux compactifications of type IIB string theory, which is the topic today. So in these studies of the landscape, there's a microscopic and a macroscopic approach. The microscopic approach is enumerative: one constructs individual solutions. On the statistical side, one tries to study general properties, to look for universal patterns. Now, my first encounter with Mike's work was on the statistical side. He led the way in both, but for me, what I grabbed hold of was his counting of critical points, the paper I showed with Denef. They had laid out this very beautiful set of statements about the characteristic distribution of critical points. And in 2011, with Marsh and Wrase, I tried to understand how many of those critical points were in fact minima. We still did this statistically, not enumeratively. We learned something that I thought was interesting. I won't be talking about it today; it's not related to today's subject. But we reached what I felt was an end of the road there. It didn't seem possible to me to go much further than Mike had gone (maybe we added a little bit) in the statistical approach without devolving into progressively epicyclic modeling of our own assumptions. The first few cuts were very clear and right, and after that one had to think for a long time or get new data, somehow revitalize the subject by having data that could be used to come up with statistical models. This is something that Mike himself had stressed to me often when we spoke in 2011, 2012, 2013 about those works. Essentially he kept asking: well, how does this really work in real Calabi-Yaus? Is this how it goes? And I didn't know the answer. We thought we ought to work on it. Now, the regime where one was particularly lacking knowledge is the regime of a large number of moduli.
The statistical arguments that Mike and Frederik had made were arguments in a so-called continuous flux approximation, where the flux quanta are treated as essentially continuous. And related to that, though not identical to it, the number of moduli fields was some large number, an expansion parameter. So for example, our results were matrix model results at large N, and the N was the number of moduli. As you'll see partway through the talk, this presented a pretty big obstacle to enumerative work. You can already anticipate that: if you're trying to solve specific problems in cases where the moduli space dimension is large, your task is harder, right? So this was the trouble ahead of us. From 2014 on, we built tools to study this case, and this talk is about one set of solutions that resulted from those efforts. The larger program is constructing corners of the string landscape; I'll just be showing you one corner. Okay, so just a little bit about the physics motivation, or one physics motivation, for this line of work. As we all know, the cosmological constant problem is a severe crisis, and has been for a very long time. The simplest explanation for the observed universe is that the acceleration is caused by vacuum energy, and the problem is why the vacuum energy is so small. Rather than thinking specifically in terms of high-scale cutoffs, I find it useful to talk about it in terms of the idea that theories of small fundamental objects characteristically produce small, dense universes. If you have a theory made of strings, then until you've done much else, you ought to expect the universes that result to be string-sized. This is of course the question of scale separation that Alessandro was telling us about. So we ought to ask: why is our universe exponentially larger than its constituents? And this problem is far too hard; I have nothing to offer on this question. That's the real cosmological constant problem.
But here's a question which is pretty close to it, on which I can offer something: how can one find such a universe? Namely, how can you take a theory of small constituents and exhibit an exponentially large universe? This is literally the problem Alessandro was talking about: how can we exhibit scale separation in solutions of string theory? Now, that won't necessarily explain why the universe in which we live has this property, but it's certainly a first step, I would say a necessary first step, to show that our theories at least have the capability of producing exponentially large universes. Now, the Holy Grail in this subject, perhaps you can't read it because of projection effects, is to construct solutions of string theory with small cosmological constant. And that's absolutely not something that has been achieved, but there's been progress, and I'll be reporting on progress in this direction. The progress is that we've found small negative cosmological constants in string theory. We've found a class of vacua of string theory in which the vacuum energy is less than, indeed can even be much less than, the observed value of 10^-123 in Planck units, and yet the internal space is modest in size. So these are universes in which the radius of curvature is larger than the radius of curvature of the de Sitter universe we inhabit, and they have hierarchical scale separation: we have examples in which the AdS length over the Kaluza-Klein length is bigger than 10^100. The vacuum energy is negative; these are anti-de Sitter solutions, and so they don't describe our universe, but they do give a new angle, maybe a hope of a new angle, on the cosmological constant problem. The mechanism is polynomial fine-tuning of topological parameters, and the purpose of this talk is to explain the construction: how does one achieve polynomial fine-tuning of topological parameters in order to get exponentially small vacuum energy?
I should say, I'll describe this in a minute, but in case you can't read that, this is work with Demirtas, Kim, Moritz, and Rios-Tascon in 2021. Okay, so let me not make you guess throughout the talk what the main claim is going to be. Here's a summary of the main claim. We find solutions of type IIB string theory of the form AdS4 × X6, with X6 a Calabi-Yau orientifold. The solutions preserve 4d N=1 supersymmetry. They have no moduli. The vacuum energy is exponentially small, in some cases smaller than 10^-123. One doesn't have to be fixated on that number; we find plenty of solutions in which it's 10^-20, say, but this is possible. And the mechanism is a racetrack, I'll explain what a racetrack is, of worldsheet instanton contributions in the mirror Calabi-Yau threefold. As I will explain, modest integers, Gopakumar-Vafa invariants and flux quanta, get mixed together in a way that exponentiates them, and that gives the vacuum energy. So as an example, these numbers are not made-up numbers: I'll find 2, 252, and 58 in an explicit example, and their ratio, 2 over 252, raised to the 58th power, is about 10^-122. That's the kind of equation at the heart of this. The work, of course, is to show that you can actually get these numbers in this arrangement in a solution of string theory. In an EFT this is a very easy thing to write down, but it's sort of an absurd thing to write down. Okay, so this allows us to find exponential hierarchies with polynomial effort, right? I only have to work polynomially hard to get these kinds of numbers, and then I get something like that. That's the game. Now, these models are certainly not realistic. First of all, I've already said that they have negative vacuum energy. Also, the Kaluza-Klein scale is high, very high, 10^-3 M_Planck or something like that. No problem with that, but some moduli are ultralight.
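The headline arithmetic can be checked in one line: a modest ratio of integers raised to a modest power already lands at the quoted scale. A quick sanity check in Python, using the 2, 252, and 58 quoted above:

```python
import math

# Ratio of the two leading GV invariants in the example, raised to the
# power that the flux choice engineers: (2/252)^58.
log10_w = 58 * math.log10(2 / 252)
print(round(log10_w, 1))  # -121.8, i.e. (2/252)^58 is about 10^-122
```

So "modest integers, exponentiated" really does reach below the observed vacuum energy scale of 10^-123 with only slightly larger inputs.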
Some of the moduli have masses of 10^-33 electron volts, say. So it's just not realistic. Now, an uplift to de Sitter, which we're certainly thinking about, could in principle give vacuum energy plus 10^-123 M_Planck^4. And I claim it could even give that in a way that's sort of believable and demonstrable, but this would be with moduli and supersymmetry breaking scales that are ridiculously small. So that's not the real cosmological constant problem, which is to get small cc after supersymmetry breaking at scales at LHC level or above, and I'm not making any claim whatsoever about that, I want to be absolutely clear. What we're solving is more of a supersymmetric cc problem, which is to show how an exponentially large supersymmetric universe can arise in a theory with a small fundamental length scale. Any questions about the summary and goals? Okay, so these works changed at least our picture of how accessible such solutions might be in the string landscape. Here are the people who did the work: Mehmet Demirtas, Manki Kim, Jakob Moritz, and Andres Rios-Tascon, who were on the first paper on this with me. And then all of us, together with Naomi Gendler and Richard Nally, are doing a lot of follow-up work currently. Okay, so how do you find vacua? A compactification of a string theory on a six-manifold that preserves 4d N=1 supersymmetry is characterized by a superpotential, which is a holomorphic function, and a Kähler potential, which is not, and these depend on some moduli fields. Because the superpotential is protected by non-renormalization theorems, and because it is holomorphic, it's effectively knowable: with enough work one can really understand it. The Kähler potential, in the present era, is not; it's poorly understood beyond one loop. And the scalar potential is determined by both. You need to know both. So what can you do?
Well, the general strategy throughout this subject is: find compactifications where you can compute the superpotential. Use the known superpotential and the leading-order Kähler potential to find vacua in a parameter regime where you can show a posteriori that it's consistent to neglect the corrections to K that you don't know. One can never by such means rule out conspiratorially large corrections to the scalar potential, but you can rule out invalidation of such vacua by any reasonable kind of correction. Okay, so that's the game: you find a corner where the knowledge that you have might suffice. So let's actually do that. We're going to think about type IIB string theory on an orientifold of a Calabi-Yau threefold. The moduli here are the axio-dilaton τ, which is complex, h^{2,1} complex structure moduli z_a, and h^{1,1} Kähler moduli t_i. So h^{2,1} and h^{1,1} are the Hodge numbers of the threefold. And we're going to choose quantized three-form fluxes, F3 and H3, and determine thereby a flux superpotential. The flux superpotential, after much work by many people, has been sorted out; the original form was written down by Gukov, Vafa, and Witten in 1999. The flux superpotential itself is the integral of the three-form flux wedged with the holomorphic (3,0) form of the Calabi-Yau, and that can be written as a polynomial in the moduli fields plus a sum of exponentials in the moduli fields. So let's talk about general structures first. Non-perturbative terms in the superpotential come from D-brane instantons, and these look like a sum of Pfaffian prefactors A_i, which depend on the moduli in general, times exponentials of the Kähler moduli. Okay, so this is the general structure. And in principle, if you had lots of examples where you knew all of these data, then you could just go to town and try to find minima and ask what properties the minima have. Do you find interesting vacua? But that's too hard, it's really too hard now.
What we do is simplify things. We find conditions under which we can prove that the polynomial part of the flux superpotential is zero, exactly zero. So we're going to solve a Diophantine equation that guarantees that the polynomial terms in the flux superpotential are zero. We will furthermore ensure that the sum of exponentials that remains is the sum of two terms that compete in a racetrack, I'll explain what that means, plus demonstrably negligible terms. And we will ensure that the prefactors here, which in general are functions of moduli, are not: we'll ensure that they're constant numbers instead. So these are three massive simplifications. I want to stress, I'm not assuming that they happen, I'm going to show you that they happen. I'm going to find topological conditions that ensure that they occur. And when you do that, the whole superpotential is a lot simpler. Now the full superpotential for the whole system is a sum of two exponentials, plus a sum of a whole bunch of exponentials in the Kähler moduli but with constant prefactors. So the only moduli fields here are τ and the t_i, and the rest are just numbers. This is now a pretty well understood thing, and note that it only involves exponentials. Now the structure that we want to make use of is obvious: if the superpotential only has exponentials in it, then the value of the superpotential at the minimum of the resulting scalar potential is going to be exponentially small. So that was the only task. But how are you going to make sure that you have a minimum rather than a runaway? Well, that depends on what the numbers are, right? And so we have to show that we can compute the numbers, the various prefactors and exponents, and find examples of those numbers such that the minimum doesn't occur at infinity but occurs at some finite place with desirable properties. And so we'll do that.
And with this structure, we're going to find that the vev of the superpotential, so I'll periodically use this symbol W_0, it's just the expectation value of the superpotential, is exponentially small. You should think of that as setting the scale of SUSY breaking really low compared to the string scale, and that gives us lots of control. Okay, so that's what we're going to do. Now, how are we going to do it? Well, really we're inspired by this paper, Building a Better Racetrack, that Mike wrote in 2004. So what is a racetrack, why do we need to build a better one, and what was the idea that we're drawing on from that work and related works of Mike? To think about a racetrack, consider a four-dimensional N=1 field theory with a superpotential that comes from instantons. Let's say the theory has just one complex field z in it, and suppose the instanton superpotential is the sum of some decaying exponentials, as written there: W(z) = n_1 e^(-p_1 z) + n_2 e^(-p_2 z). Here the n's are some real numbers and the p's are some positive numbers, okay? So some decaying exponentials. And let's look along the real line, along real z. This thing decays as real z goes to infinity, because I'm insisting that the exponents p are positive numbers. But you can easily convince yourself, if you just treat this as a problem in single-variable calculus, minimizing this function along the real line, that you can find a minimum, and at the value z_min where the minimum is found, the expectation value of the superpotential is given by this function. It's some trivial exercise. But the thing I want to draw your attention to is that if the difference between the two exponents p_1 and p_2 is much smaller than p_2, and if the ratio n_2 over n_1 is small, then you have a small number raised to a large power, and you have an exponentially small expectation value.
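This single-variable calculus exercise is easy to make concrete. Below is a minimal sketch with toy numbers (not taken from any string example): for W(z) = n1 e^(-p1 z) + n2 e^(-p2 z) with n1 > 0 > n2 and p1 > p2 > 0, setting W'(z) = 0 gives z_min in closed form, and the value at the minimum is a small ratio raised to a large power.

```python
import math

def racetrack_min(n1, p1, n2, p2):
    """Minimize W(z) = n1*exp(-p1*z) + n2*exp(-p2*z) on the real axis.

    Assumes n1 > 0 > n2 and p1 > p2 > 0, so a critical point exists:
        W'(z) = -n1*p1*e^(-p1 z) - n2*p2*e^(-p2 z) = 0
          =>   e^((p1 - p2) z) = n1*p1 / (-n2*p2)
    Returns (z_min, W(z_min)).
    """
    z_min = math.log(n1 * p1 / (-n2 * p2)) / (p1 - p2)
    w_min = n1 * math.exp(-p1 * z_min) + n2 * math.exp(-p2 * z_min)
    return z_min, w_min

# Toy numbers: nearby exponents, mildly hierarchical prefactors.
z0, w0 = racetrack_min(n1=1.0, p1=1.1, n2=-0.1, p2=1.0)
# |W(z_min)| = |n2| * (1 - p2/p1) * r^(p2/(p1-p2)), with the small
# ratio r = -n2*p2/(n1*p1): a small number raised to a large power.
```

With these toy values, z_min is about 24 and |W(z_min)| is of order 10^-13, even though every input is order one or so; shrinking p1 - p2 or the prefactor ratio makes the hierarchy arbitrarily steep.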
That's the racetrack mechanism, okay? This is an age-old idea; all we're going to do is realize it in a string compactification, really do it. So what we need to show is that the flux superpotential is a sum of exponentials with these conditions: the exponents are quite close to each other and the prefactors are a little bit hierarchical. So now comes the one fairly heavy slide where we do all the work of setup. Let's set our notation. We're going to write down a symplectic basis of H^3; that's where we're putting the fluxes. Then we can define the periods as the integrals of the (3,0) form over such a basis. The periods contain, as their information, coordinates on the moduli space and the derivatives of the prepotential with respect to those coordinates. And the prepotential on general grounds takes the form of a sum of polynomial terms and a sum of instanton terms, and I'll tell you about those pieces in turn. So the key technical advance, the thing that allowed us to do the work we have done, is being able to compute the prepotential via mirror symmetry. And if you've followed mirror symmetry, you may say, wait a minute, this is something that was done in the very early 90s. This is what mirror symmetry is for, as a practical matter: its job is to compute prepotentials for you. Our contribution is just doing this in threefolds with many moduli. If you think back to the beginning, I was saying one often needs to work in cases where the number of moduli is large; the heroic works of the early 90s worked with one or two moduli, and one hasn't gone very far beyond that since. So we'll go up to very large numbers. The computation is purely geometric. The essential idea is that one has to compute the so-called fundamental period: the integral of the (3,0) form Ω over a distinguished three-cycle in the Calabi-Yau threefold. Why is there a distinguished one?
Because these threefolds will be hypersurfaces in toric varieties, and in toric varieties there's a distinguished torus, so I can make a distinguished three-cycle. If I integrate the (3,0) form over that distinguished three-cycle, then by some relatively well-known techniques you can extract all the periods. The periods are, after all, just the integrals of Ω over various other cycles. Now, that's all well and good, but that gives you the periods in a real basis, and one needs them in an integral symplectic basis. And for that, and that's the only place where we really use mirror symmetry, we use the known integral structure of the mirror threefold, the fact that we know the intersection numbers on the mirror side. And this is what we get. So the polynomial part, we don't need to pay much attention to it in detail: it's just a cubic polynomial plus dot dot dot, where the z's are the complex structure moduli, and all the stuff in different colors is topological data of the mirror of various kinds: intersection numbers of the mirror, Chern classes of the mirror, the Euler number of the mirror. Okay, but the instanton terms are going to matter, so let's look at these for a minute. The instanton sum is a sum over curve classes in the Mori cone of the mirror. These are two-cycles in the mirror, effective two-cycles in the mirror. And it's a sum over such two-cycles of a number n_q, the genus-zero Gopakumar-Vafa invariant of the mirror, times a trilogarithm of an exponential of the moduli. Okay, so this is slightly complicated. Let me remind you, if you haven't thought about these things much, or much recently, that the sum over all curve classes of a GV invariant times a trilogarithm of this exponential equals the sum over all classes of the Gromov-Witten invariant times just the exponential. So the Gopakumar-Vafa invariants and the Gromov-Witten invariants are related by a resummation that turns exponentials into trilogs of exponentials.
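The resummation just described is the multicover formula: the Gromov-Witten number at degree k collects the GV invariants of all divisors of k, weighted by 1/d^3, which is exactly what expanding Li3(x) = sum over m of x^m/m^3 produces. Here is a one-modulus toy check; the GV numbers below are made up for illustration, not taken from any actual threefold.

```python
from fractions import Fraction

def gw_from_gv(gv, kmax):
    """Multicover formula: GW_k = sum over d dividing k of n_(k/d) / d^3,
    i.e. the coefficient of x^k in sum_q n_q * Li3(x^q)."""
    return {k: sum(Fraction(gv.get(k // d, 0), d ** 3)
                   for d in range(1, k + 1) if k % d == 0)
            for k in range(1, kmax + 1)}

def li3(x, terms=200):
    # Trilogarithm Li3(x) = sum_{m >= 1} x^m / m^3, valid for |x| < 1.
    return sum(x ** m / m ** 3 for m in range(1, terms + 1))

# Hypothetical genus-zero GV invariants for curve degrees 1, 2, 3.
gv = {1: 3, 2: -6, 3: 27}
gw = gw_from_gv(gv, kmax=3)
# gw[2] = -6 + 3/8 = -45/8;  gw[3] = 27 + 3/27 = 244/9

# Numerical check at small x: GV-weighted trilogs agree with
# GW-weighted plain exponentials, up to truncation beyond degree 3.
x = 0.05
lhs = sum(n * li3(x ** q) for q, n in gv.items())
rhs = sum(float(N) * x ** k for k, N in gw.items())
```

Note the GW numbers are rational while the GV numbers are integers; that is one sense in which the GV packaging is "nicer".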
The GV invariants are the nicer packaging for our purposes. They're integers, they're BPS indices, they're good objects to work with, and they'll drive most of this computation. Okay, so how do you go forward to a flux superpotential? Well, the flux superpotential is the integral of G wedge Ω. But if you know the integrals of Ω in a basis, and you know the integrals of G, well, of F and H, in a basis, then you can write this pairing as just an appropriate symplectic dot product of the flux integers and the period vector. The Π vector is the integrals of Ω over a basis, F and H are the integrals of the fluxes over a basis, and Σ is a symplectic matrix that pairs them together. This is how the superpotential is related to the data of quantized fluxes and periods. Okay, so what's our work going to be? We're going to pick fluxes, compute periods, pair them up, and see what we get. Now, the structure we want to exploit here is that the flux superpotential can be expressed as the sum of perturbative and instantonic terms, where by the perturbative flux superpotential I mean the stuff that comes from F_poly, and by the instantonic part I mean the stuff that comes from F_inst. In type IIB string theory, nothing on this slide is quantum mechanical; it's all classical. But viewed from the mirror, in type IIA on the mirror, these instantonic terms are quantum effects, worldsheet instanton effects. That's why I'll keep calling them instantonic, but remember it's just an expansion of a classical result. Okay, so what's our game? Our game is going to be to find fluxes such that the perturbative part of the superpotential is identically zero along one complex direction in the moduli space. We call that a perturbatively flat vacuum. So we're going to try to find perturbatively flat vacua: choose fluxes to get flat vacua.
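In components, the pairing just described reads W = (F - τH)ᵀ · Σ · Π, with Σ the standard symplectic form. A tiny illustration with made-up flux and period vectors (no geometry here, just the linear algebra of the pairing):

```python
def flux_superpotential(f, h, tau, Pi):
    """W = (f - tau*h)^T . Sigma . Pi, where Sigma = [[0, I], [-I, 0]]
    is the symplectic pairing and Pi is the period vector (length 2n)."""
    n = len(Pi) // 2
    g = [fi - tau * hi for fi, hi in zip(f, h)]          # G3 flux quanta
    sigma_Pi = [Pi[n + i] for i in range(n)] + [-Pi[i] for i in range(n)]
    return sum(gi * pi for gi, pi in zip(g, sigma_Pi))

# Toy data: one period pair (n = 1), integer fluxes, tau = i/g_s.
W = flux_superpotential(f=[1, 0], h=[0, 1], tau=2j, Pi=[1, 3 + 1j])
# W = 1*(3+1j) + (-2j)*(-1) = 3 + 3j
```

The actual computation is the same contraction with integer flux vectors of length 2(h^{2,1}+1) and the period vector obtained from the prepotential.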
And then, having done so, we're going to find sub-cases where the remaining instanton terms, and all that will remain will be instantons, form a racetrack. So let's see how that happens; I'll just give you an example. Here's an example where the number of complex structure moduli is five and the number of Kähler moduli is 113. This is what I meant by large numbers: not the five, but the 113. And we find quantized fluxes, here they are, nice reasonable integer fluxes, such that along a particular direction in the moduli space, a direction where the complex structure moduli vector is proportional to the axio-dilaton by this rational vector, the perturbative part of the flux superpotential vanishes exactly. That's just one linear relationship in the six-dimensional joint moduli space of the five complex structure moduli and the axio-dilaton, and it's what I'll call the perturbatively flat direction. Because we're working along one particular direction, I can use τ as my coordinate; you shouldn't see any z's anymore, everything can be expressed in terms of τ. It's a one-dimensional problem, which is nice. So what remain along that direction are type IIA worldsheet instantons, and the leading ones, it turns out when you do the calculation, have GV invariants -2 and 252. So the flux superpotential takes the form: minus 2 times this exponential, plus 252 times that exponential, plus a subleading term. I write this last one, the black one, not because we need it, but because it's the next thing in line, and it's much, much smaller than the ones in red when one is at weak string coupling. So what's with the 729s and 728s? That looks like an absurdly close ratio. Well, that's what we engineered. That's why I chose these fluxes: I chose them such that the exponents here would come out quite close, and they did.
When you solve with this flux superpotential, you find that the moduli are stabilized at string coupling g_s = 0.01, and W_0, the vev of the superpotential, is (2/252)^29, which is about 10^-62. Okay, so very small. John? [Q: Are you dealing with N=1 or N=2 supersymmetry?] Very good. We're dealing with N=1, and you're wondering why I talked about a prepotential, since putting type IIB on a Calabi-Yau threefold gives N=2. It absolutely does. So this is an orientifold of a Calabi-Yau threefold; we're going to have an O3/O7 orientifold. I won't show you exactly where the O3s and O7s are, but they're somewhere specific, and what we'll have to examine, when we start studying the scalar potential, is how big the N=2 to N=1 breaking effects are and how much uncertainty they inject into our knowledge of the scalar potential. So that's quite critical. It turns out that the O-planes can be arranged in a good way, but yes, that's a serious issue. Your point is well taken: all this geometric stuff is N=2 stuff, but we're ultimately talking about an N=1 superpotential and scalar potential. Great, okay, other questions? Yes. [Q: Your orientifold planes, are they localized?] They're honest-to-God orientifolds; the involution is something like the first toric coordinate goes to minus itself. We know exactly where they are and we can check their intersection numbers with other things. [Q: What about their back-reaction?] Absolutely, yeah. Well, the nice thing is that all of my O7-planes will come in SO(8) stacks, four D7-branes on an O7-plane, and so the back-reaction is exceedingly mild there. There's just a minus one in the SL(2,Z) monodromy around them, but there's no net charge: the seven-brane charge is locally canceled, and so life is actually very good.
They're precisely localized. Had we had general seven-brane configurations, one would have had to work quite hard to argue that there aren't substantial corrections. [Q: What is the small parameter here?] Yeah, here it's that, if you look at the terms in red and find the minimum, the minimum occurs at g_s = 0.01, so Im τ = 100, τ is i times 100, so that you get an e^(-2π × 100) type suppression in each term, with the two terms split by the 728 versus 729 in the exponents, and it's profoundly suppressed. Now, if we only knew it was plus dot dot dot, and I couldn't compute the next terms, then you might wonder: what if the next thing is awfully close and has a rapidly growing coefficient or something? And I'll show you stuff about that later for the Kähler potential, where one had to check that. Here, we can just compute as many terms as you want, and it's clearly exponentially controlled. Other questions, comments? Okay, so this is good. In this example, with Hodge numbers (5, 113), the numbers that show up are, I'll call them, the 729s and 728s, right? That's the racetrack that shows up. We find some other examples in the paper, like 34 over 280 and 35 over 280, and stuff like that, or my favorite, 32 over 110 and 33 over 110. And we work these out and test them in full detail in the paper as complete examples. You can look at every piece of them, we give the data in the arXiv posting, et cetera, and now we can easily generate vast numbers of other examples like this. We can generate them at roughly a million per core-hour right now if we want, but we saw no reason to write them up, so they're just piling up on our hard drives. We just wrote up five that we thought were very nice. Here's the example, but if you want more, we can make you lots more.
Okay, so the story so far: there's a general form of the superpotential, and we chose fluxes such that the polynomial term in the flux superpotential vanishes; that's the perturbatively flat condition, and that's the choice of fluxes. Having done that, we then made sure that the Calabi-Yau we had chosen had nice GV invariants, such that the remaining terms in the flux superpotential made a nice racetrack. That brings the superpotential to the form of a racetrack for τ plus a sum of exponentials in the Kähler moduli. So what about the Kähler moduli? Well, we still need to ensure that there are at least h^{1,1} of these terms: if there are h^{1,1} Kähler moduli, there had better be at least h^{1,1} terms in this sum, so at least h^{1,1} of the Pfaffians had better not be zero. We're going to do something stronger, just to be even safer: we're going to make sure that at least h^{1,1} of them are non-zero and are numbers. They're not functions of moduli, so they can't vanish anywhere in the moduli space; they're just non-zero numbers. I won't bore you with that. If anyone's curious, I'd love to talk about it afterward. That was a whole piece of work, being able to check that, but we checked it. Yes, Alessandro. [Q: Sorry, there's something I didn't quite catch. How were you able to handle all those moduli? Are you just better with computers than the people who came before you?] The 113? I'm not better with computers than anybody; I have some students who are. I will explain: that's actually the part we're almost coming to. There's a collection of ideas behind it, and I maybe won't explain it in enough detail here. But let me address the point of whether there was an idea. Roughly speaking, the idea was that general-purpose computational geometry software does not fully exploit the structure in toric varieties.
If you know the structure in toric varieties, and you design specialized computational geometry software that does exploit it, then you can turn problems that were exponentially costly in general-purpose software into polynomially costly ones, and that's what we did. It required some people who were very good with computers, but fundamentally what was required was a structure to exploit, rather than just slightly more efficient code. And that will become clear as I go through this in a minute. So indeed, it's a question of manufacturing. What do we have to do? Begin with an orientifold of a Calabi-Yau. Compute the topology of the divisors; make sure there are enough of them that are what we call pure rigid, that is, that have constant non-zero Pfaffians. If this test fails, reject, and find another Calabi-Yau; there are lots. If it passes, compute the prepotential via mirror symmetry, that's what I've explained, and find quantized fluxes that align with the GV invariants of the mirror to give a racetrack, without going past the consistency condition set by the tadpole. And step four, finding the fluxes, is a search in a lattice: a search in Z^(2 h^{2,1}). So in the example I gave, where h^{2,1} was five, that's a search in a ten-dimensional integer lattice. And that's actually not ridiculously easy. A brute-force search is feasible up to maybe h^{2,1} of five or six on a laptop, and maybe seven to ten on a cluster. One can do better eventually, but this will be enough for now. So what's the setting? We're going to work with mirror pairs of hypersurfaces, X and X-tilde, in toric varieties V and V-tilde, obtained from triangulations of four-dimensional polytopes Δ° and Δ. And there are 473,800,776 four-dimensional reflexive polytopes, as found by Kreuzer and Skarke in 2000. Here I've just shown the 16 two-dimensional reflexive polytopes.
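The flux search in step four is exponentially costly in the lattice dimension, which is why brute force tops out around h^{2,1} of five to ten. Here is a schematic of such a scan; the "flatness" predicate and "tadpole" below are toy stand-ins for illustration only (the real flatness condition is a Diophantine constraint built from the intersection numbers of the mirror):

```python
import itertools

def search_flux_lattice(n, L, is_flat, tadpole, tadpole_bound):
    """Brute-force scan of the flux box [-L, L]^(2n) inside Z^(2n):
    keep vectors satisfying the Diophantine flatness condition while
    respecting the tadpole bound. Cost is (2L+1)^(2n), exponential in
    the number of complex structure moduli n."""
    return [v for v in itertools.product(range(-L, L + 1), repeat=2 * n)
            if tadpole(v) <= tadpole_bound and is_flat(v)]

# Toy stand-ins: n = 2 with fluxes (f1, f2, h1, h2), "flatness"
# modeled as f = 2h, and the "tadpole" as the pairing f.h.
toy_flat = lambda v: v[0] == 2 * v[2] and v[1] == 2 * v[3]
toy_tadpole = lambda v: v[0] * v[2] + v[1] * v[3]
sols = search_flux_lattice(2, 3, toy_flat, toy_tadpole, tadpole_bound=8)
```

At L = 3 and n = 2 this already visits 7^4 = 2401 lattice points; the same box at h^{2,1} = 10 has 7^20, about 8 × 10^16, points, which is the wall the laptop-versus-cluster remark refers to.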
And then if you go about triangulating these things, we proved in 2020 that there could be at most 10^428 Calabi-Yau threefolds resulting from the list, but that'll be enough. So what we did is write a software package, this is with my students Mehmet Demirtas and Andres Rios-Tascon, who did all of the writing of the code, a software package for analyzing Calabi-Yau manifolds, called CYTools. And it's designed to go beyond where pencil-and-paper constructions and other software have gotten stuck. It's purpose-built to analyze triangulations and the associated Calabi-Yau manifolds, especially in the previously unexplored regime where the number of moduli is large. And this has allowed us to access the whole range of threefolds in the Kreuzer-Skarke list. We had a long series of works building up to this, but it eventually bore fruit. So here's the famous plot of Hodge numbers of Calabi-Yau threefolds from the Kreuzer-Skarke list. And the complexity of analyzing, let's say, X, the one for which this axis is h^{1,1}, grows exponentially as you go this way, and the complexity of analyzing its mirror grows exponentially as you go up this way. And it really does grow exponentially: I want to stress that if you work with Sage or Macaulay2 or something like that, most of the stuff you can do stops when any of the Hodge numbers is like four, or maybe ten if you really push it and specialize it. Okay, but these numbers go up to 491 on both axes. So with off-the-shelf stuff, one can work in these very faintly visible shaded bands, but we had to really handle the whole space, learn how to compute GV invariants for the whole list and the like, and we can do that. So where the search for flux vacua is feasible, which is when h^{2,1} is not that big, so the lattice you're searching is not very high-dimensional, h^{1,1} is typically large, like 100 or something.
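As a cartoon of the lattice search described above (step four), here is a brute-force scan over integer flux vectors. The quadratic "tadpole" and the linear constraint below are invented stand-ins for the actual tadpole condition and perturbative-flatness conditions, chosen only to show why the cost grows exponentially with the lattice dimension.

```python
from itertools import product

def search_flux_lattice(dim, bound, tadpole_max, constraint):
    """Brute-force scan of integer vectors q in Z^dim with entries in
    [-bound, bound], keeping those that satisfy a tadpole-like bound
    and a problem-specific constraint.  Cost is (2*bound+1)^dim, which
    is why dimensions beyond ~10 need something cleverer."""
    hits = []
    for q in product(range(-bound, bound + 1), repeat=dim):
        tadpole = sum(x * x for x in q)      # toy stand-in for the D3 tadpole
        if 0 < tadpole <= tadpole_max and constraint(q):
            hits.append(q)
    return hits

# Toy "flatness" constraint: a linear relation among the flux integers.
sols = search_flux_lattice(dim=4, bound=3, tadpole_max=12,
                           constraint=lambda q: q[0] + 2 * q[1] == q[2] + q[3])
print(len(sols), sols[0])
```

A laptop handles dim around 10 or 12 this way, which matches the "five or six moduli on a laptop" statement, since each complex structure modulus contributes two flux integers.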
So we have to do basically everything at large h^{1,1}: the orientifolding, finding rigid divisors, uplifting to F-theory, computing GV invariants, all that stuff. But with CYTools, we can do that. And so we did it: we searched for vacua. So far we've shown that the superpotential takes this form, an exponentially small constant plus a sum of non-zero constants times exponentials, where the c_i are set by dual Coxeter numbers, and this is the leading-order scalar potential; it involves the log of the volume, as a function of the Kähler moduli. Now we want to solve the F-flatness conditions, we're trying to find supersymmetric vacua, and it's easy enough to write down what that condition looks like at leading order in our expansion, but you actually have to solve it. So where in the Kähler cone is the solution, or is there a solution? This actually also required some cleverness with programming. Here's a picture that hasn't fully rendered yet, it's coming in: each polygon in here is a different phase, a different triangulation of a polytope corresponding to a different Calabi-Yau, and you can think of these as different chambers of the extended Kähler cone. We might start, for example, here, and the vacuum might lie there, and you're certainly not going to find it by checking through all of the exponentially many different chambers. But my student thought of a clever way of taking a bearing toward the right point and just marching through the space until you get to the solution. So this is, again, exploiting structure inherent in the problem, rather than just sampling faster or something like that.
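Before getting to the vacuum itself, here is a minimal one-modulus sketch of how a racetrack superpotential produces an exponentially small critical value. The form W(t) = A e^{-a t} - B e^{-b t} follows the racetrack structure described above, but all of the coefficients below are invented for illustration (in the real construction the exponents involve things like dual Coxeter numbers, and there are many moduli).

```python
from math import log, exp, pi

# Toy racetrack: W(t) = A e^{-a t} - B e^{-b t}, with a slightly > b.
# W'(t) = 0 gives t* = log(a*A / (b*B)) / (a - b), a large vev, and
# W(t*) = A e^{-a t*} (1 - a/b) is then much smaller than either term.
A, B = 1.0, 0.9
a, b = 2 * pi / 100, 2 * pi / 102     # illustrative "rank" choices

t_star = log(a * A / (b * B)) / (a - b)
W = lambda t: A * exp(-a * t) - B * exp(-b * t)

print(t_star)          # stabilized modulus vev, large (order 100)
print(abs(W(t_star)))  # exponentially small |W_0|
```

The point of the mechanism is visible in the formula for W(t*): the prefactor (1 - a/b) is small when the two exponents are close, on top of the overall e^{-a t*} suppression, so nothing order-one needs to be tuned away by hand.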
Okay, and you get to a vacuum, and here it is. Here's the vacuum for the same example, the one where I showed you the fluxes and superpotential before. There's a lot of stuff on this slide, but let me just point to the parts that are important: this slide contains all the information one needs to analyze the vacuum. Here's the polytope; each of the columns is a vector in Z^4, so that defines some lattice polytope. Triangulate it, choose fluxes like this, you get this superpotential and this W_0, which I've described before. And when you go ahead and solve, you find that the supersymmetric AdS4 vacuum that results has volume 945 in string units, so not that small, not that big, and the vacuum energy is 10^{-144} in Planck units, and this number, two over 25, to the 29th or to the 58th power, is what's coming in here. And the volumes are decent; the Einstein-frame volumes of things are reasonably sized, the volume of the threefold in Einstein frame is 10^5, so no problem. All right, and maybe a useful way of remembering how we think about this: you can put all the data you need to specify the solution on a T-shirt. You just say, this is the polytope I want, this is a triangulation, these are the fluxes I want. That's enough; it's fully determined. These data, already given, determine for you the GV invariants along a flat direction, and from that you can compute the scalar potential and you can see what things look like, so that's sort of fun. Now we can ask, and what I'll ask... how much time do I have? Five minutes, okay. What's that? Eight minutes, okay, great. So let's talk about control here. The control of the superpotential is actually really good. As I promised at the beginning, it's holomorphic, it's determined by topological data, not that hard.
We've ensured that the Pfaffian prefactors here are non-zero numbers. We can't compute those numbers; Alexandrov, Firat, Kim, Sen, and Stefanski are working on it, and maybe they'll succeed in the near future, but what we showed is that our vacua persist unless these numbers are absurdly small or large. So I think that's okay. We ensured that other instanton contributions to the superpotential, Euclidean D3-branes or Euclidean D(-1)-branes, are negligible. But now let's talk, for the remainder, about the part that's a little bit more interesting, a little bit harder: how do we control the scalar potential? So the Einstein-frame volumes are large, but the string-frame volumes are not that large; they're actually order unity. The reason is that the large parameter making the Einstein-frame volumes big is log of W_0 inverse, and it's the same thing that makes g_string small. So when you multiply them, unfortunately, the parametric largeness cancels, and you get string-frame volumes of order one or two pi or something. Now, the saving grace here is that at weak string coupling, and this comes back to John's question about N=2, all the breaking effects from N=2 to N=1 come from branes, O-planes, and fluxes, and those are all suppressed by factors of g_string. So when g_string is 10^{-2}, to very good approximation the scalar potential one cares about is determined purely by curvature corrections that can be computed in the N=2 theory. And so the scalar potential, to excellent approximation, is given by this kind of expression that you can read off from the mirror: intersection numbers, a constant correction, and then some horrible sum of trilogs and dilogs with GV-invariant prefactors. But the point is, one in principle knows what the whole sum is, if you can compute the GV invariants.
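To give a feel for the trilog-and-dilog structure of such corrections, here is a toy evaluation of a single curve class's contribution, with the polylogarithm computed from its defining series. The schematic combination n*(Li_3(q) + 2*pi*t*Li_2(q)) and the specific numbers used (the invariant 2875, the curve volume) are illustrative assumptions, not the actual expression from the talk.

```python
from math import exp, pi

def polylog(s, z, terms=200):
    """Li_s(z) = sum_{k>=1} z^k / k^s, by direct summation.
    Accurate for 0 < z < 1, which is the regime of interest here
    since z = e^{-2 pi t} with t a positive curve volume."""
    return sum(z ** k / k ** s for k in range(1, terms + 1))

def instanton_term(n, t):
    """Toy worldsheet-instanton correction from one curve class of
    volume t with GV invariant n.  The trilog-plus-dilog structure
    mimics the kind of sum described in the talk; the normalization
    is illustrative only."""
    q = exp(-2 * pi * t)
    return n * (polylog(3, q) + 2 * pi * t * polylog(2, q))

# A quintic-like leading curve: invariant 2875, modest volume.
print(instanton_term(2875, 1.5))
# Doubling the volume suppresses the term enormously.
print(instanton_term(2875, 3.0))
```

Because q = e^{-2 pi t} is tiny at even modest volume, the polylog series converges almost immediately; the delicate question, addressed next, is what happens when the GV invariants themselves grow with the degree.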
But now we're not computing the GV invariants of something with, let's say, a five-dimensional moduli space. We're computing the GV invariants of, let's say, a 113-dimensional moduli space. So now we have to do the GV computation on the big side, which is where it's much harder. And this is related to the only remaining physics point I want to talk about, before a sort of protracted summary: the convergence of the worldsheet instanton sum. So let me define this quantity C_N. N is a counting number, this is a prefactor, this is the GV invariant for N times some curve class q. The q is some curve class we're interested in, and this is the magnitude of the N-th term in the worldsheet instanton sum. So t is a vector of Kähler parameters, and q dot t tells you how suppressed that term is. And this is asking: if I know about an instanton wrapping a curve once, how worried should I be about one wrapping it twice, et cetera? At big enough volume, not very worried, but it depends on how fast the prefactors grow as N grows. So that's what we'd better figure out. The series converges if C_N goes to zero as N goes to infinity, and so we just compute the GV invariants for small curves and we check. For the curves that are big, you're just safe; it's automatically easy. But how about all the small curves? There are some small curves, so we just did it. There are two kinds of curves; we'll call them nilpotent and potent. The nilpotent ones are the ones where the GV invariants become zero after a finite number of terms in the series, and the potent ones are the ones that come in infinite series. Nilpotent curves are actually safely collapsible: when you collapse them to zero size, they give finitely many polylogs, and we include those explicitly in all our calculations; they are incorporated. The potent ones are different: in principle, they keep giving more and more complicated terms.
So we kept searching until, okay, we got bored at the point at which we'd found 1728 rays of potent curves. And so we looked along those rays and asked, this is for one of our five examples, how bad are things? Does the series converge? So let's think about this for a minute. Along multiples of one particular curve, the GV invariants look something like this. So here's one example: they're growing. And then we computed out to 100 times the curve class, and you get something like this. This is certainly the first computation of GV invariants of this kind of degree in this kind of dimension, and we could just go on and on if you want. It would be fun to find some structures to learn about in there. But what you can see, well, you can't see it from this, but if you take logs and plot it in Mathematica or something, you'll see it's an exponential increase with a very stable rate. That's why we wanted to do the computation out that far. But what is the rate? If t is large enough, if the curves are big enough, then the decay here will dominate the growth there. If this is exponentially growing with N, and this is exponentially decaying with N, which it obviously is, because there's the N, then the decay will win. The question is, where does this happen? Well, there's some encouraging work from long ago, from the classic paper of Candelas, de la Ossa, Green, and Parkes on the quintic: if you just work with the first term in the series, the famous 2875, where the GV invariant is 2875, and use that to try to estimate the radius of convergence, you find the radius of convergence is at t = 1.27. And when they did the exact calculation, it was actually 1.2, so not a very big change from taking the leading term. Okay, and here we're not taking just the leading term; we're taking a stupid number of terms. So the question is just: in our vacua, is t large enough or not? Are we inside the radius of convergence or not? And here's the result.
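A crude version of this kind of radius-of-convergence estimate can be reproduced from the quintic's first few genus-zero GV invariants (the standard published numbers, starting with 2875). If n_d grows like e^{c d}, the instanton series sum over d of n_d e^{-2 pi d t} converges for t > c/(2 pi), so successive ratios of consecutive invariants give rough estimates of the critical t. This is only an editorial sketch; the true radius is set by the asymptotic growth rate, not these first few ratios.

```python
from math import log, pi

# Genus-zero GV invariants n_d of the quintic for degrees d = 1..5.
n = [2875, 609250, 317206375, 242467530000, 229305888887625]

# Successive-ratio estimates of c/(2 pi): if n_d ~ e^{c d}, then
# log(n_{d+1}/n_d) -> c, and the series converges for t > c/(2 pi).
estimates = [log(n[d + 1] / n[d]) / (2 * pi) for d in range(len(n) - 1)]
print(estimates)   # creeps upward toward the true critical radius
```

Even these few terms land in the right neighborhood of t of order one, which is why, in the talk's examples, the computed Kähler parameter vevs had to be compared against exactly this kind of threshold.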
What I'm plotting here is log of C_N versus N. And you'll note that they're all lines; this is a log plot, so since they're lines, that means they're exponentially decaying, and they're all angled downward. There's a histogram of the slopes of the lines, really. And what it's showing you is that it's not as if we failed to detect a huge population here; we think we really caught all of them, and there just aren't any that have positive slope. So since all the lines have negative slope, that means every curve that we could find was either safe automatically, or it corresponded to a convergent series expansion that we can compute and incorporate. And this is why I claim we understand the corrections to the scalar potential. These are only the curvature corrections; there are other corrections, but the other ones are suppressed by explicit factors of g_string, which is 10^{-2} in this case. And so I really think these are under extremely good control. We calculated everything in the system except for the Pfaffian prefactors, which we only showed are non-zero. The largest correction that we were able to find was just of order 10^{-5}. Okay, so let me comment on why this was feasible, and then this comes back to some relationships to Mike's work, and then close. The general expectation is that finding small vacuum energy should cost you compute time, or effort or something, of one over that number, the vacuum energy. So in an N-dimensional Bousso-Polchinski flux landscape, there are vacua generically with vacuum energy 10^{-N}, but you expect to have to search through 10^N things to find them. They're rare, and you have to search one over the rarity to get one. So this is a general and reasonable expectation. But what we have, in the example I keep showing you, is h^{2,1} of order five and h^{1,1} of order 100; I showed you (5, 113). And the flux landscape is low-dimensional.
You know, it was a 10-dimensional flux lattice. So what's going on? How can we succeed at all in finding 10^{-123} or something in a 10-dimensional lattice? That shouldn't have been possible; it's certainly not a 10-to-the-minus-lattice-dimension effect. So what's going on? Well, we're doing things differently, right? In a Bousso-Polchinski-type picture, what one is doing is fine-tuning a vast number of order-one terms to high precision. So you imagine, here q is a vector of, say, flux integers, and C_ij is some quadratic form, and what you imagine doing is fine-tuning the value of that quadratic form, pairing integer vectors together, against some given constant negative thing. And if you take this quadratic form to have some M_Planck^4-scale value, but very precisely tuned to almost exactly cancel this, you can get some exponentially small thing, but that will cost you; that will be hard to do. In our construction, we're not doing that. Rather, we're striking out all the perturbative contributions from the beginning, and that step is exact because of the explicit choice of flux quanta. So all we're really doing is balancing exponentially small terms against each other, not balancing a whole lot of terms against some order-one term, and that's why it's exponentially easier this way. Now, it's certainly not the case that all vacua with small vacuum energy in the IIB flux landscape are of this kind. In fact, ours are a rare subset, but they're a numerous enough subset that they exist and can be found. And the more general ones that are exponentially hard to find, we shall see when people manage to find them. So now, I keep saying that the effort involved is polynomial, but coming back to Alessandro's question, some computational advances were required in order to enumerate the integers in the first place, in order to be able to do the manipulations of the integers.
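The "one over the rarity" expectation can be seen directly in a toy Bousso-Polchinski model: the fraction of flux choices with |Lambda| below eps scales roughly like eps, so brute scanning costs of order 1/eps tries. Everything below, the dimension, the coefficients, the flux range, the offset, is invented for illustration.

```python
import random

# Toy Bousso-Polchinski landscape: Lambda = sum_i c_i q_i^2 - Lambda0
# over an N-dimensional flux lattice, with random order-0.01 coefficients.
rng = random.Random(0)
n_dim, lambda0 = 6, 35.0
c = [0.01 + 0.02 * rng.random() for _ in range(n_dim)]

def lam(q):
    """Vacuum energy of flux vector q in the toy model."""
    return sum(ci * qi * qi for ci, qi in zip(c, q)) - lambda0

# Sample random flux vectors once, then count how many land within eps
# of zero: the count shrinks roughly in proportion to eps.
samples = [lam([rng.randint(-30, 30) for _ in range(n_dim)])
           for _ in range(200_000)]
for eps in (1e-1, 1e-2, 1e-3):
    hits = sum(abs(l) < eps for l in samples)
    print(eps, hits)
```

In this generic picture, an exponentially small Lambda requires exponentially many tries; the construction described in the talk sidesteps that by making the cancellation of the order-one pieces exact from the start, so that only exponentially small terms are ever balanced against each other.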
And so we had to be able to construct orientifolds, compute GV invariants and Kähler cones, uplift to F-theory, enumerate floppable curves, all these things, at very large Hodge numbers, and then choose quantized fluxes giving small W_0. That was a Diophantine problem that we had to solve, and we had to do so automatically, on a very large scale. But our software is sufficient for the task, and so we were able to do it. And then, having done that, with those data in hand, we can work polynomially hard and find an exponentially small CC. So the final answer is, if you think back to putting all the data on one slide, on a T-shirt or something: it's expressed in terms of integers, and you can verify an awful lot of it by hand. So in conclusion, we've given explicit constructions of supersymmetric AdS4 vacua in compactifications of type IIB string theory on Calabi-Yau threefold orientifolds. The stabilization is at weak string coupling, large complex structure, large Einstein-frame volume. These are very heavily tested; we judge them to be quite robust. And they're incarnations of the KKLT scenario, although with some special structures. Our claim, then, is that supersymmetric KKLT vacua are part of the string landscape. The mechanism we used for small W_0 led to exponentially small values of the vacuum energy. Because the search is automated, large-scale studies are possible; we're engaged in some, and more results will come out. The uplift to de Sitter space is very much a question for the future. Happy birthday, Mike. So, any questions after this beautiful talk? Costas? Yeah, that's very impressive, but I have a very simple question. Can you show us the loophole in the swampland arguments that would argue against this? And let me make it more precise. Are you sure? I mean, you have very nice control of the vacuum, but what about the spectrum?
Could you have very light stuff popping up, towers, you know, which would show you that there's some hidden KK scale that is very low? You mentioned extremely light excitations, but then you didn't look at the towers. Yes, the light states, you know, the low scale of supersymmetry breaking, et cetera; those things don't come in dangerously. There are fields at 10^{-33} electron volts or something, so that makes it phenomenologically pointless, but it doesn't render it inconsistent. The Kaluza-Klein scale is a little bit below the Planck scale. Now, let me respond to your question by saying I think we should have a discussion afterward about general swampland arguments, unless you want to relate a specific one. To the best of my knowledge, the claims that would say such things can't happen are of the form: with simple ingredients it doesn't happen, in asymptotic regimes it doesn't happen, therefore it will never happen. And we didn't work in asymptotic regimes, and we certainly did not only use simple ingredients, and so I claim it did happen. Now, a variant of your question, setting aside whether anyone might have written a paper claiming this shouldn't have worked: you could ask me, what am I most worried about in this construction? And there I have a very clear answer. I'm most worried about corrections to the scalar potential from N=2 to N=1 breaking effects. The descending lines I showed you were the N=2 curvature corrections to what becomes the scalar potential; very nice, those are safe. But what about the effects from localized branes? Well, those are controlled by factors of the string coupling. But if I try to find vacua, for example, where g_string is a quarter or something like that, you know, I really don't know.
And you might worry that the three-loop correction to the scalar potential has an anomalously large coefficient and destabilizes some of these things. That's the level of worry we have to get to. It's only in that case that I can see that these vacua could be invalidated. But you don't worry about some cycle becoming extremely small and generating towers of light states? Oh yeah, we worried very much. So what kinds of towers would they generate? Where would we see that? Well, we would see it in here: you would see an instanton series come down, and you'd get killed by it. And so what we had to do, upon finding... I showed you that multicolored plot, where you see a point in the Kähler moduli space. Once you get to a point in the Kähler moduli space, you know the vevs of the t's. You then have to take this formula, knowing the GV invariants, and ask: at that point in moduli space, am I killed by an instanton series that has come down, or not? If we didn't have the ability to compute the GV invariants on the hard side, where the dimension is large, we would have had to say, eh, probably killed, in fact. I mean, I was 50-50 before we did it, really. I had no idea whether we could possibly be safe in that regime, but we just checked, and it turned out that we were, in fact, inside the radius of convergence for those series. And, look, g_string is really small, so you're not killed by (p,q) strings or some worse stuff. The worldsheet instantons are the worst thing. The only thing we can be killed by, I claim, is not an instanton at all; it's a loop correction to the scalar potential with an anomalously large coefficient. Do you know that the diameter of the manifold isn't getting big? I mean, does that literally follow? Yeah, the diameter of the manifold, let me show you the sizes here. So the volume is decent, but there's a sixth power in there.
So the diameter is not very big. The volume is 10 to the second. Unless it's anisotropic in some way, if you make that simple relation. Let's see, wait, where am I assuming the vevs of the t's? Well, the sixth root of the volume is a lower bound on the diameter, unless it's anisotropic, of course. Oh yeah, sure. Sure, yeah, I'm not saying I know the diameter, but are you worried? There's no strong reason to think it, but it is a possible loophole. In which term in the effective action would you expect to see an anisotropy? You're getting a KK tower, you know; it's not something that shows up directly in what you've done. Right, so this space is anisotropic in the following simple sense: some of the curves are very small. It might not be completely satisfactory to say, well, look, we found one kind of anisotropy, maybe that's the only one that's present, but we did find one kind of anisotropy here, which is that it's not just the case that the four-cycle volumes to the three-halves power give the total volume. All the little bits are small, and the total space is decent in size; the curves are relatively small. If you look at just the polytope, is it kind of isotropic or anisotropic, just naively looking at the polytope? It's a pretty decent polytope. The things that are unusual about the polytopes that give good vacua are only that they have a sizable tadpole. Right, but I mean, the polytope does have a shape, right? Yeah, this one... we've certainly tried very hard, but not very systematically, to learn. Is there something one can say about the diameter just from simple considerations? I don't think one could literally compute it, but there might be more elongated bits that show that it's anisotropic, because that's the only loophole that comes to my mind.
I mean, there are these other arguments in this most recent swampland paper, about the question of whether there are dual gauge theories to these constructions, and I have a conjectural idea of what's wrong with those arguments too, but that's another discussion. Okay, John had a question. So it's very impressive that you can get such a small cosmological constant. It's negative, right? So why is it relevant to KKLT, in that case? This is an example of the supersymmetric step in KKLT. In the KKLT construction, what they do is first find an N=1 supersymmetric AdS4 vacuum, and then they uplift it. So in terms of an overall modulus, let's say the real part of T, the first step is to find an N=1 AdS4 vacuum, and that's coming from the data that I wrote here, from a flux superpotential and a sum of non-perturbative terms for the Kähler moduli. And then what happened in the KKLT paper is they added an anti-brane and found that they got a new minimum here. Now, let's talk about de Sitter briefly. There's no reason known to me that it should be impossible to do this second step. However, it's going to involve a nightmarish change of technology, because the machinery I'm using here is consistently exploiting integer data, holomorphic stuff. You have to switch over to PDE land, because to really argue that you understand an anti-brane configuration, you have to find a warped throat. We did that: we wrote a paper showing you can find warped throats with the right hierarchies. That's actually settled. And other people, the group of Blumenhagen, did the same thing by slightly different means, and they agree. But then, if you put an anti-brane in, you have to argue that the backreaction is very well controlled and doesn't disrupt anything else. I just have no confidence yet that we can demonstrate that. I don't see a reason why it shouldn't work, but that's not the same as saying I can do it.
But one doesn't have to try it that way, right? You could just try to find supersymmetry-breaking effects in the same EFT. And that's what we're trying to do. So that would be like KKLT, but not. It would be getting to some picture that looks like this without ever adding an anti-brane. And now, on this question of the sign of the CC: in these examples, the CC is negative. I don't see any reason whatsoever that would prevent us, and I'm not saying I can do it yet, because we haven't yet succeeded, but I don't see why we shouldn't be able, in 2022 or 2023, to write a paper with a very small, maybe not 10^{-122}, but a very small positive CC, but with a SUSY-breaking scale such that the gravitino mass is 10^{-33} electron volts. So on phenomenological grounds it's sort of silly; it's not solving the real problem, it's never going to solve the real problem. Completely unrealistic. Yeah, yeah. And it's baked into the construction that this is the kind of vacuum that is exponentially easier to find. So I was talking about these two kinds of things, right. Either you're canceling some perturbative things against each other, that's what I would call a generic solution, and if you look really deep, 10^{123} deep in there, you might find a really small positive CC and large SUSY breaking, and that would be great, but of course you're going to have to dig very deep. Or the things I've described, which are much easier to find, but which will never succeed in having both large SUSY breaking and an exponentially small positive CC. So we're not trying that. But I think it would be fun to exhibit a de Sitter vacuum in this context, even if it has an unrealistic value of the CC, just to say: look, it exists, these are its properties, one can go play with it. And once we have one, I would suppose we'll eventually just be able to produce millions at a click. Okay, so maybe we have time for one last short question. If any. No?
Okay, so let's... thanks, Liam.