Okay, yeah, thank you very much. Thanks to the organizers for inviting me. I really appreciate the opportunity to tell you about something brand new. In fact, I have never talked about this before, so it's not going to be very polished, I have to say. And it's also very much work in progress. So this is work together with Mustafa Amin. People told me Mustafa already gave a brilliant talk on this in the parallel session, but I have 35 minutes, so with the benefit of more time, I'm going to give you a few more details than he could have covered back then. So let me start by first telling you my impression of recent CMB data. One thing I've learned from the CMB data is that the universe is remarkably simple. It turns out that, as far as early universe theory is concerned, we only need two numbers to describe all of these fluctuations in the microwave background: an amplitude for the fluctuations and a slight scale dependence. And all the more complicated features that could have arisen, like non-Gaussianity, non-adiabaticity, features in the spectrum and so on, we so far haven't seen. At the same time, the fundamental theories that we write down to try and describe this data are remarkably complex. In string compactifications, you sometimes have hundreds of fields with complex interactions between them, and many scales that often aren't well separated from each other. So what I'd like to ask, broadly, is how the simplicity of the data arises from these complex models that we typically find, okay? Sometimes when these complex models are presented to you, they're presented in the simplest possible way, by trying, as hard as people can, to decouple all of these complications and focus on a sector that's relatively simple to analyze. But that's often a lamppost effect: that's the region where we can calculate, and so we're forcing ourselves into that corner.
So what I'd like to discuss is trying to go far away from that regime, to take this complexity at face value and develop new tools to actually describe it, in a regime that's very far from what's been studied previously, I think. So one thing that this complexity suggests is that we can have non-trivial dynamics during inflation and at reheating. These many fields can interact with each other; there can be many twists and turns in the background field trajectories, both during inflation and in their approach to the global minimum at reheating. And one way of modeling, broadly, this kind of dynamics is by allowing all the couplings of the different fields to have a complicated time dependence, okay? So there can be spikes and non-adiabatic events in these evolutions. What I've indicated here is a time-dependent mass for one of the fields, but other couplings could be evolving in a similar way. So the idea is that you have these non-adiabatic events at random times and with maybe random strengths, okay? There might be some probability distribution from the underlying theory that describes the statistics, okay? But the broad picture is something like this. So the question is going to be: how do we compute in situations like that, okay? Now, to a condensed matter physicist, such a picture is in fact very familiar, because they have seen it before in wires. Current conduction in wires is described by a random potential that's created by impurities inside of a wire, and it looks something like this, okay? There also you have these impurities, at random locations and with random strengths, that create a potential for the electron wave function inside of the wire, okay? So the qualitative picture, if you just exchange space with time, is very similar. And I'm going to show you that this is more than just a cheap analogy, okay? It's in fact a mathematical equivalence.
You can map the equations one-to-one to each other. And because you can map the equations, you might wonder if you can also map phenomena, okay? The same equations should have the same solutions. So what we're going to ask is: is there an analogy or an equivalence, not just in the equations, but also in the phenomena? One very important phenomenon in the context of these disordered wires is Anderson localization. And in fact, Anderson localization is a strikingly strong and universal result in one dimension. In higher dimensions, whether the wave function inside of a wire localizes or not depends on the strength of the impurities. It turns out that in one dimension, no matter how small the impurities are, there will always be an exponential localization of the wave function of the electron inside of the wire, at zero temperature at least. So at zero temperature, all one-dimensional wires are insulators, yeah? To an exponentially good accuracy. But because we are mapping space to time, and we only have one dimension of time, we're going to be precisely in that situation: we're describing the analog of a one-dimensional wire, and we should expect to find something equivalent to Anderson localization. And what I'm going to show you is that Anderson localization in the spatial case maps to exponential particle production in time, okay? And you might then also ask: this is a very universal result, and the reason it's universal is that you have a lot of complexity inside of the wire, so does that complexity map to universality in the time-dependent case as well, okay? We haven't completely answered that question yet, but I'm going to show you some hints of that towards the end. Okay, so this is going to be my outline. First I want to flesh out a little bit more this correspondence between Anderson localization and stochastic particle production. I'm going to first show you this for just one field, or a purely one-dimensional wire.
Then we're going to extend this to multiple fields, or wires with a finite thickness, okay? And then I will have some comments at the end on how this could be applied to early universe scenarios, both during inflation and at reheating, and speculate whether the simplicity that we see in the sky could be arising not from a simple underlying theory but from a very complex one, with simplicity actually being an emergent phenomenon. Okay, and I've warned you before that this is work in progress; I warn you again. Okay, of course there's a large literature, both on the condensed matter side and in cosmology. These are some of the papers that we found very useful. I've learned a lot from those papers and others, and as I go through, I will highlight some of these papers again when they're relevant. Okay, so let me start showing a little bit more of the details of this correspondence. First of all, let me just remind you about wires, okay? Electrons in wires are described by the time-independent Schrödinger equation. This equation has a potential for the electron wave function as a function of x. We're going to allow this potential to have random impurities, so it has a shape like this. And then the transmission of electrons in the wire is described by a one-dimensional quantum mechanical scattering problem, where there's an incoming electron wave that gets transmitted and reflected at each scattering site. And the superposition of all of those waves will give you the total transmission through the wire, okay? Particle production in cosmology is described by the time-dependent Klein-Gordon equation; at the linearized level, each Fourier mode χ_k obeys a time-dependent oscillator equation. I've written this in conformal time, so all the effects of the expansion of the universe are subsumed in the time dependence of the mass here.
So this time dependence could have a non-adiabatic piece that comes from the Hubble expansion, and it could also have non-adiabatic events of this random form that I've sketched before. But if you stare at these two equations, you can see that they map one-to-one to each other if you exchange space with time and you map the potential to minus the mass squared, okay? And for a non-relativistic electron the energy is just given by the momentum squared, so that mapping applies for each Fourier mode, okay? One slight complication is that you have to actually flip the sign when you go from space to time, because the boundary conditions have changed, okay? And I'm going to explain this in a little bit more detail in a moment, okay? So these two equations really map onto each other if you just do an identification of different variables. I've already said that when you study the collective scattering of many of these electron waves inside of a wire, you get this phenomenon of Anderson localization, where as you're increasing the length of the wire, the resistivity increases exponentially, or the transmission decreases exponentially. And this is a famous paper from more than 50 years ago, by Phil Anderson. So now, if we're flipping space to time with a change in sign, this corresponds to an exponential increase in the mode function for each of these Fourier modes. And so the number density of particles associated with these additional fields will be increasing exponentially with time. So let me explain just a few more details about how we can interpret particle production as a scattering problem. In the spatial case, which I'm showing up here, we have an incoming wave, indicated by this e^{ikx} up there; a fraction of that incoming wave gets transmitted, and a fraction gets reflected, okay?
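Schematically, the dictionary I've been describing looks like this (a sketch in my own notation, with units ℏ = 2m_e = 1 for the electron):

```latex
% Units: \hbar = 2 m_e = 1 for the electron.
\begin{align}
  -\partial_x^2 \psi + V(x)\,\psi &= E\,\psi
  && \text{(wire: time-independent Schr\"odinger)} \\
  -\partial_\tau^2 \chi_k - m^2(\tau)\,\chi_k &= k^2\,\chi_k
  && \text{(cosmology: Klein--Gordon mode, conformal time)}
\end{align}
% Dictionary:  x \leftrightarrow \tau ,\quad
%              V(x) \leftrightarrow -m^2(\tau) ,\quad
%              E \leftrightarrow k^2 .
```

Note the sign flip V ↔ −m², which is the change-of-boundary-conditions subtlety mentioned above.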
And we have a transmission probability that's given simply by the square of the transmission amplitude for this wave, okay? The way I'm mapping this to the time-dependent case is that now I have a positive frequency mode coming in here, and this non-adiabatic event will split this positive frequency mode into a mixture of positive and negative frequencies, with amplitudes that are determined just by the normalization of this incoming wave and these transmission and reflection amplitudes, okay? So then I can define the number density of particles that are produced at a single production event simply as the amplitude squared of the negative frequency component of this wave, okay? And if you do a little bit of algebra, you can then relate the number density at the j-th particle production event to the inverse of the transmission probability that I would have had in the spatial scattering, okay? That's just a correspondence that's going to be relevant for us in a moment, okay? And it shows you already that what we associate with minimal transmission up here corresponds to a maximal amount of particle production. The fact that we can treat particle production as a quantum mechanical scattering problem is, of course, very well known, okay? There are famous papers that describe this. So let me use this picture to give you a heuristic derivation of this phenomenon of Anderson localization. What we want to do now is, instead of studying a single scattering, we want to chain together many of these scattering events. Each scattering can be described by a transfer matrix that simply tells us how we transfer particles from one side of the scatterer to the other side, okay? So let's just look at two of those scattering events first. When we chain these two matrices together, we get a total transmission across the two scatterers in terms of the product of the individual transmissions across each single scattering.
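In formulas, the single-event relation I'm using is the following (conventions assumed; t_j and r_j are the transmission and reflection amplitudes of the j-th scatterer, with |t_j|² + |r_j|² = 1):

```latex
% Bogoliubov coefficients of one event in terms of scattering amplitudes:
% |\alpha_j| = 1/|t_j|, \quad |\beta_j| = |r_j|/|t_j|, so that
\begin{equation}
  n_j \;=\; |\beta_j|^2 \;=\; \frac{|r_j|^2}{|t_j|^2}
      \;=\; \frac{1 - T_j}{T_j} \;=\; \frac{1}{T_j} - 1 ,
  \qquad T_j \equiv |t_j|^2 .
\end{equation}
```

So minimal transmission T_j indeed means maximal particle number n_j, and |α_j|² − |β_j|² = 1 is the usual Bogoliubov normalization.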
And then there's a factor in the denominator that depends on the reflection probabilities at each individual scattering. And in fact, this lower term also depends on the relative phase that is accumulated between the two scattering events, okay? What's important next, to derive this effect, is to average over this relative phase. If we imagine that these impurities are randomly spaced, then the phases will be, on average, random, okay? And when we do average, this phase dependence here will disappear, and what we get is that the logarithm of the total transmission, when averaged over phase, simply becomes the sum of the logarithms of the individual transmission probabilities, okay? So we get this logarithmic addition law, okay? Which, of course, is very useful, because now we want to study not just two scatterings but a large number of them. But the generalization is trivial: we find that the total transmission is simply given by adding up the logarithms of each individual transmission probability, okay? And by exponentiating this result, you get a typical, or most probable, transmission probability, which is exponentially decreasing with the total length of the system, all right? I'll give you a slightly more technical definition of what I mean by typical in a moment. Okay, but because we have already identified how to map from one to the other, we can directly map this result for the total transmission to the total number density. And since the total number density depends inversely on the total transmission, this maps to an exponentially increasing number density as a function of time, okay? And what I've defined here, which is going to be important, is the exponent that tells us the rate of increase as a function of time, which can physically be identified with a mean scattering rate.
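This logarithmic addition law is easy to check numerically. The sketch below (all parameter values are illustrative choices of mine, not those from the talk) chains delta-function scatterers with random strengths and spacings using 2×2 transfer matrices, and shows that the phase-averaged log-transmission decreases linearly with the number of scatterers, i.e., Anderson localization:

```python
import numpy as np

def delta_transfer(u, k):
    # Transfer matrix across a delta potential u*delta(x) at wavenumber k,
    # acting on (right-moving, left-moving) amplitudes; det(M) = 1.
    a = u / (2j * k)
    return np.array([[1 + a, a], [-a, 1 - a]])

def total_transmission(us, ds, k):
    # Chain scatterers of strengths `us` separated by free flights `ds`.
    M = np.eye(2, dtype=complex)
    for u, d in zip(us, ds):
        P = np.diag([np.exp(1j * k * d), np.exp(-1j * k * d)])  # free propagation
        M = delta_transfer(u, k) @ P @ M
    # Scattering boundary conditions give t = 1/M[1,1] (up to a phase).
    return 1.0 / abs(M[1, 1]) ** 2

def mean_log_T(n_scatter, k=1.0, n_real=300, seed=0):
    # Average ln(T) over an ensemble of disorder realizations.
    rng = np.random.default_rng(seed)
    logs = []
    for _ in range(n_real):
        us = rng.uniform(0.2, 0.6, n_scatter)   # random strengths
        ds = rng.exponential(5.0, n_scatter)    # random spacings -> random phases
        logs.append(np.log(total_transmission(us, ds, k)))
    return np.mean(logs)

l50, l200 = mean_log_T(50), mean_log_T(200, seed=1)
print(l50, l200)  # both negative; the second roughly four times the first
```

Because ⟨ln T⟩ scales linearly with the number of scatterers, the typical transmission exp⟨ln T⟩ decays exponentially with length, which under the dictionary becomes exponential growth of the particle number.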
So here Δτ is the mean distance in time between the different scatterers, while the mean of the logarithm of the individual transmission characterizes, microscopically, the average strength of each of the scattering events, okay? So this was heuristic; I'm going to show a little bit more detail in a moment. And in fact, we're going to derive a complete statistical description of this process. I'm going to show you, basically, a complete understanding of not just the mean evolution of the number density, but also its variance and higher moments, okay? Let me just illustrate this first with a plot. What I've shown here is one realization of this process, where we have simulated this random sequence of particle production events, just drawn from some distribution with some strength and some mean distance between the scatterers. And then, of course, numerically you can just solve the evolution equations, and you get this trajectory of particle production. Notice that this is a logarithmic scale on the y axis, so this is really exponential growth in the occupation number per mode. You can do this many times, so you simulate many different members of this ensemble and you'll get different trajectories. And you see, of course, that these trajectories trace out some kind of random walk as a function of time, okay? And in fact, this can be made precise. You can derive a Fokker-Planck equation that describes the probability distribution of the number of produced particles as a function of time, okay? So if I take a snapshot at a final moment in time and ask what's the distribution of the number of particles per k mode, that distribution has to satisfy this Fokker-Planck equation, okay? So in principle, this Fokker-Planck equation statistically summarizes the system. It turns out, actually, that in 1D this Fokker-Planck equation, kind of remarkably, is exactly solvable, okay? But the solution is not very illuminating, so I won't show it to you.
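To make this concrete, here is a minimal simulation of the kind of trajectory I'm describing, assuming the simplest model of delta-function non-adiabatic events with random strengths and random arrival times (all parameter values are illustrative, not the ones behind the actual plots). Between kicks, the mode oscillates freely; at each kick, χ′ jumps by −u·χ:

```python
import numpy as np

def occupation(chi, dchi, k):
    # n_k from the mode-function energy, with the vacuum half subtracted.
    return (abs(dchi) ** 2 + k ** 2 * abs(chi) ** 2) / (2 * k) - 0.5

def realization(n_events, k=1.0, seed=0):
    # One realization of chi'' + (k^2 + sum_j u_j delta(t - t_j)) chi = 0.
    rng = np.random.default_rng(seed)
    chi, dchi = 1 / np.sqrt(2 * k), -1j * k / np.sqrt(2 * k)  # vacuum mode
    ns = []
    for _ in range(n_events):
        d = rng.exponential(5.0)                  # random waiting time
        c, s = np.cos(k * d), np.sin(k * d)       # free evolution between kicks
        chi, dchi = chi * c + dchi * s / k, -k * chi * s + dchi * c
        dchi -= rng.uniform(0.5, 1.5) * chi       # delta-function kick
        ns.append(occupation(chi, dchi, k))
    return np.array(ns)

ns = realization(100)
mean_log_n = np.mean([np.log(realization(100, seed=s)[-1]) for s in range(5)])
print(ns[-1], mean_log_n)  # exponentially large occupation, log growing ~linearly
```

Plotting ln(n) against the event number for several seeds reproduces the random-walk-with-drift picture from the slide.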
Instead, what's more convenient is to look at moments of this Fokker-Planck equation. So instead of looking at the single equation, you can look at a hierarchy of coupled equations for each of the moments: there's a time evolution of the mean of the number density, there's a time evolution of the variance, okay? Those two equations can be solved, and that gives you a rough characterization of the distribution function for this number density. Two quantities that will be useful for characterizing the shape of that distribution are the mean of the number density as a function of time, and also the most probable, or typical, number density, which is defined as the exponentiation of the mean of the logarithm of the number density, okay? What you should be noticing here is that the mean of the number density is in fact growing faster with time than the most probable value, okay? Which is just a reflection of the fact that this is a very skewed distribution: the mean is more sensitive to rare fluctuations away from the most probable value. In fact, at late times, unsurprisingly, you can show that this distribution approaches a log-normal distribution, because we had this law of simply adding the logarithms of transmissions, or number densities. So the distribution for the logs will be Gaussian, and so the distribution for the number density will be log-normal. It's quite a skewed distribution, where the most probable value is smaller than the mean value, okay? You can characterize this further by actually deriving a solution for the variance, or for higher moments of this distribution, as well. In fact, you can test this. This is not a fit; this is just showing you how the typical number density of particles produced compares with the simulation of all of these random trajectories. Okay, this was the single-field case. So let me show you a few more details of how this works when you go to higher-dimensional situations.
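The gap between the mean and the typical value is easy to see in a toy model where ln(n) performs a Gaussian random walk, which is exactly the late-time log-normal behavior just described (the drift and step size below are arbitrary illustrative numbers):

```python
import numpy as np

# ln(n) takes one Gaussian step per scattering, so n is log-normal at late times.
rng = np.random.default_rng(0)
log_n = rng.normal(loc=0.1, scale=0.2, size=(20000, 100)).sum(axis=1)

mean_n = np.exp(log_n).mean()       # <n>: dominated by rare large excursions
typical_n = np.exp(log_n.mean())    # exp(<ln n>): the most probable scale
print(mean_n / typical_n)           # noticeably larger than 1
```

For a log-normal, ⟨n⟩ = exp(μ + σ²/2) while the typical value is exp(μ), so the ratio grows with the accumulated variance: the mean is pulled up by rare realizations, exactly as on the slide.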
First of all, of course, let me point out the obvious fact that real wires are not one-dimensional. Real wires have a finite thickness, and the finite thickness of a wire means that you can now excite the transverse modes of the electron wave function, okay? These will be quantized, and so at finite energy you will have a finite number of transverse momentum eigenstates for the electron wave function, and those correspond to multiple channels of transmission, okay? So the longitudinal transmission along a wire can be described by transmission in multiple channels, and it turns out that that system of transmission in multiple channels maps precisely to the dynamics of multiple fields in the time-dependent case, okay? Again, there's a one-to-one correspondence. Here I've shown it schematically, because in fact the multi-field evolution could be a little bit more complicated than just being described by a mass matrix, but what's going to be important for us is just that there's a linear mapping from some initial state, which has no particles or a few particles in each field, to some final state where you have produced some particles. Any linear mapping will be described by this type of formalism. So the idea is the same as before. In the time-dependent case, you start with a vector. This vector summarizes the amplitudes in the positive and negative frequency states of the system at early times, and then, as the system evolves in time, you produce particles in the negative frequency modes until you reach a final state, given here. And again, each of these production events is described by a transfer matrix. Now this transfer matrix is high-dimensional. Previously this was a two-by-two matrix, because it had to encode the complex transmission and reflection amplitudes.
Now it's 2N_f by 2N_f dimensional, because it has to encode the transmission for each of the N_f channels, okay? So all I have to do to generalize from a single field to multiple fields is to enlarge the space of these matrices. Then, again, you chain these matrices together to get a total transmission, okay? So this is just a product of random matrices, one for each particle production event. And given that total transfer matrix, you can define a matrix for the total transmission probability, and you can define a matrix for the total number density. This matrix has several entries, because it describes the number density produced in each of the different fields. As before, there will be a Fokker-Planck equation that describes the evolution of, now, the eigenvalues of this number density matrix as a function of time, okay? So there's a joint probability distribution for the number of particles produced in each of the different fields. There's again a scale which describes the mean scattering rate, which is now averaged over the different fields. But again, this is just an equation that summarizes the entire statistical information about this system. For this equation, I actually don't know that there's an explicit solution, but again we can do this trick and just look at the hierarchy of moments of the number density. The reason that this is a useful trick is that it turns this PDE into a coupled set of ODEs, okay, which are easier to deal with. This time it turns out that we have three coupled equations that close the system, okay? So there's an equation for the evolution of the mean number density, an equation for the square of the number density, as before, and then there's a third quantity that arises, which is the sum of the squares of the eigenvalues, which also appears in source terms here, for example, okay?
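For orientation, in one common convention the 2N_f-dimensional transfer matrix has the block (Bogoliubov) form below; the exact constraints depend on conventions, so take this as a sketch rather than the precise form on the slide:

```latex
% Multichannel transfer matrix acting on (beta^*, alpha) amplitude vectors:
\begin{equation}
  M_j \;=\; \begin{pmatrix} \alpha_j & \beta_j^{*} \\[2pt]
                            \beta_j & \alpha_j^{*} \end{pmatrix},
  \qquad
  \alpha_j \alpha_j^{\dagger} - \beta_j \beta_j^{\dagger} = \mathbb{1},
  \qquad
  \alpha_j \beta_j^{T} = \beta_j \alpha_j^{T},
\end{equation}
% where alpha_j, beta_j are N_f x N_f blocks; the constraints are the
% multichannel analogue of |alpha|^2 - |beta|^2 = 1.
```

For N_f = 1 this reduces to the 2×2 matrix of the single-field case.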
So we need three equations to close the system, but these three equations, in fact, kind of remarkably, have an exact solution again. Here I've shown you just some simple limits of that solution. As before, we get an evolution of the mean number density, or total number density of particles; this is now summed over all of the different channels. It actually takes a very simple form: it's the same as before, except we have N_f copies of that number density, so it's simply the sum of the numbers of particles produced in each of the fields. And we have expressions for the variance and higher moments of this distribution function as well. Okay, you can also identify, as before, the most probable value of that distribution. In this case, you have to look at the evolution of the expectation value of the logarithm of the number density. Here we haven't found an exact solution, but you can find the solution in the regime of interest, which is at late times: the asymptotic behavior of the system at sufficiently late times, again, will become exponential, and it has a slight dependence in the exponent on the number of fields, okay? So these kinds of universal results are interesting, I find, because they only depend on two parameters: some mean scattering rate, and a weak dependence on the total number of fields. Okay. So let me make a few, this is a slight digression, let me make a few comments about an alternative way of describing the system, which is using some very powerful results from random matrix theory. What I've shown you just now was that you can describe the system, in fact, as a product of random transfer matrices that describe how you go from the state before a scattering to the state after the scattering. And there are two large-N limits that, in principle, help us to simplify this analysis statistically, okay?
The first large N is that, in principle, there could be a large number of fields, so this transfer matrix could be high-dimensional, okay? And then random matrix theory would give us information about the spectrum of eigenvalues of this matrix, okay? This would feed directly into this rate of growth, μ, that I was showing you; you would get a probability distribution, in principle, for that rate of particle production. This is something we haven't explored very much yet, so I want to focus on the second feature of random matrix theory. The second large N is that, in principle, you could have a large number of scatterings, okay? And when you have a large number of scatterings, then there's a famous result, again in random matrix theory, that tells you that the product becomes less and less random over time. This is a version, in fact, of the central limit theorem: the eigenvalues of each matrix here are random, but when you chain very many of them together, the eigenvalues of the final matrix that you get become less and less random, okay? So in fact, there's a theorem that tells us that if you go to the limit of an infinite number of scatterings, then the eigenvalues of this product of transfer matrices become non-random and, in fact, exponentially increasing in the number of scatterings, with an exponent that's independent of the realization. So it proves to you that you get purely exponential growth, just like we were finding by solving the Fokker-Planck equation, but now from a slightly more high-brow mathematical perspective, okay? The associated growth rates are also called Lyapunov exponents, and there are famous analogies with other areas of physics. Okay, so then, in the last five minutes, let me give you a brief outlook on how all of this machinery and formalism, we think, can in principle be applied to many interesting situations.
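Here is a small numerical illustration of this self-averaging (Furstenberg-type) behavior, using the same kind of 2×2 delta-scatterer transfer matrices as before; the parameters are illustrative choices of mine:

```python
import numpy as np

def lyapunov(n_steps, k=1.0, seed=0):
    # Growth rate of a product of random transfer matrices:
    # delta scatterers with random strengths, random free flights in between.
    rng = np.random.default_rng(seed)
    v = np.array([1.0 + 0j, 0.0])
    log_norm = 0.0
    for _ in range(n_steps):
        u, d = rng.uniform(0.2, 0.6), rng.exponential(5.0)
        a = u / (2j * k)
        M = np.array([[1 + a, a], [-a, 1 - a]])                 # scatterer
        P = np.diag([np.exp(1j * k * d), np.exp(-1j * k * d)])  # propagation
        v = M @ (P @ v)
        norm = np.linalg.norm(v)
        log_norm += np.log(norm)
        v /= norm                       # renormalize to avoid overflow
    return log_norm / n_steps           # converges to a non-random exponent

g1, g2 = lyapunov(20000, seed=1), lyapunov(20000, seed=2)
print(g1, g2)  # nearly equal: the Lyapunov exponent is self-averaging
```

Different disorder realizations give essentially the same positive exponent, which is the random-matrix-theory statement of purely exponential growth.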
First let me highlight again that we have seen some hints of universality emerging just in the number density of particles being produced, because we have found that the number density as a function of time isn't a very complicated function; it's this simple exponential, with a growth rate that will depend on the microscopic details, but all of the statistics of this distribution of the numbers of particles produced are characterized by just two parameters: a mean scattering rate, and possibly a number of fields. So from the microscopic point of view, that's the only information you have to feed the system to predict its future evolution. What remains to be seen is how that very simple characterization of the number of particles produced is reflected in any cosmological observables, okay? So far we have treated the first step of this process, where, through some complicated background dynamics, we have generated statistics for the numbers of particles produced. What you'd like to do now is take this input and ask, in specific situations, how does that source the things that we observe, namely correlators of curvature perturbations? I'm just going to highlight two obvious applications. For the first one, the basic scheme has been beautifully explained in this paper by Diana López Nacir, Rafael Porto, and collaborators. What they have shown is how you take such a source of particles being produced in the early universe and calculate the back reaction onto primordial curvature perturbations. They've also explained how there are two different types of sources: you can get a stochastic source, which arises even in the absence of a preexisting curvature perturbation, and basically this is what we've computed so far.
And then there's a second effect if you do have a preexisting curvature fluctuation: there's a linear response that changes the solution locally for the number of particles being produced. You would then have to feed that linear response back into this evolution equation to determine the effect on the statistics of the curvature perturbation. So if you want to learn more about this, I recommend this paper; it's been beautifully explained there. We're now trying to use their formulas to see what these results that we have found for the statistics on this side of the equation imply for the statistics of the final observables. The second application is reheating. What people usually do in reheating is study very simple systems, where you have an oscillation of a field around the quadratic minimum of a potential, and they couple it to some daughter fields that then reheat the universe. I think the formalism that we've developed allows us to study much more complicated approaches to some minimum, where the field might take many twists and turns as it approaches this final state. Okay, let me maybe skip this. So, conclusions. What I've shown you first was that there's an exact mathematical mapping between the time-dependent Klein-Gordon equation for a single Fourier mode and the time-independent Schrödinger equation in space for the wave function of an electron. That has motivated us to look at a mapping between the different phenomena that you can find in both of these systems. And I think we've only seen the tip of the iceberg. I mean, this literature here is amazingly rich. Okay, I was amazed when I looked at how much people know about wires, okay? And I feel we know much less about this side of the equation, and, by understanding this literature better, we're hoping to transfer many of the insights that were found for wires into cosmology.
And ultimately, as I tried to motivate at the beginning, what we'd like to decide is whether the early universe was something like this or something like this. Is the reason we have simple observations that the theory really is simple at its fundamental level, that there's just a single field, this m² φ², there's a shift symmetry, everything is beautiful? Or is the simplicity an emergent effect, where the underlying dynamics is actually very complicated, but complicated enough that the collective behavior of the system becomes simple again, okay? This will take more time to decide, but we'd like to find, you know, observational distinctions between those two cases. And I'm hoping that our formalism gives a first step to maybe get there. Thanks again. Here we have a question. So actually I have a couple of questions. Do you know how we can incorporate the back reaction? You have a lot of particle production, so, within your framework, this could be related to one of the last slides that you showed. Yeah, that's right. This one, yeah. So I was trying to sketch it here. There could, of course, be two types of back reaction. There will be a zero mode that will renormalize your expansion rate and the background density and so on, okay? That will be the k = 0 solution of this number. Notice that we were solving for these numbers of particles for a single Fourier mode k, okay? The zero mode of that solution would correspond to the renormalization of the homogeneous background. That, of course, has to be taken into account. And then for finite k, we get corrections to perturbations, okay? So those are the two types of back reaction. I was wondering how this is different from standard Floquet theory and the Hill equation and the, okay. Yeah, in standard Floquet theory, of course, what's important is resonances, okay?
There, I mean, for reheating, for example, it's the scenario where you have a deterministic oscillation frequency that resonates with particular modes, and you have these stability and instability bands. And yeah, this is not like that, because it has an intrinsic stochastic contribution in the background dynamics, okay? Which typically actually spoils this Floquet structure, because it spoils these resonance and instability bands, okay? So what you would like to ask here, for example, is: do you still get sufficient growth of particles? You know, how quickly does the universe reheat in a situation like this versus a situation like that, if you don't have these strong resonance bands, okay? The indication is that, as we saw, we still get exponential growth of particles. So as long as this mean scattering rate is large enough, you will still get a large number of particles being produced, but in a very different way. We can talk later. Fair enough. So, any other questions? What kind of functional form do you consider for the autocorrelation function of the mass as a function of time, or for the potential? Okay, yeah, so this wiggly thing that I was sketching. We were usually taking the case where you split the mass into an adiabatic piece, okay, and a stochastic piece, which is sufficiently localized, okay? In the simplest case, you would just study delta functions. If the wavelength of the particles that you're producing is large relative to the size of the scatterers, they won't resolve the fine-grained structure of the scattering event, okay? And then the delta functions will be a good approximation. When the wavelength actually matches the size of the scatterers, then you will become sensitive to it. And there we studied, for example, sech-type potentials, and you could study Gaussians. And you'll get some small imprints of that microscopic structure.
That's where the microscopic information really comes in: in the statistics of these scattering events, okay? And you can play with this, but since, at the end of the day, everything gets summarized in a single number, this exponent μ, okay, there's a lot of degeneracy: different types of microscopic scattering can lead to the same kind of universal particle growth. Okay, so I think we have time for one more question. You talked about the mass depending on time. If we consider the effect of the expanding universe, the mass function should be related to the expansion, but I think you now draw the mass randomly. Can you say that again? I'm sorry. I think you use Gaussian mass variations, but in the expanding universe, how do you treat the time dependence? Yeah, I think it's the same. In the expanding universe, what will be changing, of course, is the mode function; even in the absence of these stochastic events it will be different. And in fact, you get non-adiabatic behavior outside of the horizon. That's how we think about the usual vacuum fluctuations: there's something like particle production when the mode crosses the horizon and you actually violate adiabaticity, okay? But, as I said, this whole formalism is very insensitive to that, because all it cares about is that you have a linear mapping from some initial mode function, which could be, you know, a Bunch-Davies type mode function even in the absence of these effects, to some final state which is given by another mode function with a mixing of positive and negative frequencies, okay? So yeah, that's why I was formulating things in conformal time, where this mass as a function of conformal time does have the Hubble expansion built into it. Inside of the horizon, that additional term will be an adiabatic piece, okay?
So it won't lead to much particle production, and inside of the horizon most of the particle production is coming from these stochastic events. And outside of the horizon there will be a superposition of the things that come from expansion and the things that come from these stochastic localized events, yeah? Okay, so thank you very much. So let's thank the speaker again.