For the last lecture of today's session, we welcome back Dr. Fisher, who will lecture on coevolution in high dimensions. OK, thank you. So again, please put questions in the chat, and I will try to keep an eye on them, and Antonio will prompt me if I haven't seen one. So yesterday we talked about assembled communities, which a lot of speakers have talked about, and particularly about the effects of having many islands with migration between them. What I'm going to talk about first today is how one gets to some of the results that I talked about: the dynamical mean field theory. Unfortunately, I'm not going to be able to go into that in huge detail; it's quite technical and difficult to actually solve the dynamical mean field equations. So I will state what they are, motivate them, and give some heuristics. If you're interested in the details, they're in this PNAS paper with Michael Pearce and Atish Agarwala, who were involved in all of this work. Some of the recent things that I'm going to mention, or touch on but won't really get to, is current work which also involves Aditya Mahadevan, whom you would have met if we were all in Trieste together, since he's at the school's workshop. After that, I'll talk about the robustness of the phase that I showed last time, the spatiotemporally chaotic phase. Then I'll turn to things that are really open and ongoing: asking whether communities like this can evolve, or how they evolve, and then make some brief comments about phenotype models, particularly in the context of bacteria-phage interactions. So first, a reminder of what the model is. We have K closely related strains, labeled i = 1 to K. These all exist on I islands,
alpha = 1, ..., I, the total number of islands, and the islands are all identical. The population on island alpha of strain i is n_i^alpha, and the total per island, the sum over i of n_i^alpha, is equal to big N, which is roughly a constant set by the overall limits of resources. The frequencies we'll use, the fractional abundances, are the nu_i^alpha, which are just the ratio of the population on that island to N. Those are the basic dynamical variables we'll work with. Then there is a small migration rate m between islands. It's small compared to the typical growth rates on the islands, which are of order 1 (or actually 1 over root K); m is small compared to those, and it's actually important that it be relatively small for the behavior of this phase. Then there can be some selective differences between the types: some just overall grow faster or slower than others, the s_i, with some variance sigma_s squared. Mostly I'm going to ignore this, but I'll come back and say some things about it towards the end. The interactions are only within the islands and don't depend on the island: a matrix V_ij. We can also have an interaction of a strain with itself, which could potentially be much stronger, a term minus Q; but since we're interested in closely related strains, where there's no particular reason for that to be stronger, we're going to mostly set it to zero. The crucial part is the V's: the V's are random, so I have to tell you their statistics. The average of the V's is zero, as is the average of the s's, and we set the variance to 1, which just sets the time scale. The V's are independent, except for correlations across the diagonal: in particular, the correlation of the effect of what j does to i with the effect of what i does to j, and that has this parameter gamma.
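As a concrete sketch, here is one way to draw a matrix with these statistics (mean zero, unit-variance off-diagonal entries, corr(V_ij, V_ji) = gamma). The function name and the symmetric/antisymmetric decomposition are my own convenient recipe, not from the lecture:

```python
import numpy as np

def interaction_matrix(K, gamma, rng):
    """Random K x K matrix: entries have mean 0, variance 1,
    zero diagonal, and corr(V[i,j], V[j,i]) = gamma."""
    A = rng.standard_normal((K, K))
    B = rng.standard_normal((K, K))
    S = (A + A.T) / np.sqrt(2.0)   # symmetric part: corr(S_ij, S_ji) = +1
    X = (B - B.T) / np.sqrt(2.0)   # antisymmetric part: corr(X_ij, X_ji) = -1
    V = np.sqrt((1 + gamma) / 2) * S + np.sqrt((1 - gamma) / 2) * X
    np.fill_diagonal(V, 0.0)       # no self-interaction (the Q = 0 case)
    return V
```

Mixing a fully symmetric and a fully antisymmetric Gaussian matrix with weights (1+gamma)/2 and (1-gamma)/2 interpolates continuously between gamma = +1 (symmetric) and gamma = -1 (antisymmetric), with gamma = -0.8 being the canonical value used in the lecture's simulations.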
We're going to particularly focus on gamma negative, which is motivated by the predator-prey context, though I'll say something about things more generally. So we're looking at these anti-symmetric correlations, gamma in this range, and the canonical value for the simulations is gamma = -0.8. So what is the basic dynamics? Here it is. There's the overall growth rate, growth minus death: that depends on the selective differences; it would include the niche interaction if we included it, but we mostly ignore that; and then there's the interaction with all the others on the same island. Then there is this piece, the Lagrange multiplier, whose role is to keep the total abundance on each island fixed at N. It acts on each island separately, and at least during transients it will depend on time. And then there is the part from the migration. The crucial feature of the migration is that it goes from all islands to all other islands: the total migration in will be the sum over the same strain on all of the other islands, which is basically the island average in the limit that I is very large. We're going to take entirely deterministic equations with no stochasticity. There is a possibility of local extinctions, which we'll add in later: since the nu's are fractions of the population, if one becomes less than 1/N, that's less than one individual; but it can get repopulated by migration from the others. We can understand the effects of this, but initially we won't include it and will take N to infinity. I showed last time from the simulations that the system goes into a spatiotemporally chaotic phase.
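As a rough illustration (not the lecture's actual numerics), these dynamics can be integrated directly. In this sketch the Lagrange multiplier epsilon_alpha(t) is implemented implicitly by renormalizing each island's frequencies every Euler step; the function name, step sizes, and initial condition are all my own choices:

```python
import numpy as np

def simulate_islands(V, s, m, I, T, dt, rng):
    """Euler sketch of the island model: K strains on I islands,
    within-island Lotka-Volterra-type growth, weak all-to-all
    migration m. Per-island renormalization stands in for the
    Lagrange multiplier epsilon_alpha(t)."""
    K = V.shape[0]
    nu = rng.dirichlet(np.ones(K), size=I)    # nu[alpha, i]; each island sums to 1
    for _ in range(int(T / dt)):
        growth = s[None, :] + nu @ V.T        # per-capita rates on each island
        nu_bar = nu.mean(axis=0)              # island average feeds the migration
        nu = nu + dt * (nu * growth + m * (nu_bar[None, :] - nu))
        nu = np.clip(nu, 0.0, None)
        nu /= nu.sum(axis=1, keepdims=True)   # keep total abundance fixed per island
    return nu
```

With unit-variance V and frequencies of order 1/K, the interaction term is of order 1 over root K, matching the growth-rate scale quoted above.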
Some fraction of all the strains go globally extinct — it turns out to be a small fraction, usually, depending on parameters — but the surviving ones persist and go into a chaotic steady state after some initial transients. The crucial part of that chaos is that it's de-synchronized across the islands. The reason is that if you have chaotic dynamics on two islands, there's a positive Lyapunov exponent, and if the coupling between them from the migration is relatively small, they will tend to de-synchronize; that happens unless the migration is quite large. So we've got this phase, and I'll just show one of the figures from last time. This is one strain across 10 of the islands, plotted on a log scale, because the natural bouncing around is on the log scale — the growth rates and death rates vary. So they're bouncing around; they can bloom up to high abundances — each of them has a bloom. They all bloom up to high abundance, but then mostly go down and hang out down here. Now, what stops them going too low is the input from the migration, the so-called migration floor, which is this curve here; it's bouncing around because it's an average over all the islands, this nu-bar coming in. So that stops them going too low. They can go extinct, and I've put the extinction threshold down here: the condition is that the migration in — the product of m and nu-bar — is big enough. That's the extinction threshold they can go below. And you notice here, actually, the global population goes below that, but some of them survive, and they repopulate this island that went locally extinct there.
Okay, so the crucial part, as far as the qualitative features, is that the fluctuations on each island are on a log scale, and that log scale is set by the size of the migration: the range of these fluctuations is basically log(1/m). So if m is small, they can range widely. Often, many of the strains hang out near the migration floor most of the time, but occasionally bloom up to higher abundances. Now, the blooms to high abundance of course happen on a linear scale, because what feeds into the dynamics is on a linear scale. So these blooms will dominate the average: if I look at this strain and take the average, it's going to be dominated by the bits when it's way up here. At any given time, only a small number of islands will dominate. And the blooms are crucial, because it's when a strain blooms that it sends migrants into the other islands. When it's down here, it's not going to send much migration and doesn't matter much; when it's up here, it's important — and that's of course also when it has the biggest effects on the other strains. So a crucial bit is going to be understanding what these blooms are and how they get there, and they get there in a very irregular way, as you can see from the wiggles here. As the blooms dominate the island average, they also dominate the average over time — the average of nu_i^alpha(t) on one island over time, where I'll use angular brackets to mean time average. And since all the islands are equivalent, the time average on one island should equal the island average. OK, so how do we understand this behavior?
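A toy numerical illustration of this point (not the real dynamics): if log-abundances are spread roughly uniformly over a range of width log(1/m), as the heuristics suggest, the linear average is carried almost entirely by the rare high-abundance moments. All the numbers here are illustrative:

```python
import numpy as np

# Toy model of the bloom-dominated average: draw log-abundances l
# uniformly over a range of width log(1/m), then ask what fraction
# of the linear mean <e^l> is carried by the top 5% of samples.
rng = np.random.default_rng(2)
m = 1e-6
L = np.log(1.0 / m)                       # width of the log-scale range
l = rng.uniform(-L, 0.0, size=100_000)    # "islands/times", uniform on the log scale
nu = np.exp(l)                            # back to the linear scale
top_share = np.sort(nu)[-5000:].sum() / nu.sum()   # contribution of the top 5%
```

With m = 1e-6, the top 5% of samples carry roughly half of the average, which is the sense in which a few blooming islands dominate nu-bar at any given time.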
So the goal is to understand this behavior. The nice thing is that we can do a systematic theory, strictly valid in the limit that the number of strains goes to infinity and the number of islands goes to infinity. Initially we'll also take the population size to infinity, but we can handle that afterwards. This comes under the general approach: make things as simple as possible, then add features. And one of the features we want to be able to add is local extinctions. Even with deterministic dynamics and N infinite, I can still have global extinctions: one strain on all the islands can just keep coming down and die out. So how do we do this? As I mentioned at the end of last time, we focus on one strain on one island — I'll use "strain" and "type" interchangeably. So one strain, strain i, on one island. Since the statistics are independent of the island, I'm just going to drop the alpha index and call it nu_i. The dynamics of this has several parts. First, the obvious things in its growth rate: the s_i, the interaction with itself if we take that into account, and then the Lagrange multiplier which keeps the total population on the island constant. And then it's got the migration coming in from all the other islands and the migration out. But the effects of all the interactions with the others come from these two pieces here — and I've written this piece with a double minus sign because gamma is negative, so the overall sign will be negative. So what are these pieces? The way to understand this is: we add one new type.
I'm going to call it type zero, just to distinguish it from the others. We put it in and ask: what are the effects of the others — all the other strains on the same island — on it? The effects of the others will contribute to its growth rate; this is where the zeta_0 is going to come from. And that's going to be the sum over all the other strains of V_{0j} times nu_j(t). This is going to be approximately Gaussian: it's the sum of a large number of things, and the V's are independent. It's Gaussian with mean zero — the average of zeta is zero — and some covariance, which we're going to call C(t, t'), the average of zeta(t) zeta(t'). Now zeta, because it comes in for type zero, carries a zero index, but its statistics are going to be the same for all of them. We'll have to think carefully, though, about the effects of the particular type. So that's one part: how it's driven by the others. But then there is a really crucial part, and that's the feedback of this strain on the other strains. Where is that feedback going to come from? Imagine that this nu_0 is now growing, so it has some time dependence, and we want to look at the effects of its past on the present. So we've got nu_0 with some time dependence, and I'm going to look at it in the past, at a time I'll call t'.
And what does that do? It gives rise to an extra force on the others: for each other strain j, it gives some delta zeta_j at that time. What is that going to be? It's just V_{j0} — the effect of strain zero — times nu_0(t'). That's the extra zeta. And what does that do? It changes the nu_j of the other strain: it results in a change delta nu_j, and we're interested in what that change is at later times, t > t'. What will its effect be? Roughly speaking, the effect of each one on each of the others is small, because there's a very large number of them. So we can approximate this delta nu_j as the extra force, delta zeta_j, times the response: the derivative of nu_j(t) with respect to zeta_j(t'). This is the response of j to changing the force on it — the zeta it feels. So the others feel this extra force and change accordingly. But what does this do? The change in nu_j then gives a change back — an extra force on nu_0 — but now at this later time. How? That just comes with the V_{0j}, and we sum it up over j.
So it's the feedback coming back on nu_0, summed over all j. That's the extra force back on nu_0. What is that term? It's going to have an average value. Why? Because we've got these two things, V_{0j} and V_{j0}, which are correlated — correlated exactly with this parameter gamma. So in the limit of large numbers, the sum on j gives the parameter gamma from that averaging, times the average of the response of nu_j to zeta_j at t and t' — which is going to be our quantity R(t, t') — times nu_0(t'), which I've already got there. So this whole part comes down and gives us gamma times R. What is R? R then has to be determined by a self-consistency condition. The crucial part is the self-consistency of this approximation: statistically, all the strains are equivalent, so there's nothing special about the one I called zero. So the self-consistency is that the R I put into this equation, and the zeta with its correlations, come back out: the R here is a response, and the C is the correlations, and those have to be determined self-consistently along with the statistics of the zeta.
So I have to find the statistics of each of the nu_j, compute the correlations, and the self-consistency is that C(t, t') is the sum on j of the average of nu_j(t) nu_j(t') — the average over all the noise, that is, over all the effects of the zeta_j's. That's the correlation function. Similarly for the response function, which I've already written: the sum on j of the average of the derivative of nu_j(t) with respect to zeta_j(t'). And then you have to integrate those effects over all previous times. So the effects in here: this is the effect of all the others, a Gaussian random variable with correlations; and this is the response from the feedback of i on the other types and back again on type i. This is of course time-lagged — there can be a general time lag — so it's an integral all the way up to t, and the coefficient, coming from the correlations in the V's between the effect of i on j and of j back on i, is this gamma, which is negative. So this is a negative effect: a feedback that stops nu getting large. This effect really is the kill-the-winner effect: when nu_i gets large, it affects the others, they do well, and they feed back on this one and bring it back down again. That's responsible for the turnaround in the dynamics up here, which stops them getting too big. If I do have the Q as well, the Q also stops them getting too big, but for most of this we're just going to ignore that term; it doesn't matter unless it's particularly large.
It has to be larger than the other effects to matter, and I'd say that corresponds to assuming niches, which we specifically don't want to do. So this is the basic structure of the dynamical mean field theory, and now our task is a seemingly simple one: figure out these two functions self-consistently — we assume something, solve for the nu's, and iterate. Then there's an additional self-consistency with the migration. How do I do that? I assume some nu_i-bar — in general it will depend on t — then I compute the actual nu_i that comes out, and this has to be consistent: the island average, 1/I times the sum over islands of nu_i^alpha, has to equal nu_i-bar. So I assume a nu-bar, get a nu_i, and adjust until they agree. What's going to happen? Some nu's will go extinct: for some strains there is no solution, and those go globally extinct. [Question:] Sorry, Danny, can I ask a question? The nu_i are broadly distributed — you said that at any given time only a few of them will dominate the sum, right? Yes, yes. Well, is there any problem with this approach, where you essentially assume that things are self-averaging? Yes, yes.
So the condition one needs is that the number which are large at any time is big; and that number is basically the total number divided by this factor, the log, because they're roughly uniform on that log scale. So the condition we actually need is not that K be much bigger than one, but that K be much bigger than this log. That's a modestly large parameter, and in practice, with modest K, one can already see this phase. So as in the figure here, in practice the average often is dominated by just a few of them: that's not in the regime where the mean field theory is strictly valid, and associated with that we also have fluctuations because the number of islands is not very large. This figure is from modest numbers; one can do it for larger numbers, and there are ways of trying to get numerical convergence. So that's a good point: strictly speaking, the theory needs many strains large at the same time. However, it turns out they turn around fast enough that even with only a small number large at any time, the behavior is essentially the same. That's one of those things we can put in afterwards and understand. Thank you, that's an important question. There was also a raised hand. [Question:] Could you remind me what you took to infinity? So the things I took to infinity: K to infinity — that gets around the problem Matteo just raised; it means I always have a large number affecting all the others, which is what lets me sum over a large number of roughly independent things. And I also took the number of islands to infinity. The reason I can do that is that I can then treat the island average self-consistently, as something which doesn't depend much on the islands — the islands are roughly independent of each other because of the uncorrelated chaos.
And so I get a well-behaved average there. But again, one can say something about finite numbers of islands. [Question:] And the Lagrange multiplier — at what level is it keeping the— Okay, so I also have to adjust the epsilon. I need to adjust that as well, and I can do it as I go along with the dynamics; it's part of the same thing. So I'd better put that in here too: in finding the nu, I get this and I also get epsilon(t). Sorry for not having said that — I omitted that part. Thank you. The epsilon(t) will be the same on each island after transients, because of the large number of types and the statistics being the same. So now I'm going to simplify. Some strains have gone globally extinct, and then the assumption is that the rest go to a statistical steady state. That means, for example, that the correlation function will just be a function of t minus t', the response function will also just be a function of t minus t', and the epsilon will be approximately constant. So I'll have those simplifications, and each nu_i-bar will still depend on i — on the strain — but it will lose its time dependence. So now I've got a time-translation-invariant problem and I can try to solve it. Now, a side note for people who've seen dynamical mean field theory in the spin-glass or other contexts. Usually, in the situations people treat, you can take these correlation and response functions, go back in, work it out, and directly get a self-consistent equation for the correlation and response functions; once you've done that, you no longer need to do the stochastic dynamics. Here we don't have that behavior: here you have to do the full stochastic dynamics.
You have to assume the statistics of the zeta, do the stochastic dynamics, understand the statistics of the nu, and get its average and its correlations. These quantities both have long tails in time, and you have to work through the self-consistency — that's what's hard. So the real challenge is the applied-math problem of working through this self-consistency; that's done in detail in this PNAS paper. We assume we get to a steady state and then work everything out self-consistently. What I want to do here is just a bit of the heuristics, to give a flavor of it. Since the distributions are broad, the natural thing is to work with the log variables. So I'm going to define l_i as the log of nu_i — l actually goes negative, because nu is bounded by one. Then I can write l_i-dot: this is now just going up and down. It has the part coming from zeta_i(t) minus the epsilon, which is roughly a constant. Then it has the feedback effect: the integral of R — now a function of t minus t' — times the nu, and what is the nu? The nu is e to the l at the earlier time. There you see the exponential weighting in terms of the natural variables, the l's. Then the other part is just minus the migration out. But the important thing is the migration in: it has this average coming in, which is now just a constant, but divided by nu — so it carries an e to the minus l(t). These are all functions of t. So what do these terms do? The feedback term cuts it off at the top end: when l gets large, it cuts it off.
And the migration-in term gives you a floor: you're not likely to go down much below it. This term keeps l usually bigger than the log of m times nu_i-bar. That's exactly the floor in this picture, which is set by the migration, and we have to adjust that floor self-consistently. A not-very-good type will hang out near the floor, occasionally go up and come down. Oh, and let me leave the s_i in here. So what do I want to do? I want to split this into two parts. This combination is going to have some average value in the steady state, but one that depends on i. So I define a quantity xi_i, which I'm going to call the bias, with several parts: it's s_i plus the time average of zeta, minus the epsilon. Then of course there's also the part of zeta which doesn't average out: I can write zeta as its average plus some eta, where the average of eta is zero, and eta has the correlations associated with the remainder. So this bias depends on i. What is it? In connection with things that Stefano Allesina talked about: this is just the invasion eigenvalue from small numbers. Why? Because if I look back at my equation when nu is small and has been small in the past, this is what determines how it invades. So this is exactly the invasion eigenvalue. Now, the not-surprising thing is: if xi_i is strongly negative, they'll go extinct.
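To make this picture concrete, here is a toy single-strain simulation in the log variable. This is a sketch only: the true zeta has the self-consistent covariance C and the kernel R has power-law tails, whereas here I substitute Ornstein-Uhlenbeck noise and an exponential memory kernel, so every parameter and name is illustrative:

```python
import numpy as np

def single_strain_log(xi, m, nu_bar, gamma, T, dt, rng, tau=1.0):
    """dl/dt = xi + eta(t) + gamma*J(t) - m + m*nu_bar*exp(-l):
    bias xi, OU noise eta (toy stand-in for the fluctuating part of
    zeta), exponential-kernel feedback J (toy stand-in for the R
    integral; negative drift for gamma < 0), and the migration floor."""
    n = int(T / dt)
    l = np.empty(n)
    l[0] = np.log(m * nu_bar)              # start at the migration floor
    eta, J = 0.0, 0.0
    for t in range(1, n):
        eta += dt * (-eta / tau) + np.sqrt(2 * dt / tau) * rng.standard_normal()
        J += dt * (np.exp(l[t - 1]) - J) / tau   # ~ integral of R(t-t') e^{l(t')}
        dl = xi + eta + gamma * J - m + m * nu_bar * np.exp(-l[t - 1])
        l[t] = min(l[t - 1] + dt * dl, 0.0)      # nu cannot exceed 1
    return l
```

Trajectories of this toy hang out near log(m nu-bar) and occasionally burst upward — the blooms — before the gamma*J feedback (the kill-the-winner effect) turns them around.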
But the surprising thing is that xi_i can be less than zero — in the range above some critical value, which is negative — and still persist. So a crucial thing here: even strains that are biased downwards on average can persist, and in fact they can be biased down quite strongly. The reason they persist is associated with the form of the migration-in term: even though on average they're being pushed down towards the floor, this term stops them. But in order for that to work, they have to burst upwards. Even though they hang out down near the floor when xi is quite strongly negative, they occasionally burst upwards, and those blooms are crucial: without the blooms, you don't survive, because you need to keep nu_i-bar up. And those blooms are what dominates nu_i-bar. So you have to work out the statistics of those blooms, and that's subtle. The reason it's subtle is that the R and C have long-range correlations in time: they decay with a two-thirds exponent over some range, they have two other different regimes as well, and life gets very complicated. But the crucial thing is to understand the statistics of these blooms, and that's a rare-event calculation: nu-bar is the average of e to the l_i, and that exponentially weighted average is dominated by when the l's are close to zero. So this is the problem one has to solve. I'm not going to go into more of it; this gives you a qualitative picture of the behavior.
It turns out that over most of the parameter range, unless the migration gets large, most of the strains persist; only a few go extinct. You can work out how this depends on the parameters, at least roughly. This is all asymptotics — there's almost nothing you can write down exactly, but you can write down bounds which give you genuinely useful results. The crucial thing is understanding the statistics of these blooms and seeing how they give rise to long-time correlations in the response and correlation functions. So that's the basic behavior. Now I want to add a page and ask about the robustness of this phase. Felix Roy and collaborators have looked at this for gamma equal to zero — independent interactions — and Q positive but not too large (it can be of order root K, but not too large). They find similar behavior. There are some differences in how the strains turn around when they get large: something that comes up is turned around by the Q rather than by the feedback, but the behavior is qualitatively similar. So it seems this actually applies over very large parts of the phase diagram of the basic model. Another part of the robustness question is what happens with finite N, a finite population on each island. If mN — the total migration into each island — is large, then it's still okay: some extra strains go extinct, but most still persist. A few extra go extinct if their nu-bar falls below 1/N, as in the figure. Now, a finite number of islands as well:
Then the survival time of a type goes as e raised to the number of islands divided by some characteristic scale, exponentially long in the number of islands, where that scale depends on the bias of the type, on log m, and on log N. Again, figuring this out takes quite a bit of work, but one can check it numerically: with 10 islands and 100 types, something like 80 of them survive for very long times, and some will survive for exponentially long times as the number of islands gets large. So even though this is asymptotics, in fact asymptotics in log variables, it turns out to be very robust and works over a large range. We can add other features: some extra environmental stochasticity, slight differences from island to island (Felix and company have looked a bit at that), and again the behavior seems to persist. So we really seem to have a robust phase for an assembled community, provided the correlations in the interactions are not too strongly competitive: for γ positive, more competitive interactions, the phase eventually goes away. It will certainly persist for some positive γ, and it may persist in some sense all the way up to γ just below one, but we don't know that yet. We also don't know how persistent it will be if I put in, say, interactions via chemicals in the environment. So there are still a lot of questions associated with that. In the last part I want to talk about whether this community can evolve, but let me pause here if there are questions at this stage. I certainly don't expect you to understand this in detail, but rather the spirit of how one does the calculations and the heuristics. So let's now ask the absolutely crucial question.
We assumed an assembled community: we let things go extinct and settle. But we want to know, is this phase stable to evolution? And can evolution give rise to it? So how do I evolve it? Choose some strain i. We're going to look first at slow evolution, which is the hardest case as far as things persisting, adding one mutant at a time. Type i mutates to some type ĩ. The mutant has a given parent, and its S and V values are correlated with the parent's with strength ρ, so √(1 − ρ²) is essentially the magnitude of the difference between parent and mutant. One has to set up these correlations in the right way to keep the overall statistics intact. For this part we have some analytic understanding, but it's mostly numerical at this stage. So here is what happens. I start with some number of strains; some fraction go extinct rapidly on ecological timescales. Then we let it equilibrate and add one mutant. We let it equilibrate again, which may drive some new extinctions. Sometimes the mutant drives its parent out, but much of the time the parent and the mutant coexist, being only slightly different from each other. Then I pick another random parent and do the same thing again. If you start with a small number of types, the community tends to go extinct; if you start with an intermediate number, the diversity fluctuates around for a while and then starts going up; if you start with larger numbers, it just goes up.
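Setting up the parent-mutant correlations "in the right way" can be made concrete. Below is one possible construction (the function name and details are mine, a sketch rather than the procedure actually used): the mutant's new row and column have correlation ρ with the parent's while preserving mean zero, unit variance, and the cross-diagonal correlation γ.

```python
import numpy as np

def add_mutant(V, s, parent, rho, gamma, rng):
    """Append a mutant of strain `parent` to interaction matrix V and bias
    vector s.  New entries = rho * parent's entries + sqrt(1 - rho^2) * noise,
    with the fresh noise built so that corr(V[k, j], V[j, k]) = gamma holds."""
    K = V.shape[0]
    a = rng.normal(size=K)
    b = rng.normal(size=K)
    row_noise = a                                       # noise for V[new, j]
    col_noise = gamma * a + np.sqrt(1 - gamma**2) * b   # corr gamma with row_noise
    V2 = np.zeros((K + 1, K + 1))
    V2[:K, :K] = V
    V2[K, :K] = rho * V[parent, :] + np.sqrt(1 - rho**2) * row_noise
    V2[:K, K] = rho * V[:, parent] + np.sqrt(1 - rho**2) * col_noise
    # The direct parent-mutant pair inherits V[parent, parent] = 0, so that one
    # entry has reduced variance: a simplification in this sketch.
    s2 = np.append(s, rho * s[parent] + np.sqrt(1 - rho**2) * rng.normal())
    return V2, s2
```

One can check that the appended row has correlation ≈ ρ with the parent's row, and that the cross-diagonal correlation of the new entries stays ≈ γ.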
So this suggests there is roughly a threshold in the complexity that you need before diversification takes off. Where that threshold sits depends on the migration rate and on many more details of the model; we don't understand it quantitatively, and we want to understand qualitatively what's behind it. Below the threshold the number of types plummets: starting with small numbers, it is exponentially rare, with a large factor in the exponent, to get up and get going. So one won't see this behavior unless one gets the community going in some way, and one could do that in various ways we haven't explored in detail. There is also a very subtle issue, which is what Aditya and company are working on, as to how the invasions actually occur. How does a new type come in? That turns out to be rather hard, and if the mutant is too close to its parent, it's even harder. We haven't explored the details of that yet; that's what we're working on now. So what happens, is this persistent? Here we look on longer timescales. I should say that often the mutant fails and doesn't invade at all. Sometimes it does invade, and when it does, it can replace the parent. What happens then depends on the correlation. With independent interactions, this is just assembling the community further. With very highly correlated ones, tiny differences between parent and offspring, you don't really see much happening: mostly the mutants replace their parents. But as soon as you decrease the correlations by a tiny bit, the diversity starts going systematically upwards.
So you get systematic growth in the diversity of the community. Typically in this regime, for each type you add you lose about half a type, so the diversity goes up at a steady rate; actually for these parameters it goes up even faster than that. The community gets richer and richer as it goes along. However, here we assumed there were no generalist mutations. What does that mean? In general you could do better by having all of your V's larger, or all the V's against you less negative, but the bigger the community, the less likely that is to happen. Concretely, no generalist mutations means the S_i's are all zero; an S_i bigger than your parent's would mean you do better than your parent across the board. So what happens if we do put in generalist mutations? We do the same thing, but with the variance of the S's much less than one; the assumption is that things are initially already pretty well adapted. What happens then is that you can still push out into the tail. Look at the distribution of the S_i's: initially it is some Gaussian. Even very early on, most of the types far down in the lower tail don't survive; they go extinct. As I evolve and add types, the distribution of survivors concentrates more and more towards the upper tail, and with successive invasions it keeps creeping further out.
One can see this in the statistics: if you look at the mean of the S_i's of the types that come in, the generalist mutations, it goes gradually upwards, pushing further and further out into the tail. The process also gets slower and slower, because once your S is far out in the tail, most of the time your mutant will have a smaller S than yours. So in this regime, most mutants, in fact even most successful mutants, have an S smaller than their parent's. Nevertheless, if a mutant has good V's, good interactions, it can invade; it is just more likely to invade if its S is larger, because then it has a higher average growth rate, a higher bias. So in this case the community continues to diversify, it just gets slower and slower; it is harder and harder to invade. We have an analytic understanding of this, and at least for Gaussian tails the diversity should keep growing indefinitely, just gradually more slowly. So even allowing generalist mutations, this phase can still exist; it evolves more slowly, and invasion gets harder and harder. That is of course a general property: things evolving in constant conditions tend to make it harder for new things to come in. Interestingly, whether or not you specifically put these generalist mutations in doesn't really matter much: the diversity keeps going up and the statistics don't change substantially. The reason, roughly, is that there are so many ways to do well that it doesn't really pay to do better overall; you do better against whichever types are currently present. So it keeps going up at a steady rate and a substantial fraction of all invaders can come in, whereas out in the tail most invaders fail.
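The "slower and slower" effect is just a property of correlated Gaussians and is easy to check directly. A small Monte Carlo sketch (the value of ρ and the sample size are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
rho = 0.95                        # parent-mutant correlation of the biases s
noise = np.sqrt(1 - rho**2)

def p_mutant_better(s_parent, n=200_000):
    """Estimate P(mutant's s exceeds its parent's s) when
    s_mutant = rho * s_parent + sqrt(1 - rho^2) * eta, with eta ~ N(0, 1)."""
    s_mut = rho * s_parent + noise * rng.normal(size=n)
    return (s_mut > s_parent).mean()

# The further the parent sits out in the tail, the rarer an improving mutation:
for sp in (0.0, 1.0, 2.0, 3.0):
    print(sp, p_mutant_better(sp))
```

At s_parent = 0 the chance of improvement is one half; it drops steadily as the parent moves into the tail (to about 0.32 by s_parent = 3 for this ρ), which is the slowdown described above.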
So most mutants fail, but you get some, and the diversity continues to grow, though only logarithmically in the logarithm of the entire time. Could I ask a question? Yes. On some of these figures the x-axis is successful invasions and on some it's time; is time the same as attempted invasions? Yeah, thank you. The time here is proportional to attempted invasions, and we haven't put in all the subtleties of the invasion process that you've thought about; we allow mutants to come in at substantial numbers to make it easier to run things in a reasonable time. So this is the number of attempted invasions; sorry, I should have said that. And we're not allowing reinvasion: if something goes extinct, it stays extinct. Other questions here? So the last thing I want to talk about, and again there's just a tiny bit of a flavor of it, is the question of where the interactions come from. Everything we've done so far defines the phenotype of type i by its interactions: the whole set of V_ij with all of the others, and similarly the S_i. That is a very weird thing to do. We shouldn't be defining the phenotype by the interactions; we should define the interactions by properties of the organism. So we want to look at phenotype models, where the interactions are determined by properties of the organisms. For this, I'm going to go back explicitly to the bacteria-phage model. I now have a bunch of bacterial strains indexed by i, phage strains indexed by l, and the populations of the bacteria and the phage. The bacterial dynamics have a growth rate, killing by the phages (the H's are all positive), and uniform competition with the other bacteria, with no niche-like interactions. The phages will die without food.
They then grow by feeding on the bacteria, and the specificity, to the extent that there is any, is contained in these F's. These will have some average value: this is one species of phage and one species of bacteria, just strains of each, so they are not specialists. They could evolve to be specialists, but we don't start them off that way. So what will the correlations look like? There will be some average value of the F's and the H's, with strains only slightly different from each other, small variations ΔF and ΔH, and those will be strongly correlated. Now, where do we think these come from? This is where I put in the phenotype: a D-dimensional phenotype, where, crudely, the index a labels the amino acids in a receptor. The bacterium has a receptor, the phage has a tail, the tail binds to the receptor, and how well it binds determines the interaction. We're doing the absolutely simplest thing here: there is only one phenotypic property of each, the bacterial receptor and the phage tail, each a D-dimensional string of numbers. The binding strength, which I'm defining so that these are energies, is just the binding of the tail of phage l to the receptor of bacterium i. Then I assume the interactions arise from that binding through some functions: one for how the phage harms the bacterium, one for how the phage feeds on the bacterium. The simplest thing would be to assume those functions are linear.
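The bacteria-phage dynamics just described can be written down in a few lines. This is a hedged sketch: the equations follow the verbal description (bacterial growth, killing by phage via positive H's, uniform competition; phage death without food, growth via the F's), but all symbols, parameter values, and the forward-Euler discretization are illustrative choices of mine.

```python
import numpy as np

rng = np.random.default_rng(3)
nb, nl = 20, 20              # bacterial strains, phage strains (illustrative)
r, c, d = 1.0, 1.0, 0.1      # bacterial growth, uniform competition, phage death
H = rng.random((nb, nl))     # killing of bacterium i by phage l (all positive)
F = H * (0.5 + 0.2 * rng.random((nb, nl)))  # phage gain, strongly correlated with H

B = np.full(nb, 1.0 / nb)    # bacterial abundances
P = np.full(nl, 0.1 / nl)    # phage abundances
dt = 0.005
for _ in range(10_000):
    dB = B * (r - H @ P - c * B.sum())   # growth - phage killing - competition
    dP = P * (F.T @ B - d)               # feeding on bacteria - death without food
    B = np.clip(B + dt * dB, 0.0, None)  # forward Euler with a floor at zero
    P = np.clip(P + dt * dP, 0.0, None)
```

Here F and H are drawn directly; the phenotype-model step described next instead computes both from an underlying binding strength.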
If the functions are linear, then the interaction matrix ends up being low rank, rank D, and that limits the diversity; you can't get much diversity. But what if I put in something somewhat more biologically motivated? For the effect of the phage on the bacterium: if they don't bind much, not much happens; if they bind strongly, something happens; and binding even more strongly saturates, because once the bacterium is being killed, more binding doesn't matter. For the phage I want to put in a somewhat different function. It has similar behavior at weak binding, and of course it only benefits the phage when the bacterium is affected, so it starts coming up in the same place, but one can imagine it keeps rising: binding more strongly might keep helping the phage even once the bacterium is already doomed, because the phage can get in more effectively and perhaps produce more offspring. So the two functions are different from each other, but they both depend on the same thing, the G's, and that forces correlations between them. But now the correlations come just from the phenotype, and I can study the consequences. The only thing we know so far, roughly, and this is just numerical, is that if D is bigger than about six, so very low-dimensional phenotypes, then for at least some functions H(G) and F(G) the diversity continues to grow. You get behavior similar to what I showed before; actually it looks more like the case that bounces around quite a lot.
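The low-rank point is easy to verify numerically. In this sketch the binding strengths are inner products of D-dimensional receptor and tail phenotypes; the dimensions and the tanh saturation are my own illustrative choices, not the lecture's actual H(G) and F(G). A linear function of G leaves the interaction matrix at rank D, while a saturating nonlinearity lifts that constraint.

```python
import numpy as np

rng = np.random.default_rng(2)
D, n_bact, n_phage = 4, 50, 50
receptors = rng.normal(size=(n_bact, D))   # bacterial receptor phenotypes
tails = rng.normal(size=(n_phage, D))      # phage tail phenotypes
G = receptors @ tails.T                    # binding strengths ("energies")

linear_F = 0.3 * G                         # linear F(G): interaction matrix rank D
saturating_F = np.tanh(G)                  # one saturating choice of F(G)

print(np.linalg.matrix_rank(linear_F))     # 4, i.e. D: diversity-limiting
print(np.linalg.matrix_rank(saturating_F)) # well above D
```

So the diversity-limiting low rank is a special feature of the linear case; almost any nonlinearity in the function of G removes it, even though the phenotype itself stays D-dimensional.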
Sometimes it has plunges, but it keeps on growing. So this leads to a very interesting conjecture: even with a low-dimensional phenotype, D being some modest number, and deterministic interactions determined just by those phenotypes, it looks as if you can get a continuously diversifying version of this spatiotemporal chaotic phase. It really is this phase. And in fact the types do tend to do somewhat better as generalists; the phage particularly tend to get pushed towards the upper end, but nevertheless they don't become so generalist that it limits the diversity, and they don't become particularly specialist either; one can look at the specialist-generalist correlations. So what have I done? I hope I've gotten a few things across. The first is the value of trying to look at really simple models to get an idea of what can happen: if something can happen in a simple model, then I would say it's not so surprising that we might see it in nature. It doesn't mean we understand it, but it means we're not so surprised. On the first day I looked at evolution in a sort of seascape, where the environment keeps changing; I've already summarized that. The part that ties much more into this school generally is the last two days, where I've looked at these random Lotka-Volterra models, with the randomness motivated as coming from strains that are very closely related, so the interactions were small random differences on top of common overall effects, not much different overall. The S_i's were small, there were no niche interactions, and I did not assume anything special about the interactions of a strain with its siblings being any stronger on average than its interactions with its 23rd cousins. That was the basic model.
In those models we now have a solid analysis and a very good theoretical understanding of the spatiotemporal chaotic phase that can exist when the correlations are in the anti-symmetric direction, and it seems to persist more generally, with a bit of niche interactions, or interactions via chemicals in the environment, via resources the strains are trying to consume, which would give rise to such correlations. That is the part which is solid, and one can think about what it predicts for nature. One thing I forgot to say when talking about robustness: we would really like to add, or put in as a question mark, real spatial structure, so that things can't move all over the place. This is particularly relevant in the ocean, where things get moved around by turbulence; certainly if one wants to make contact with reality, one has to think about it. The other part is the question of whether the community can evolve. It is certainly possible for these models to evolve higher and higher diversity; under what circumstances that tends to get slower and slower, we don't know. The specific assumption that everything interacts with everything is responsible for some of that slowing down, and if one goes away from that towards more hierarchical interactions, as I believe Josh Weitz in particular talked about, then maybe the diversity can increase more easily. The very last bit, which is even more speculative, in connection with these phenotype models, is that you do not need a high-dimensional phenotype to get diversity. In the sense I talked about before of what matters, the relevant dimension is that of the phenotype determining how the types interact with each other, and a low-dimensional one is sufficient, at least in principle, to give rise to increasing diversity.
This may be somewhat hard to find: if you choose the wrong functions, perhaps ones even more reasonable than the ones I used, it is going to be harder to get, and I think there is a lot still to be understood here. So there's a huge number of open questions and a lot of interesting directions; some of those we're trying to pursue, and more is still needed on understanding the things I have talked about. So I'll stop there; apologies for going on too long and too fast on much of it, but I hope I at least got some of the flavor across. Thank you. Thank you, thank you very much. Let's see whether there are any further questions from the audience. You mentioned changing H(G) and F(G); does that increase the dimension needed to get diversity? Well, it does increase the dimension, and with at least some H(G) and F(G) that are reasonable forms, it's not clear you actually get the diversification at all. What you see then looks more like this: you start with some number of types, the number hangs out for a while, maybe gets a bit more diverse, and then it crashes and it's hard to get it back; even starting with bigger numbers, you get similar crashes. By the way, I should mention that if you take the perfectly anti-symmetric model on one island, you get the same thing: no matter how big a number you start with, the overall tendency is to decrease. That model is not stable to evolution, even the perfectly anti-symmetric one; it tends to go down. We don't have a general understanding of in what cases a model will go up and in what cases it will go down. That's like taking a microscopic model and asking whether it forms a superconductor or an insulator or something else; we still have no idea how to do that in physics.
But what we do know is that if something happens, then a whole bunch of other things happen as well. So here, once diversification starts going up, we have some understanding of whether it will continue; with the generalist mutations we have some understanding that once it starts going up, it will tend to slow down in a particular way, and we can predict how it does that. With the phenotype models I don't have much understanding yet. The subtle thing is that the two functions can't be perfectly correlated; they can't be equal to each other, they have to be somewhat different, so you need sufficient correlation but not too much. I think if you put in a little bit of extra phenotype, so it isn't just one quantity but maybe two, say two proteins that are important, then maybe you can get it more easily. So this is where I'm going to appeal to biology. Everything that we see is conditioned on evolution, and looking back, conditioned on evolutionary success over long times, things are going to look special; they will reflect the special things that happened. So my feeling is that if one has a choice of models, or models in different regimes, one of which can give rise to continuing evolution and diversification and the other of which can't, then the long-term effects of evolution, which often select for whatever happened to keep the evolution going, mean one is going to end up in a phase where things wander around. The concrete thing I would say is that in the random-landscape models with just a single strain, I believe there is a generic family of models where you don't have continuing evolution with small ecological feedback. However, such a system is also much less responsive to environmental changes.
It is much more likely to be destroyed by environmental changes. One that tends to wander around will wander around differently in different locations, and it is much more likely to be robust to environmental changes. So my sense is that long-term evolution will drive systems towards the ones that have these kinds of properties. That does not mean there are evolutionary pressures to do that. What it means is that the ones which happened to be successful for very long times, producing lots of offspring in the sense of many types of bacteria or many types of insects, will be the ones that somehow acquired these properties along the way; it doesn't mean there were necessarily evolutionary pressures for it. But I'm getting into the philosophical questions. Are there some concrete questions on the analysis or the ways of trying to do things? Okay, if you have follow-up questions, and a couple of you sent some really good follow-up questions previously, I'm happy to answer them by email, and they may also prompt some things that come up in the discussions at the round table. Thank you. Thank you very much. It's been a long day, and thank you everybody for following the lectures today; we'll meet again next Monday. Have a good week. Bye everyone. Bye.