Welcome everybody again to this third day of the school. Today we're going to have three speakers, and the first one is Matteo Marsili, who is a professor and researcher here at ICTP, and who will give a tutorial on stochastic processes. So I'll leave the floor to Matteo. Before we start, let me remind you of a couple of rules. If you have questions, you can either post them in the chat or raise your hand with the Zoom feature. And if you're following on YouTube, you can ask questions by posting them in the YouTube chat, and I'll keep track of that. So please, Matteo, the floor is yours. Okay, welcome everybody. This is a tutorial, and the idea is to give you an idea of how you introduce stochastic effects into the theory and simulation of individual-based models. So we're going from the ordinary differential equations that you have seen in many of the talks to models that also include stochastic effects. Here is just an example of some data from an epidemic. I don't remember exactly which epidemic this was, but as you can see, the dynamics of the number of infected in time is not at all a smooth function, as you would get from the integration of differential equations for populations. Which means that you have to include other terms. One, of course, is seasonality — a term that depends on time explicitly — but the other is what I shall generically call noise. The idea of this lecture is to discuss what type of noise, and how this noise comes about. In particular, I'm going to discuss demographic noise. So if you take a simple model like an SIS model, these are the two differential equations that you have, and if you integrate these two equations, you get just a continuous curve. But if you are interested in a model of a real population, then the number of infected individuals has to be an integer.
So if you look closely at this curve, it should be like a step function, and these steps correspond to microscopic events where a particular individual in the population gets infected: a susceptible individual bumps into an infected individual and, as a result, he or she becomes infected. This is what we want to describe. So what we want to do is to modify the equations of this model in order to take into account this discreteness and these stochastic events. Let's start from the very simplest case: just one individual, who may get infected with a rate which is constant in time. That is, r dt is the probability that this individual gets infected in the time interval from t to t + dt, and these events are independent. You can represent this on a line, with the events marked by star symbols, and if you plot the number of infected as a function of time, then each time there is an event, the number increases by one. So how can one simulate this? If you want to write a computer code that generates this function i(t), one idea is to fix a small dt. For the interval [t, t + dt], you draw a uniform random number; if this number is less than r dt, you increase i by one, otherwise you don't. Then you advance time to t + dt and consider the next interval. Now, how should you choose this dt? If you choose dt too large, more than one event may occur within dt, so dt should be very, very small. But if dt is very, very small, then your simulation is going to be very slow.
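The fixed-dt scheme just described can be sketched in a few lines of Python (function and parameter names are mine, not from the lecture; the scheme is only valid when r*dt is much smaller than one):

```python
import random

def simulate_fixed_dt(r, t_max, dt):
    """Naive simulation: in each interval of length dt an event occurs
    with probability r*dt (valid only for r*dt << 1)."""
    t, i = 0.0, 0
    while t < t_max:
        if random.random() < r * dt:
            i += 1  # one new infection event in this interval
        t += dt
    return i

random.seed(0)
# Average event count over many runs should be close to r * t_max = 20
counts = [simulate_fixed_dt(r=2.0, t_max=10.0, dt=1e-3) for _ in range(200)]
mean = sum(counts) / len(counts)
```

Note the trade-off the lecture points out: with dt = 1e-3 this loop draws 10,000 random numbers per run just to produce about 20 events.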
Okay, so the question is: can we find a smarter way to simulate this process? The idea is to ask: given that I have an event at time t, how much time should I wait for the next event? This requires a little bit of math. Let p0(s) be the probability that no event occurs in the interval between t and t + s; because of time-translation invariance, this probability depends only on s, not on t — it is the same for every interval. This probability can be written as the probability that the waiting time is larger than s, i.e. the integral from s to infinity of the PDF of the waiting time. The next observation is that for two consecutive intervals of lengths s1 and s2, the probability that you don't get infected in an interval of length s1 + s2 must be the probability that you don't get infected in the first interval times the probability that you don't get infected in the second: p0(s1 + s2) = p0(s1) p0(s2). This must hold whatever the sizes of the intervals, and this requirement implies that p0(s), which is the cumulative probability of the waiting-time distribution, must be an exponential: e^(-r s). The PDF of the waiting time is then given by minus the derivative of this, which is r e^(-r s). Now that we have this distribution, it is very easy to draw a random variable from it: t_W = -log(RAND)/r, where RAND is a uniform random variable in (0, 1). So to simulate the process, you just draw a waiting time t_W with this PDF, increase i(t) by one, advance time by t_W, and repeat many times.
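The event-driven scheme — jump straight to the next event using the inverse-transform formula above — can be sketched as follows (names are mine):

```python
import math
import random

def draw_waiting_time(r):
    """Inverse-transform sampling: if U is uniform on (0,1),
    then -log(U)/r is exponentially distributed with rate r.
    (1 - random() avoids log(0), since random() lives in [0,1).)"""
    return -math.log(1.0 - random.random()) / r

def simulate_event_driven(r, t_max):
    """Event-driven simulation: advance time directly to the next event."""
    t, i = 0.0, 0
    while True:
        t += draw_waiting_time(r)
        if t > t_max:
            return i
        i += 1

random.seed(1)
# The event count in [0, t_max] is Poisson with mean r * t_max = 20
counts = [simulate_event_driven(r=2.0, t_max=10.0) for _ in range(500)]
mean = sum(counts) / len(counts)
```

Unlike the fixed-dt version, this draws roughly one random number per event rather than one per time step, and it is exact.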
This is a more efficient way of simulating the process because it requires you to draw fewer random numbers, and it is also exact, not approximate. You can also run an efficient simulation with a larger, fixed dt if you observe that the probability of getting k events in an interval of length τ is given by the Poisson distribution with mean rτ. In that case you keep τ fixed, draw the number of new infected from the Poisson distribution, and then advance time by τ. All the mathematics I've been describing is essentially what is called the Poisson process: a process describing events that occur with no memory, because in every interval an event can occur independently of whatever happened before. Now let's go back to our problem, which is that of simulating a population model like the SIS model. Things are a little more complicated because now we have two populations — the susceptible individuals and the infected individuals; there are two compartments in our model. So for every individual there is a variable x_i, which is zero if he is susceptible and one if he is infected. And the model is one where, in each time interval [t, t + dt], each susceptible can become infected with a certain rate, and each infected can become susceptible again. The important point, however, is that the rate at which a susceptible individual becomes infected depends on how many infected individuals there are. It also depends on the state of the individual: in order to get infected, an individual has to be susceptible, and in order to become susceptible, an individual has to be infected. So these rates generally depend on the variables x_i.
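Since the Python standard library has no Poisson sampler, the fixed-τ variant needs one; a standard choice is Knuth's algorithm (fine for moderate means, slow for very large ones). This is a sketch with my own naming:

```python
import math
import random

def poisson_sample(lam):
    """Knuth's method: count how many exponential waiting times fit
    before the running product of uniforms drops below exp(-lam)."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

random.seed(0)
# Number of events of a rate-r Poisson process in an interval of length tau:
r, tau = 2.0, 2.0
samples = [poisson_sample(r * tau) for _ in range(2000)]
mean = sum(samples) / len(samples)  # should be close to r * tau = 4
```

As the lecture stresses later, this shortcut is only legitimate while the rate stays constant over the interval τ.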
This makes the simulation a little more complicated. But you can see this as a situation where you have N Poisson processes which interact with one another. So let us see how we can deal with it. Imagine that we are at time t, and each individual, represented by a different line here, is in a particular state x_i. At each time, each of these states can change. Now we can use the idea we developed before: we can draw a waiting time from the exponential distribution and ask when the state of individual i will change. Then you can ask which individual will change next: this will be the one for which the waiting time for the next event is smallest. So you can simulate this process by drawing a waiting time for each individual, finding the individual i* with the minimal waiting time, changing the state of i*, and advancing time. This is exact. You need to draw N random variables, the waiting times for each of the individuals — although you can argue that when you change the state of individual i*, you don't need to redraw the waiting times for the other people unless their state has changed, because of the memoryless property of the process. Can we do this simulation even better? The idea is that we can figure out the time we have to wait for the next event to occur, regardless of which variable will change state. Imagine we are at time t, where this individual — the violet line — changed state.
At this point we are interested in which event will take place next. We don't need to keep the distinction between the different individuals; we can deal with this as a single, combined Poisson process, and just find the time we will have to wait for the next event in this combined process. Then, once we find this time, we can ask: what is the probability that the individual that changes state is a particular individual i*? The way to do this is simple. The probability that the minimal waiting time is larger than t is the probability that the waiting times for the events on all individuals are all larger than t. Because these are independent events, this probability factorizes, and since each factor is an exponential, you can compute directly the probability that the next event occurs later than t: it is e^(-R t), where the capital R is just the sum of all the individual rates. So you can draw an exponential random variable with rate R using the same simple formula as before. The second step can also be done easily: if you ask what is the probability that the next individual to change state is individual i, this is the probability that his or her waiting time is smaller than all the other waiting times. You can compute this as the expectation, over his or her waiting time, of the probability that all the other waiting times are larger. When you do this integral, you get a very simple answer: the rate r_i of process i divided by the capital R, the cumulative rate for the whole population.
Okay, so you see that you can run the simulation very efficiently in this way, by drawing the time at which the next event will occur and then drawing which individual will change state from this probability distribution. This way of simulating processes like this is called the Gillespie algorithm. If you go back to the SIS model, we are interested in two variables: the number of infected individuals, I, which is just the sum over i of the random variables x_i, and the number of susceptibles, which is just the total population N minus I. The rates are: for every susceptible individual (x_i = 0), the infection rate is β times the probability that a person I meet at random is infected, which is just I/N; and for every infected individual (x_i = 1), the recovery rate is just μ. So the prescription I gave you before says that if you want to simulate the SIS model at an individual level, you draw the waiting time for the next event from an exponential distribution — by drawing a uniform random number, taking the log, and dividing by R — where R is given by this simple formula and depends on I. Then you draw i* from the probabilities I gave you before. This corresponds to advancing time by t_W, and to setting the number of infected individuals at time t + t_W equal to the number at time t plus one with probability βI(N − I)/(NR) — the infection events — and minus one with probability μI/R — the recovery events. (There is a typo on the slide: the infection event should carry a plus sign.)
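Putting the two steps together, a minimal Gillespie simulation of the SIS model might look like this (a sketch under the rates just stated; names and parameter values are mine):

```python
import math
import random

def gillespie_sis(N, i0, beta, mu, t_max):
    """Gillespie simulation of the SIS model.
    Total infection rate: beta * i * (N - i) / N.
    Total recovery rate:  mu * i."""
    t, i = 0.0, i0
    ts, infected = [t], [i]
    while t < t_max and i > 0:
        rate_inf = beta * i * (N - i) / N
        rate_rec = mu * i
        R = rate_inf + rate_rec
        # Exponential waiting time for the next event, rate R
        t += -math.log(1.0 - random.random()) / R
        # Pick the event with probability proportional to its rate
        if random.random() < rate_inf / R:
            i += 1  # infection
        else:
            i -= 1  # recovery
        ts.append(t)
        infected.append(i)
    return ts, infected

random.seed(2)
ts, infected = gillespie_sis(N=1000, i0=50, beta=2.0, mu=1.0, t_max=20.0)
```

With these parameters the deterministic endemic fraction is 1 − μ/β = 0.5, so the trajectory should fluctuate around roughly 500 infected.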
If you write a program — and it is very easy to write this program — you can compare the solution of the ordinary differential equation, which is the black line, with the curve you get from the stochastic simulation, the green one. As you see, you get stochastic fluctuations. A couple of comments, if you don't have questions. One comment: why can't I use a finite dt here, using the fact that for a Poisson process I know how many events occur in a finite interval, because it is given by the Poisson distribution I showed you? The problem is that when an event occurs, it changes the rates of the other events that can occur. So one cannot use a finite dt in this case. This way of simulating can also be described mathematically by what is called the master equation. The master equation is an equation for the probability of having i infected individuals at time t. The idea is that if you want to understand how this probability changes in time, you have to consider the events that contribute to it. The probability increases through events where either you have i − 1 infected individuals and someone gets infected, so you go from i − 1 to i, or you have i + 1 infected individuals and one of them recovers. Otherwise, you have i infected individuals and either one of them recovers or some susceptible gets infected, so you leave state i, going to i − 1 or i + 1. Notice that these last two terms, describing the transitions from i to i + 1 and from i to i − 1, are proportional to the probability of having i individuals at time t, and they carry a minus sign because they decrease the probability.
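In formulas — writing b(i) for the total infection rate and d(i) for the total recovery rate consistent with the rates stated above (the exact slide notation may differ) — the master equation just described in words reads:

```latex
\frac{\mathrm{d}P(i,t)}{\mathrm{d}t}
  = b(i-1)\,P(i-1,t) + d(i+1)\,P(i+1,t) - \bigl[b(i)+d(i)\bigr]\,P(i,t),
\qquad
b(i) = \beta\,\frac{i\,(N-i)}{N}, \quad d(i) = \mu\, i .
```

The first two terms are the gain terms (from i − 1 via infection, from i + 1 via recovery), and the bracketed term is the loss term, with the minus sign mentioned above.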
Now, of course, in a real situation like the one I showed you at the beginning, there are other stochastic effects; in particular, the rates themselves depend on time, and you should also take this into account. However, typically this does not change much the way we do our simulation, simply because the typical time over which one event occurs is of order one over the size of the total population, and we don't expect the rates to change much over such a short time interval. So the Gillespie algorithm can also be used in the case where there is seasonality. There is an interesting observation that was made by McKane and co-workers some time ago, and it has to do with the fact that you can get seasonality even without seasons. If you look at the deterministic evolution of these systems, you find a stationary state with a stable fixed point. Instead, if you look at the solution of the master equation for a finite population, you find large stochastic oscillations. This is an effect which is entirely due to the stochastic nature of the system, and it has also been applied to epidemics, as you can see in this plot, which is taken from this paper, in which Mercedes was also involved. I would pause here, because then I'm going to change subject a little, so I think it's a good moment to see if there are questions from the audience. If you have questions, please use the raise-hand feature or type them in the chat. Yes, please unmute yourself and ask. Hi, I'm just trying to understand your slides — the previous slide, when you show the stochastic part. Yeah, the previous one. Yeah, this one.
Yeah, this one — meaning that you can see the discrepancy between the prediction of the deterministic model and the stochastic model, because the stochastic one predicts something like a periodic solution, a cycle, while the deterministic part gives you a stable state? Right. Exactly. And the reason is the demographic stochasticity: because of this demographic stochasticity, you can have these oscillations. This is a property that depends crucially on the Jacobian when you compute it at the stationary state of the model. If you want to know more, I think it would be ideal to go to these two references, where the problem is discussed in detail. Okay. Hello. Yes. Is it okay? There is a question in the chat from Jordi, who asks: is the period of the oscillations in the stochastic process the same as the period of the deterministic damping? In this case I don't recall exactly, so I would not like to make statements that are not exact. But the two are essentially related: if you look at these plots, the period of the damped deterministic dynamics is pretty much synchronized with the period of the stochastic oscillations, so in this sense the two things are clearly related. If you look at the fixed-point condition and at the Jacobian, the periodicity of the deterministic dynamics will be related to the periodicity which is explored by the stochastic fluctuations in a finite system. Another question, by Sir Mass, who asks: in the method of adding noise to the master equation, can we add noises with different distributions or different frequencies, like pink or white noise?
How can we change this kind of parameter in the algorithm? Okay, so population noise is a discrete noise, and it is given by the description that I gave you here. If you want to add another type of noise, which is exogenous noise — like a stochastic forcing with a particular distribution — that can of course be done, but it is probably more relevant to the next part of the lecture, so we can discuss it better there. Oh, sorry — Savina, if you want to ask a question, please go ahead. Can I ask a question? I understand mathematically that there is an equation, as you said before, with which we can find the process that occurs next. But what does it mean ecologically? How can we translate that into what is happening — can the model predict how the distribution is going to evolve in the future? I'm sorry if I don't explain myself well. So here — say the master equation — this is a model where you make a very simple assumption about the dynamics of what is going on. You say that this is a well-mixed population of infected and susceptible individuals; they bump into each other, and you have this process. For this model, under these conditions, you can compute the probability distribution: if you are at time t, you can compute what the probability distribution will be at a later time. Now, the question of how well this model describes a real ecology is something you will learn across this whole winter school — I think the whole school is about that. Okay. Thank you. The tutorial is just to give you a sense of the mathematics inside these models, and of the arguments that are at the basis of these models. Okay. Yeah, thank you. If there is time.
Yes, there is another question from the same participant, so we can go ahead with that. Okay, thank you. I have another silly question for you, about that picture where you get this divergence between the deterministic and stochastic predictions: it seems like the deterministic curve should be a stable-spiral kind of solution. Is it always the case — can you get this whenever you have a stable fixed point? No, you don't always get it. For example, for the simple SIS model that I described, you don't get this dynamics, you don't get these oscillations; there are specific conditions on the rates that make this phenomenon possible. And again, if you are interested, I really recommend you go through these very nice papers that discuss this property in detail. Okay, thank you. Okay, let's go ahead. So now what I would like to discuss is probably a more direct way of introducing stochastic effects into population dynamics: adding a term directly to the deterministic equations. If you have an equation for the number of infected — again for the SIS model — this would be the deterministic part, and what you can ask is: what is the term that I should add to this model in order to account for population noise? This is a subject that goes under the name of stochastic differential equations. The two-line summary is this: the effect of noise over a finite but small time window is the accumulation of many, many infinitesimally small events. So the noise that you should consider can be seen as the sum of many, many random variables, and it is therefore described by the central limit theorem.
Because of this, the type of noise that you should add to the stochastic differential equation cannot be just anything; it has a very specific mathematical form, which is related to what is called the Wiener process. The main thing you should remember about this Wiener process, which I call W here, is that its increment over a small time interval dt is of order √dt. This has very important consequences that you need to take into account when you want to simulate the process. Let me go through this in some detail. As I told you, what we want to do is to take a differential equation and add noise to it. In this case I'm using the variable y, which is the fraction of infected individuals in the population. In order to give a meaning to this noise, we should say how we integrate this stochastic differential equation — how we get the value of the random variable y at time t from its value at time t0. The first part is rather easy, because it is just a normal integral. But what is the integral of the noise? In order to describe it, you should really think of it as what you would get if you discretized the time interval between t0 and t into small intervals of size dt. The number of intervals is then of order (t − t0)/dt, and the integral is the sum of the contributions of the noise over these small intervals of size dt. So now let's think about what should be the properties of the random variable ξ, which is the integral of the noise over a small time interval of size dt.
First of all, if we think of these as stochastic effects, then it is natural or reasonable to assume that they are independent and identically distributed, because noise is expected to represent something about which we know nothing. If this is so, then the sum of these small effects is going to obey the central limit theorem, and we know that this sum is going to be n times the expected value of ξ, plus √n times the standard deviation of ξ times a Gaussian variable Z, where the PDF of Z is a standard Gaussian. In our case, I remind you, the number n of intervals — the number of variables that we are summing — is inversely proportional to dt, and the expected value of these increments, because this is noise, is going to be zero. So if you look at this term and you want a finite limit when dt goes to zero and n goes to infinity, then the variance of the random variable ξ divided by dt should be finite, because otherwise you don't get a finite limit. This is essentially what defines the Wiener process: each ξ, which is the integral of the noise over a small time interval dt and which I call dW, should be proportional to √dt times a Gaussian variable Z. Then, if I integrate this noise over a time from t0 to t, I get a sum which is of order √(t − t0) times a Gaussian variable Z. So this is the first lesson: dW should be proportional to √dt. This tells you that when you program and simulate this Wiener process, when you advance time your increments should always scale as √dt.
Notice that this defines a curve, which is essentially the Wiener process, and which is a well-defined limit when dt goes to zero. You can see this graphically: I ran the program with two values of dt, 10^-3 and 10^-4, from time zero to one, and this is the function W(t) that I obtained. You cannot tell which is which — actually, I don't even remember which one was integrated with dt = 10^-3 and which with dt = 10^-4. The Wiener process — this stochastic random curve — is a very interesting mathematical object, and it has these properties: you can show that it is a continuous path, almost surely; it has independent increments — if you look at the increments of the Wiener process over two non-overlapping time intervals, these are independent; and, most importantly, the thing you have to remember is that the differential of the Wiener process is proportional to √dt, and because of this, the path is nowhere differentiable. So now let's go back to stochastic differential equations. What we have understood is that if we have a deterministic equation, dy = a(y) dt, then the noise that we should add is proportional to the differential of this Wiener process: a term b(y) dW, where b can be any function that may also depend on y. Now, if I want to define what the solution of this equation is, I should tell you how you integrate it.
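The two-dt experiment described above is easy to reproduce; a minimal sketch (my own naming) uses increments of exactly √dt times a standard Gaussian:

```python
import math
import random

def wiener_path(t_max, dt, seed):
    """Discretized Wiener process: each increment dW is
    sqrt(dt) times a standard Gaussian variable."""
    random.seed(seed)
    n = int(t_max / dt)
    w, path = 0.0, [0.0]
    for _ in range(n):
        w += math.sqrt(dt) * random.gauss(0.0, 1.0)
        path.append(w)
    return path

# Two discretizations, as in the plot described in the lecture:
# statistically the paths look alike, and Var[W(t)] = t regardless of dt.
coarse = wiener_path(t_max=1.0, dt=1e-3, seed=3)
fine = wiener_path(t_max=1.0, dt=1e-4, seed=4)
```

Because the variance of W(1) is 1 whatever dt you use, paths generated at different resolutions are statistically indistinguishable, which is exactly why the speaker cannot tell the two curves apart.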
In order to do this, you have two parts. One is just a usual integral — a basic integral that you compute with the usual rules. The other one is a stochastic integral, an integral that also involves dW. You might think: I will compute this integral in the same way — discretize into many, many small intervals, pick a point τ_i inside each interval, evaluate the function at these points, multiply by dW, and sum — and that's it. However, if you think a little more closely, what you find out is that the value of this integral depends on how you choose the midpoint τ_i. In particular, if you look at the integral of W dW between t0 and t, you can define it as the limit, as n goes to infinity, of this discretized sum; and if you take τ_i to be α times the endpoint of the interval plus (1 − α) times the beginning, the result actually depends on how you choose the midpoint — it depends on α. So in order to give a precise meaning to these integrals, you have to specify how you choose this α — how you compute the integrals, how you integrate the differential equations. This is called a prescription, and the most natural prescription is to take α equal to zero, which corresponds to what you do when you do forward integration of differential equations. The consequence is that the rules this mathematics obeys for integrals and differentials are a little bit strange, because, for example, if you compute this integral here, you get one part which is what you would get if W were a normal function — just this part here.
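For reference, the standard result for this integral (not spelled out explicitly in the transcript) makes the α-dependence concrete:

```latex
\int_{t_0}^{t} W\,\mathrm{d}W
  = \lim_{n\to\infty}\sum_{i=0}^{n-1} W(\tau_i)\,\bigl[W(t_{i+1})-W(t_i)\bigr],
\qquad \tau_i = \alpha\,t_{i+1} + (1-\alpha)\,t_i,
```

and the limit is

```latex
\int_{t_0}^{t} W\,\mathrm{d}W
  = \frac{W(t)^2 - W(t_0)^2}{2} + \Bigl(\alpha - \tfrac{1}{2}\Bigr)(t - t_0),
```

so the Itô choice α = 0 gives the extra −(t − t0)/2 term discussed next, while α = 1/2 (Stratonovich, mentioned at the end of the lecture) reproduces the ordinary calculus result.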
But you also get a new term, which comes from this prescription and from the fact that W is a stochastic function — more precisely, from the fact that dW is proportional to √dt. When you compute the differential of a function of W, this differential is f(W + dW) − f(W), and you have to expand f(W + dW) to second order, not only first order, because dW squared is equal to dt; then you also get this extra term. This has consequences for how you deal with stochastic differential equations and how you change variables: when you change variables, you always have to take into account that dW squared equals dt, and so an additional deterministic term comes out in your differential equation as a consequence of the stochastic term. You can do many exercises, but let me go back to the SIS model. For the SIS model you can derive a stochastic differential equation which has this deterministic part plus a stochastic part, and the form of the stochastic part can be derived by looking at what the variance of the added noise should be. The variance of the noise for each interval has this form and depends on dt, so here you should put the square root of the variance of this term, which is given by this object here. Then you can run a simulation by just integrating this stochastic differential equation forward, and again what you get is a path that approximates the deterministic solution but has this population noise added to it. Now, how can you relate the two things I've been telling you about — the population noise of the Poisson process and master equation, and the stochastic differential equation?
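A forward (Itô) integration of such an SIS stochastic differential equation can be sketched as below. The noise amplitude used here — the square root of (β y (1 − y) + μ y)/N — is my reading of the demographic-noise variance described above, and the function and parameter names are mine:

```python
import math
import random

def euler_maruyama_sis(N, y0, beta, mu, t_max, dt, seed):
    """Euler-Maruyama (forward/Ito) integration of
    dy = [beta*y*(1-y) - mu*y] dt + sqrt((beta*y*(1-y) + mu*y)/N) dW."""
    random.seed(seed)
    y, t = y0, 0.0
    path = [(t, y)]
    while t < t_max:
        drift = beta * y * (1.0 - y) - mu * y
        var = (beta * y * (1.0 - y) + mu * y) / N
        # Note the sqrt(dt) scaling of the noise increment
        y += drift * dt + math.sqrt(max(var, 0.0) * dt) * random.gauss(0.0, 1.0)
        y = min(max(y, 0.0), 1.0)  # keep the fraction of infected in [0, 1]
        t += dt
        path.append((t, y))
    return path

path = euler_maruyama_sis(N=1000, y0=0.1, beta=2.0, mu=1.0,
                          t_max=20.0, dt=1e-3, seed=6)
```

As in the lecture's plot, the trajectory tracks the deterministic solution (here settling near y = 1 − μ/β = 0.5) with fluctuations of order 1/√N around it.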
There is one way of doing this which is rather natural, and it is called the van Kampen system-size expansion. I don't think we have time to go through it. But essentially, what I want to tell you is that these are not two different stochastic processes: it is the same stochastic process discussed in two different ways. So here are a couple of references which are rather accessible with a minimal mathematical education, and where you can find all the things I have been telling you. Thank you very much. Yes, thank you very much for the very nice tutorial. I think we have time for a few questions. There was one in the chat that I can start with: how does the stochastic prescription affect simulation? Does it even affect simulations? It does. So, just to give you some intuition, let me go back to this point here and take this example. You see, when you take the expected value of this integral here: if tau_i is taken at the beginning of the interval, then dW and W are independent, so if you take the expected value of this, it is going to be zero. Is this clear? If instead you take a midpoint which is not the beginning of the interval, that is, you don't take tau_i equal to t_{i-1}, which is alpha equal to zero, then these two variables, W and dW, are not independent, and so when you take the expected value, this is not going to be zero. So this means that when you integrate these stochastic differential equations... you know, when you have a normal differential equation, you can choose any midpoint you want.
You can have, say, implicit methods, where essentially you estimate your function x of t at the end of the interval, so this becomes an implicit equation, et cetera. For normal differential equations this makes no difference; it only changes the accuracy with which you integrate the equation. But if you have a stochastic differential equation, you should be very careful about how you choose the midpoint, how you integrate the equation. The idea of the Itô prescription is that the noise at time t is independent of x at time t; it is an effect independent of what has developed so far. Is this clear? Yes, that part is clear, but my question was: how does this prescription show up in simulation? It shows up in this way. When you have a stochastic differential equation like this one, if you take alpha equal to zero and integrate it, you get a contribution like this one. If instead you take, for example, alpha equal to one half, that corresponds to a different prescription, which is called Stratonovich. With that prescription you don't have this second term here, and so the rules you apply to stochastic differential equations are the same as the rules that apply to ordinary differential equations. But then the results are different: also if you do numerical simulations, the results are, well, not completely different, but they are different. So this is definitely something you should care about: you should always be specific about which prescription you are using when you are dealing with a stochastic differential equation. And I tell you this because, even in papers that you can read, this is not always specified.
But if you want your results to be reproducible, you should say what prescription you are using in a stochastic differential equation. Great, so there are a couple more questions in the chat; I think we have five more minutes before the break. Next question, from Zoret: can we study bifurcations in a stochastic model? Yes, of course. If you think of a discrete map, say a logistic map, you can add noise in different ways. But if you look at something like the Lorenz system, where you have coupled differential equations, you can also add stochastic noise to that. And I think there are many people who have studied these problems. Great. There is another question about the noise term, by Gianluca, who asks: are there models for which the stochastic differential equation involves a noise term that depends on the state of the system at that time, say dW depends on X at time t? Yes. Typically you do this by saying that dW is always independent, at least in the Itô prescription: dW is always the differential of the Wiener process. What depends on X at time t is the amplitude of the noise, which is b of X at time t. So this is the way in which you can build in this dependence of the noise term on the state of the system. Great. Is there any other question? I have a question. Yes, please. You mentioned that if a paper does not say what alpha they used, it's not reproducible. But then I was wondering if I can reverse engineer it: given a solution, or some characteristics of the solution, can I find what alpha, or maybe a probability distribution over alphas, they used? Yes. In many cases you can infer what prescription has been used, also because different communities typically use different prescriptions.
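Gianluca's question, state dependence entering through the noise amplitude b(X) while dW stays an independent Wiener increment, can be sketched as follows. This is a generic illustration under the Itô (forward) prescription; the function names and parameters are my own, not from the lecture.

```python
import numpy as np

def euler_maruyama(a, b, x0, T, dt, seed=0):
    """Ito forward (Euler-Maruyama) integration of dX = a(X) dt + b(X) dW.
    The Wiener increment dW is drawn independently of the current state x[k];
    the state dependence of the noise enters only through the amplitude b(x[k])."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        dW = rng.normal(0.0, np.sqrt(dt))   # independent Wiener increment
        x[k + 1] = x[k] + a(x[k]) * dt + b(x[k]) * dW
    return x

# Example with multiplicative noise b(X) = sigma * X (geometric Brownian motion)
path = euler_maruyama(a=lambda x: 0.05 * x, b=lambda x: 0.2 * x,
                      x0=1.0, T=1.0, dt=1e-3)
```

With b constant this reduces to additive noise; any state dependence you like goes into b.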
So in theoretical physics many people use the Stratonovich prescription, whereas, for example, in epidemiology the natural prescription is more the Itô prescription. But yes, it should be possible to figure out what prescription was used. Still, when you write a paper you should ensure that your results are reproducible, so you should state the prescription, essentially; it is on the authors to specify this. Great. So I think it's time to take a short break before the next tutorial. Thanks a lot, Matteo, again, for this very nice introduction. All of this is available on YouTube, so you can go back and rewatch this tutorial as many times as you want. Thank you very much. Now we're going to take a small break; we'll be divided into breakout rooms as usual and we'll be back in four minutes. Take this opportunity to chat with others, stretch your legs, get a coffee, et cetera. We will start in about one minute. In the meanwhile, if you're watching the school on YouTube, I remind you that if you want to ask questions, you can use the YouTube chat and I'll read your question to the speaker. We are waiting for people to come back from the breakout rooms, and then we can start with the next tutorial. Okay, great. I think everybody is back in the main room. So it's my pleasure to introduce Zach Miller, who is a PhD student in Ecology and Evolution at the University of Chicago. It's very early for him, about seven a.m., so I want to thank him for being with us so early. He will give a tutorial on linear algebra. So please, Zach, if you want to share the presentation and unmute. Thanks. Okay, well, thank you for the introduction and the invitation. So the aim of this hour-long tutorial is to give a very brief survey of linear algebra.
It's a field that has a lot of terminology and foundational baggage, so the point is that we're not going to be able to go through too many proofs or computations today, but hopefully we'll cover the important concepts well enough to use these ideas in a biological context. I'm going to start with the real foundations and work our way up to some more complex calculations and ideas. The beginning might be review for many people, but bear with me and hopefully we'll get to interesting stuff for everyone. And linear algebra, I'll make my little pitch here, is definitely the kind of material that shows up everywhere, and in so many different guises, that it's really worth seeing repeatedly and thinking about in different ways; so hopefully this will also be an opportunity to get a new perspective if it's content you've seen before. Any introduction to linear algebra starts with the idea of a vector. For our purposes, a vector is an ordered list of numbers, and I've shown a few here. A note on notation: vectors are often denoted either with lowercase boldface letters or with lowercase letters with a little arrow on top. I'm going to stick to boldface letters, but I'm showing both here because you might see them written in different ways. Our vectors will have real or complex numbers as their components; these constituent numbers, the entries in the vector, are what we call the components of the vector. For the most part we'll be talking about vectors with real components, but sometimes complex ones. So, some vectors graphically: here already we're seeing that there are two ways to look at them.
We can think about them as these ordered lists, or we can think about them as a kind of arrow in space with a direction and a magnitude. Let me go back one second. This u here corresponds to this vector (3, 2): you can see that it's three in the first coordinate and two units up in the second coordinate. The second vector, v, is (-1, 1). So this is the geometric picture that we've probably all seen before. Then we can do operations with our vectors: vectors of the same size can be added component-wise. For example, if we want to take the vectors u and v and form their sum, we just add the components: three plus negative one becomes two, two plus one becomes three. So u plus v becomes a new vector, (2, 3), which also lives in the same geometric space. We think about this as adding the vectors tip to tail, as people like to say: we take the first vector, u, and at its tip we place the tail of the second vector, and we get the sum u plus v. This is vector addition. And because it basically comes from the addition of the scalar components, the real or complex values in here, it follows all the normal rules of the addition of those numbers: namely, it's commutative and associative, so we can reverse the order of the sum here and get the same vector out. Vectors can also be multiplied by scalars. We can take just a regular old real number and multiply it by the vector u, and what that does is multiply each component of u by that scalar. Here u goes from (3, 2) to (3/2, 1). In the picture, you can see that we're basically scaling its length: when we multiply by a scalar, we're stretching the vector but not changing its direction in space.
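These two operations are easy to try out with numpy (a small sketch using the example vectors u = (3, 2) and v = (-1, 1) from the slide):

```python
import numpy as np

u = np.array([3.0, 2.0])
v = np.array([-1.0, 1.0])

print(u + v)        # component-wise sum: [2. 3.]
print(v + u)        # addition is commutative: same result
print(0.5 * u)      # scalar multiplication scales each component: [1.5 1. ]
```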
The only exception, when I say not changing the direction, is that we can flip the vector by multiplying by a negative number: the vector then remains on the same line in space. This is just the case of negative one half times u. Combining these two operations, component-wise addition of vectors of the same size and multiplication by scalars, gives us a slightly more general notion: a linear combination, and this is a really foundational concept in linear algebra. Over here on the right I'm showing that if we take any two vectors u and v from some set of vectors that we're going to call a vector space, and some scalars c1 and c2, then we can write, very generally, the linear combination c1 times u plus c2 times v, and you can see here the component-wise definition of this. The outcome will always be a new vector w that is also in our vector space, so we can say that the vector space is closed under linear combinations: it's the set of all vectors that we can get by making linear combinations of the vectors we start with. So let's talk a little more about linear combinations. Again, linear combinations generate new vectors from old. Given a collection of vectors, the set of all other vectors that I can express as linear combinations of them is called the span. This is another important idea that we're going to rely on a lot throughout this tutorial. Just to get a feel for using this term: here we say that w is in the span of u and v. If I start with these two vectors u and v, I can produce a new vector w by taking a linear combination, a sort of weighted sum, of u and v. What that looks like is: again, I start with my u and v.
Now I multiply them both by scalars, so I stretch v a little and shrink u a little, and then I take these new scaled vectors and sum them, and I can produce w. Because w is expressible as a linear combination of u and v, it is said to be in the span of u and v. Here is another example of a vector in the span of u and v, just to make clear that we can use negative numbers to make these vectors point in the other direction: this t is again just produced by scaling and then summing u and v. We can also, somewhat strangely, use span as a verb as well as a noun, so we sometimes say things like "u and v span the entire plane." I'll skip to the punch line here: I've shown you w and t, but really we can produce any vector we want that can be drawn in this plane by taking combinations of u and v. And this leads us to another really important idea, which is linear independence. If we take a set of vectors S, and none of the vectors in that set can be written as a linear combination of the others, then we say that S is linearly independent. If that's not the case, then S is linearly dependent: there is a vector, or there might be many vectors, in S that can be written as a linear combination of other vectors in S. A few more examples from this picture. If we consider the set of just u and v, this set of two vectors is linearly independent. That's because if we take just v on its own, the only linear combinations of v by itself are rescalings: all we can do is shrink or stretch v, so we can never produce u that way, and vice versa. The set of u and w is linearly independent for the same reason.
But the set u, v, w is linearly dependent, and that's because, as we just saw, we can produce w as a linear combination of v and u. We're going to see this idea of linear independence again and again. Linear independence leads right away to another concept: the concept of a basis. A basis is a linearly independent set of vectors that spans a vector space. So we specify some vector space, for example the plane here, and if we have a set of vectors that spans it, meaning that through their linear combinations they can generate any other vector in the plane, and they're linearly independent, then they form a basis for that space. The idea of a basis is the idea of a minimally sufficient set of vectors: the spanning part tells us that these vectors are enough to generate the entire space under consideration, and the linear independence part tells us that none of them are redundant. If we add more vectors and the set becomes linearly dependent, that means we have more vectors than we need to span that space. As some examples: the set {u, v} is a basis for the plane, for R2; I use this fancy R to denote the real numbers and the 2 to say basically the plane, the set of pairs of real numbers. The set {u, w} is also a basis for R2. The set {u, v, w} is not a basis for R2, and the reason is that while it does span R2, so we can use it to generate any vector in R2, it is a linearly dependent set: it's not minimal, it has redundant vectors in it. The set {u} alone is clearly not a basis for R2, because of course we can only use u to generate vectors that lie on the line in the direction of u, so we can't generate every vector in the plane. The idea of a basis is actually one that is, I think, in many ways quite familiar.
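One systematic way to check these statements, not shown in the talk but a standard trick, is the rank of the matrix whose columns are the vectors: the set is linearly independent exactly when the rank equals the number of vectors. Using the same u, v, and w = u + v:

```python
import numpy as np

u, v, w = np.array([3.0, 2.0]), np.array([-1.0, 1.0]), np.array([2.0, 3.0])

# A set of vectors is linearly independent iff the matrix with those
# vectors as columns has rank equal to the number of vectors.
print(np.linalg.matrix_rank(np.column_stack([u, v])))     # 2 -> {u, v} independent, a basis for R^2
print(np.linalg.matrix_rank(np.column_stack([u, v, w])))  # 2 < 3 -> {u, v, w} dependent
```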
Even if you haven't looked at linear algebra in this formal way before, any time we work in a coordinate system, we are usually thinking about a basis called the standard basis. Even the axes that we often draw for a 2D or 3D space can be thought of as basis vectors: this is the standard basis, and it's often convenient to work in reference to it. We often use e_i to denote the standard basis vectors: e1 is the vector with a one in the first component and zeros everywhere else, e2 is the vector with a one in the second component and zeros everywhere else, and so on, for whatever space we might want to look at. In this picture, these are just vectors of length one in the direction of the usual coordinate axes. And when we do Euclidean plotting, we think about ordered pairs of numbers like (a, b), and we can think of these just as linear combinations of the standard basis vectors: a scalar a times e1 plus a scalar b times e2, which in component form looks like (a, 0) plus (0, b). All this is to say that these are concepts we actually work with all the time. So far we've relied a lot on the 2D picture. The real beauty of linear algebra, and the primary reason it was developed, is to have a common notation and a common set of tools
for dealing with very high-dimensional spaces, spaces of arbitrary dimension. In physics we might think about the space of physical coordinates, like 3D space; in biology we might think of our vectors as being, say, the abundances of different species in a community. If we go to a diverse community, we might have hundreds or thousands of species, so we might be dealing with vectors representing the state of the community that have hundreds or thousands of components; or we might think about the concentrations of molecules in a cell. The nice thing about linear algebra is that the tools we're going to develop can accommodate anything from one to two to three to ten to a hundred to a thousand components at a time. So while it's useful to draw the 2D pictures, and I'm really not much good at drawing anything beyond the 2D picture, it's useful to start thinking right away about dimensions beyond two. Let's take a moment and think about the three-dimensional picture. We have the vectors s and t. Do these form a basis for the space R3, for three-dimensional space? It turns out no. As a kind of counterexample, here is a vector: you can try as hard as you'd like, but you can never write this vector as a linear combination of the other two. We'll see as we go along how to address this question more systematically. But this is to say that in the space R3, two basis vectors are not going to cut it to span that space. We shouldn't say that s and t are not a basis in general: they're not a basis for R3, but they are a basis for some space.
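The specific s and t from the slide aren't recoverable from the recording, so the sketch below uses made-up stand-ins; the point, that two vectors give rank at most 2 and therefore leave some of R3 unreachable, is general.

```python
import numpy as np

# Hypothetical 3-component vectors standing in for s and t from the slide
s = np.array([1.0, 0.0, 2.0])
t = np.array([0.0, 1.0, -1.0])

A = np.column_stack([s, t])        # 3x2 matrix whose columns are s and t
print(np.linalg.matrix_rank(A))    # 2: spans at most a plane, never all of R^3

# A vector outside their span: least squares leaves a nonzero residual
b = np.array([0.0, 0.0, 1.0])
coef, residual, *_ = np.linalg.lstsq(A, b, rcond=None)
print(residual)                    # nonzero -> b is not in span{s, t}
```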
They are a basis for what we can call a subspace of R3. Geometrically, they form a basis for a plane, a two-dimensional space that we can think of as embedded in R3: it goes through the origin of the three-dimensional space, but it is a lower-dimensional space within it. That's a perfectly good vector space in its own right: it's closed under linear combinations, so if I take linear combinations of s and t, I'm going to produce new vectors that are in that subspace. But these two vectors don't span the entire R3 space. I keep slipping and using this word, because it's really hard to avoid, but this leads us to the important concept of dimension. We're used to thinking about dimension heuristically, and linear algebra gives us a very nice and precise way to think about it. If we have a vector space defined by a basis, well, there are in general many choices of basis that define a given vector space, but given a basis we've specified a vector space, and the size of that basis gives the dimension of the vector space. The way we've defined a basis, with the requirement that it both span the space and be linearly independent, these two conflicting requirements, one that the set of vectors is sufficient to span the space and the other that it is minimal, means that the dimension is a unique number: any basis for the same vector space will have the same number of elements, and so the same dimension. Okay, so I've shown you that we can take two vectors and combine them to produce a new vector as a linear combination. Now let's try to go backwards: if I have some arbitrary vector and I have some basis,
how do I represent that vector as a linear combination of the basis vectors? How do I write it in the space defined by that basis? While this seems like an intellectual exercise, it's equivalent to solving a system of linear equations, which is really one of the most important tasks we have in science: linear systems of equations show up just about everywhere. They are historically extremely important and really motivated the development of linear algebra. So let's quickly see that these two exercises are really the same. Here the graphical picture is that we have some v and u, and we have some w, and I want to find the two scalar coefficients c1 and c2 such that I can write w as a linear combination of u and v. Now let's look at something that at first glance is totally different. Here's a system of linear equations; this kind of problem arises all over the place. We have three unknowns, x, y, and z, we have coefficients for each of those unknowns, and on the right-hand side of the equations we have scalars. What we can see, if we write the system down in this nicely formatted way, is that these coefficients 2, 4, and 7 all get multiplied by x, these coefficients 1, 4, and negative 2 all get multiplied by y, and these coefficients negative 1, 2, and negative 3 all get multiplied by z. This suggests that the ratios 2, 4, 7 are fixed: by changing the value of x, we can only change these terms in lockstep with each other. And so we can actually rewrite this equation with this (2, 4, 7) as a vector, and x as a scalar coefficient out front.
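Jumping slightly ahead to matrix notation, the coefficients x, y, z can be found numerically; the numbers below are as read from the transcript of the slide, so treat them as illustrative:

```python
import numpy as np

# Columns are the coefficient vectors read off the slide:
# (2, 4, 7), (1, 4, -2), (-1, 2, -3); right-hand side (-1, 4, 3).
A = np.array([[2.0, 1.0, -1.0],
              [4.0, 4.0, 2.0],
              [7.0, -2.0, -3.0]])
b = np.array([-1.0, 4.0, 3.0])

xyz = np.linalg.solve(A, b)        # the coefficients x, y, z
print(np.allclose(A @ xyz, b))     # True: the combination reproduces the target vector
```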
So we can recognize this system of linear equations as exactly the sort of linear combination problem we were talking about: here we have the vectors (2, 4, 7), (1, 4, -2), and (-1, 2, -3), and we want to find a linear combination of them that produces this target vector (-1, 4, 3); that is, we want to know the coefficients x, y, and z. To do that we need to introduce the idea of a matrix and develop a little bit of machinery, but first, if anybody has any questions at this point, before we keep going, feel free to raise your hand. Yes, I think it's a good moment for questions, if anyone wants to ask one, either in the chat or by raising their hand; we can take a couple of minutes. No? I think it's a sign that everything is extremely clear. All right, great, let's keep moving. So now we're looking at matrices. A matrix is just a rectangular array of numbers: before, we were looking at vectors, which were just a list of numbers; now we're looking at a structured array of numbers. The notation we're going to use consistently is that a matrix is denoted by a capital letter, like A, B, or C. You can see here that matrices can have different sizes, and that's the first important thing to talk about: the size of a matrix is given by the number of its rows, these things running across, and the number of its columns, these stacks of numbers; the usual format for the size of a matrix is rows by columns. The matrix A down here, for example, is two by two.
This one in the middle is three by three, and this one on the right is three by two; the numbers of rows and columns don't necessarily need to agree with each other. When we talk about matrices, we usually talk about elements; why we say elements for matrices and components for vectors, don't ask me. The elements of a matrix are the numbers that make it up. In general, we can talk about an n by m matrix, a matrix with n rows and m columns, and its entries are denoted by a_ij, with i telling us the row, a number between one and n, and j telling us the column, a number between one and m. For example, this entry here is in the first row and second column of B, so we would call it b_12; this 10.5 here is in the second row and second column of C, so we would call it c_22. As with vectors, we can operate on matrices: we can add matrices of the same size element-wise, much like the addition of vectors. We have these two matrices; we take two plus negative one and get one in the one-one position; in the one-two position we add three and negative three and get zero, and so on. Matrices can also be multiplied by scalars: as before, multiplying by a scalar means that every element of the matrix is multiplied by that scalar, so this negative two just goes inside the matrix to each entry, and you can follow the multiplication there. Again, these lectures are going to be available later, so if I fly through any of these examples, you can always go back and take another look. And note that we again have the notion of a linear combination here: linear combinations of matrices.
Linear combinations of matrices produce new matrices, and what that tells us is that matrices really form their own vector space: we can take any two matrices of the same size, form linear combinations, and get a new matrix of the same size out. Matrices come with two new and slightly more interesting operations: transposition and multiplication. The transpose operation, which we write with this little T here, sometimes confusingly because it looks like a power, but this T is really a special operation, takes an n by m matrix and basically flips it on its axis. The outcome of the transposition is an m by n matrix, where the ij entry of the new matrix is the ji entry of the original matrix. Here's an example: if we take this matrix A and take its transpose, we just flip it, and what was the two-one entry, the a_21, becomes the one-two entry; that's how this 2.2 goes from here to there. Fairly straightforward. Matrix multiplication is worth talking about for a minute longer. We can multiply matrices, but the multiplication operation is rather different from how we extended addition: whereas addition of matrices works element-wise, multiplication of matrices is a new thing altogether. If we take two matrices, A, which is n by m, and B, which is p by q, we can multiply them only if the number of columns of A, this m, is equal to the number of rows of B, this p; we'll see why in a second. In the definition of matrix multiplication, each entry of AB, say the ij entry down here, is defined by taking a sum of many products over the entries of A and B: to get the ij-th element of AB, we take a_i1 times b_1j, plus a_i2 times b_2j, and so on. Here it is in compact summation form.
We sum over the columns of A and the rows of B: that's this k, the index of summation, for whatever i and j we're interested in. So here's that in action, very briefly. To get the product of these two matrices, let's first think about the first element of the product, the one-one element, which here is a four. We get that by multiplying the first row of the first matrix by the first column of the second matrix: two times negative one plus three times two, which is negative two plus six, which is four. We can do this for any of the elements of the resulting matrix; I'll just go through one more. This five down here, which is in the second row and first column, we get by multiplying the second row of the first factor matrix by the first column of the second factor matrix: negative one times negative one plus two times two, which gives us five. It's worth spending a little time practicing this if it's not something you're familiar with; I'll leave the formula up here for a moment longer. Okay, so matrix multiplication works a little differently than normal multiplication. One thing that's different right away is the multiplicative identity, the element such that if we take B times the multiplicative identity, we get B back. In normal multiplication that role is played by the number one: if we take ten times one, we get ten back. In matrix multiplication, that identity element is what's called the identity matrix: a special n by n matrix that has ones on the diagonal and zeros everywhere else. It's worth trying out for yourself: if you multiply any matrix by this I_n, you get the original matrix back.
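The two worked entries can be checked with numpy. Only B's first column, (-1, 2), is recoverable from the transcript; the second column below is a made-up placeholder.

```python
import numpy as np

A = np.array([[2, 3],
              [-1, 2]])
# First column (-1, 2) as in the lecture example; second column is hypothetical
B = np.array([[-1, 0],
              [2, 1]])

P = A @ B
print(P[0, 0])   # 2*(-1) + 3*2 = 4: first row of A times first column of B
print(P[1, 0])   # (-1)*(-1) + 2*2 = 5: second row of A times first column of B
```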
It's worth noting that this identity matrix is a very special example of a more general type of matrix called a diagonal matrix: a matrix that only has non-zero elements on its diagonal, on entries like a_ii (a_11, a_22, and so on), where i and j equal each other. We'll see some more diagonal matrices soon. Another important fact about matrix multiplication is that it's associative: if we have A times B times C, we can multiply A and B first, or we can multiply B and C first; it doesn't matter. And it's distributive: if we have A times the sum of two matrices, we can distribute that multiplication across the sum. But it is generally not commutative: A times B is not equal to B times A. You might have already seen that this kind of has to be true from the way we define matrix multiplication, since it depends on the interior dimensions, this m and p, being the same. If we take B times A instead, we have to worry about whether q and n are the same. That's just to say that A times B will not generally be the same as B times A, even when the sizes do work out. Okay. So with matrix multiplication in hand, we can go back to our vectors for a moment, because it gives us some interesting concepts when we look at vectors. First, the transpose operation turns our usual vectors (I've been writing vectors as columns, and that's conventionally how we think of them, almost like matrices with n rows and one column) into row vectors, which are like one by n matrices. And if we take two vectors and apply the transpose to one of them, we can actually multiply them using matrix multiplication.
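These three algebraic properties can be demonstrated on random matrices; a quick NumPy sketch (associativity and distributivity hold up to floating-point error, and commutativity fails even though all the sizes match):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
C = rng.standard_normal((3, 3))

# Associative: (AB)C == A(BC), up to floating-point rounding.
print(np.allclose((A @ B) @ C, A @ (B @ C)))    # True
# Distributive: A(B + C) == AB + AC.
print(np.allclose(A @ (B + C), A @ B + A @ C))  # True
# But generally NOT commutative, even for square matrices.
print(np.allclose(A @ B, B @ A))                # False
```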
For example, if we take a vector v and a vector u that have the same number of components, and we transpose the v, then we have a one by n row vector times an n by one column vector, essentially a one by n matrix times an n by one matrix, so we can just multiply them using the usual matrix multiplication rule. This is often called the dot product, or it's an example of an inner product, and as you can see down here we just end up with v_1 times u_1 plus v_2 times u_2 and so on. This is a really important operation that will come up all the time. The interesting connection worth seeing is that the inner product (or dot product) of a vector with itself, v transpose v, is equal to the Euclidean length, or norm, of the vector squared. If we take v transpose v, we have v_1 squared plus v_2 squared plus v_3 squared, and so on; and if we take the square root of this, we get the usual Euclidean length of the vector. And if we take the inner product of two different vectors, the product is actually a function of their lengths and their angle, and we can write it explicitly as such. v transpose u is equal to u transpose v, so this operation is symmetric, and it will be equal to the length of u times the length of v times the cosine of their angle: the square root of the inner product u transpose u, times the square root of the inner product v transpose v, times the cosine of the angle. In this picture here, the inner product that comes out is a function of the length of u, the length of v, and this angle formed by the two vectors in space, which holds even in the higher-dimensional picture. And importantly, the inner product v transpose u can be equal to zero.
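The norm-squared identity and the cosine formula can both be verified directly; here is an illustrative NumPy sketch with two hand-picked vectors:

```python
import numpy as np

v = np.array([3.0, 4.0])
u = np.array([4.0, 3.0])

# v^T v is the squared Euclidean length: 3^2 + 4^2 = 25.
print(v @ v)                  # 25.0
print(np.linalg.norm(v))      # 5.0, the square root of v^T v

# v^T u = |v| |u| cos(theta): recover the cosine of the angle between them.
cos_theta = (v @ u) / (np.linalg.norm(v) * np.linalg.norm(u))
print(cos_theta)              # 24 / 25 = 0.96

# The inner product is symmetric: v^T u == u^T v.
print(v @ u == u @ v)         # True
```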
For vectors of non-zero length, this happens exactly when the cosine term is equal to zero. Okay, and what that means is that the angle between the two vectors is a 90-degree angle, a right angle. This idea of orthogonality generalizes the notion of being perpendicular, or at right angles, that you've probably seen in a geometry class. So that's just a little aside. Now, back to matrix multiplication. It's often useful to view matrices as bundles of column vectors. Let's think about multiplying a matrix by a column vector: we have a matrix of some size, and we multiply it by a column vector, an m by one matrix essentially. We can rewrite this matrix multiplication as v_1 times the column vector formed from the first column of A, plus v_2 times the column vector formed from the second column of A, and so on. So really, taking this product A times v is giving us a linear combination of the columns of A. Okay, and this brings it all back to the question we had maybe 20 minutes ago now. We talked about representing a vector in a basis, or equivalently solving a general linear system. These two things are both equivalent to the matrix equation Av = b: we have a matrix A, the columns of which are our basis vectors; we have v, a vector of unknown coefficients that we'd like to figure out; and we have b, an arbitrary target vector. If this were a normal equation in scalars, if it were just something like four times v equals two, the way we would solve it is to multiply each side by one over four, by one over whatever the coefficient is. With matrices we can't quite do that, but we want to find a new matrix, an inverse matrix for A, such that when we multiply this inverse of A times A,
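The "linear combination of the columns" view of A times v is worth seeing concretely. A small NumPy sketch (my own illustrative matrix, not the one on the slide):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
v = np.array([10.0, -1.0])

# A @ v equals v[0] * (first column of A) + v[1] * (second column of A).
combo = v[0] * A[:, 0] + v[1] * A[:, 1]
print(np.allclose(A @ v, combo))   # True
print(A @ v)                        # [ 8. 26. 44.]
```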
we just get v on the left, very much in analogy to solving a normal linear equation. So here what I'm writing is that this A inverse times A gives us the identity matrix. If we could find such a matrix, then solving this system of equations would basically turn into a matrix multiplication: we could just take that matrix A inverse, multiply it by b using the formula we just saw, and then v equals A inverse times b. Okay, so that's a nice idea, and then the question is: when can we find such an inverse matrix, and how do we find it? Well, if A is an n by m matrix, there will be a unique solution of this form if the columns of A comprise a basis for R^n. That comes from the fact we saw earlier: we need the columns of A to span the whole space in order for a combination to be guaranteed to exist, and we need them to be a basis, to be linearly independent, because otherwise there's a degeneracy. So to find a unique solution we need those columns to comprise a basis for R^n, and then the geometric picture we saw guarantees that there will be a unique solution here. We'll look a little more at that in a minute; this is just a restatement of what I said, but it's a really important fact to keep in mind. We can write down formulas for the inverse of A, but they're very cumbersome and not really worth looking at here. In practice we compute inverses, and generally solve matrix equations, using numerical algorithms; maybe you've used these algorithms any time you've solved a matrix equation. It's the kind of thing best left to numerical routines, but it is worth thinking a lot about when we expect there to be an A inverse, and being able to work with the inverse of A symbolically.
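In that numerical-algorithms spirit, here is a sketch of solving Av = b with NumPy (an illustration of the point, not code from the lecture); `np.linalg.solve` is the standard choice because it never forms the inverse explicitly:

```python
import numpy as np

# The columns of A form a basis for R^2, so A v = b has a unique solution.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

v = np.linalg.solve(A, b)            # preferred: solve directly
v_via_inverse = np.linalg.inv(A) @ b # the "v = A^-1 b" route from the slide

print(v)                              # [1. 3.]
print(np.allclose(v, v_via_inverse))  # True
print(np.allclose(A @ v, b))          # True: it really solves the system
```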
Okay, so one other way of looking at matrices is as representing linear transformations from one vector space to another. What's a linear transformation? It's a kind of operation that maps one vector space into another vector space, and it encompasses a lot of the mappings you might dream up: rotating the vectors around the origin, scaling them (that is, stretching them), reflecting them, and compositions of these operations. The formal definition for some function to be a linear transformation is that, whatever this function is, it has to distribute over sums; and when we take the function of a scalar times a vector, the scalar has to come out of the transformation. All of the operations I mentioned satisfy these properties. And it turns out that if A is an n by m matrix, then the multiplication A times x describes a linear transformation from the space R^m to the space R^n. Any linear transformation can be expressed this way, and any matrix can be thought of as corresponding to a linear transformation. Here's just a little picture: we take some vectors and map them to a new space by multiplying by A. If we take the vectors I've shown here and multiply them by the matrix I show above, we get some new vectors; this (1, 1) vector here gets rotated and stretched, (-1, -1) gets rotated and stretched, so we can see there's a composition of these different operations. I just drew four vectors; of course there are infinitely many vectors in this space, and they all get rotated, stretched, and scaled by this operation. So this gives us another meaning for the matrix inverse.
If we take one matrix times another, what we're doing is applying two different linear transformations sequentially: we first rotate and stretch one way, and then rotate and stretch some other way. In particular, if we multiply by the matrix inverse, that inverse is a new transformation that undoes the old transformation. We first took our original vectors and multiplied them by A to map them into this new space; now, if we multiply by A inverse, we map right back to the original vectors. Okay, so here's the inverse matrix: this is two rounds of matrix multiplication that take us from our original space to a new space, and then back to the old vectors we started with. Every linear transformation has a range. We start with some vector space, we apply a linear transformation, and we generate some new vectors. The set of all possible vectors, all the vectors we can generate by applying this linear transformation, is called the range of the transformation. In this picture, we start in R^m and we're mapping into R^n; some subset of vectors, potentially the whole of R^n, but maybe just a subset, is what our initial vector space maps onto, and this is called the range of the transformation. The dimension of this range, which again may be the full space or may be a subspace, is called the rank of the matrix A, or of the associated transformation. Equivalently, that's the dimension of the span of the columns of A: if we look at the columns of A, it's the number of linearly independent columns. A very closely related concept is that if we take the set of all vectors in our original space such that A times x is equal to zero, this is called the null space of A.
So these are all the vectors that get mapped to zero by the linear transformation, and the dimension of this space, which lives in our domain R^m, is the nullity of A. A very important result in linear algebra, which unfortunately we don't have time to prove or really do justice to, is that the rank of A plus the nullity of A is equal to m, the dimension of the space we're mapping from. Closely related to both of these concepts is a characterization of when it's possible to write down an inverse for a matrix A. A is invertible, that is, it has an inverse, if and only if the linear transformation associated with it is one-to-one: no two vectors in our initial space map to the same vector in the range. And because of the rank-plus-nullity statement above, we can equivalently say that a matrix A is invertible if the rank of A is equal to m, or if the nullity of A is equal to zero. One immediate thing we should see is that for any of this to be possible, we need n and m to be the same: if we're mapping from a vector space of one size to a vector space of a different size, we're never going to have this one-to-one correspondence. What that tells us is that only square matrices, matrices that are n by n, can be invertible. Okay, so that's several characterizations of when we expect there to be an inverse, and again we're not really getting into the computation, but that's okay. None of these are really practical criteria that are easy to check; they're nice ideas, but how do we actually check whether a matrix is invertible, or what its rank or nullity is? Well, one way is to use a kind of fundamental summary statistic for matrices called the determinant.
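The rank-plus-nullity relationship is easy to check numerically. A sketch with a deliberately rank-deficient matrix of my own choosing (nullity counted here via near-zero singular values, a standard numerical trick):

```python
import numpy as np

# A 3x3 matrix whose third column is the sum of the first two (rank 2).
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])

rank = np.linalg.matrix_rank(A)
# Nullity from the SVD: count singular values that are numerically zero.
s = np.linalg.svd(A, compute_uv=False)
nullity = int(np.sum(s < 1e-10))

print(rank, nullity)                  # 2 1
print(rank + nullity == A.shape[1])   # True: rank + nullity = m

# x = (1, 1, -1) is in the null space: A maps it to the zero vector.
x = np.array([1.0, 1.0, -1.0])
print(np.allclose(A @ x, 0.0))        # True
```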
The determinant of A is a nice number that shows up everywhere; it describes, in a kind of hand-wavy way, how volumes are scaled under the linear transformation associated with the matrix A. But it has one really important property that relates to our question about inverting matrices. Here I'll just show you a little example. There's an analytical formula for the inverse of a two by two matrix that's actually not too bad to write down (I wouldn't recommend trying to write it down for any bigger matrices, but a two by two we can do), and it's on the right here. You'll see that in this formula, one over the determinant of A appears. This number, the determinant of A, is basically a polynomial in the entries of A; in general it's a complicated polynomial, so again, usually we just use numerical algorithms to calculate it. In any case, it appears in these formulas, and this might lead us to suspect that the inverse of A is not defined when the determinant of A is zero, because then this becomes one over zero. That's just a hunch we might develop, but it turns out to be a generally true fact: the matrix A is invertible if and only if the determinant of A is not equal to zero. And again, there's a nice formula for the determinant of a two by two, but the formula quickly becomes quite complicated as we go to higher dimensions. Okay, so now that we have the determinant in hand, we get to our final topic, which we're going to have to treat a little quickly, my apologies. This is really one of the most important ideas in linear algebra: eigenvalues and eigenvectors, the eigenvalue problem. The question we're trying to answer in the eigenvalue problem is: which non-zero vectors have their orientation unchanged under a linear transformation?
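The two by two inverse formula mentioned above is A^-1 = (1 / det A) [[d, -b], [-c, a]] for A = [[a, b], [c, d]], with det A = ad - bc. A quick numerical check (my own example matrices):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [2.0, 4.0]])
det = np.linalg.det(A)               # ad - bc = 12 - 2 = 10

# The 2x2 formula: swap the diagonal, negate the off-diagonal, divide by det.
a, b, c, d = A.ravel()
A_inv_formula = np.array([[ d, -b],
                          [-c,  a]]) / det
print(np.allclose(A_inv_formula, np.linalg.inv(A)))  # True

# A singular matrix (second row = 2 * first row) has determinant zero,
# so the formula would divide by zero: no inverse exists.
S = np.array([[1.0, 2.0],
              [2.0, 4.0]])
print(np.isclose(np.linalg.det(S), 0.0))             # True
```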
So if I take some vectors on the right here and apply some linear transformation, they get mapped into new vectors. A few very special ones don't change their direction; they are only scaled. This vector that was originally pointing up gets rotated and stretched, but this vector along the (1, 1) line just got stretched, not rotated. The eigenvalue problem asks: which are these special vectors that just get stretched? More symbolically, we have the eigenvalue equation, A times x equals lambda times x, where this lambda is a scalar that we call an eigenvalue, and these x's are unknown vectors. We want to figure out the solutions: the x's and lambdas that make this equation true, that make it so that multiplication by A just returns the original vector, possibly scaled, stretched, but not rotated or reflected or anything like that. These unknown vectors are called eigenvectors, and the scalar lambda is called the eigenvalue. Okay, so let's try to solve this problem. What we might naively do is say: let's get all the stuff on one side of the equation. We take Ax, subtract lambda x, and we get the equation on the second line: Ax minus lambda x equals zero. Then we might say: let's factor out the vector x. To do that, because we're doing matrix multiplication here, we have to be a little careful: when we factor out the x, we need this scalar lambda to still be compatible with addition with the matrix A, so we get lambda times the identity matrix, giving (A minus lambda times the identity matrix) times x. If we distribute the x, we get back the equation above. Okay, so now this is really a matrix equation, sort of like the ones we looked at before.
We have a matrix, A minus lambda times the identity, times a vector x, equal to zero, a vector of zeros here, so it's just some target vector like we talked about before. But it's not quite as simple as before, because we notice that the vector x = 0 is always a trivial solution of this problem. Okay, but we're not interested in that trivial solution; we're interested in non-zero vectors. If this matrix A minus lambda I is full rank, then the discussion we just had tells us that, because zero is a solution, it must be the only solution, since the mapping would be one-to-one. So if we want to find non-trivial solutions, non-zero x's, then we need the matrix A minus lambda I to be singular, non-invertible, because we need the transformation to not be one-to-one. That's actually kind of nice, because it tells us that what we need is for the determinant of A minus lambda I to be equal to zero; that's the criterion we just mentioned for a matrix being non-invertible, or singular. This expression, det(A minus lambda I) = 0, is called the characteristic equation for A. The left-hand side is a polynomial of degree n, which we call the characteristic polynomial; again, there's a formula for this determinant, and it's going to be a big degree-n polynomial in the entries of the matrix inside. Now we can apply some results from algebra: a degree-n polynomial equation admits n solutions, n roots, if we count them with multiplicity, that is, if we potentially count repeated roots. So we have up to n distinct solutions, n distinct values of lambda, and each value of lambda comes with an associated x, an associated eigenvector; together we call these an eigenpair.
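Both routes to the eigenpairs, the characteristic polynomial and a direct numerical solve, can be compared on a small example. A NumPy sketch with a matrix of my own choosing, whose characteristic polynomial is (2 - lambda)^2 - 1 = lambda^2 - 4 lambda + 3:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Direct numerical solution of A x = lambda x.
eigvals, eigvecs = np.linalg.eig(A)   # eigenvectors are the columns
print(np.sort(eigvals))               # eigenvalues 1 and 3

# Each eigenpair satisfies the eigenvalue equation A x = lambda x.
for lam, x in zip(eigvals, eigvecs.T):
    print(np.allclose(A @ x, lam * x))   # True for every pair

# The same eigenvalues are the roots of the characteristic polynomial
# lambda^2 - 4 lambda + 3 = 0 (coefficients 1, -4, 3).
print(np.sort(np.roots([1.0, -4.0, 3.0])))   # roots 1 and 3
```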
But one important caveat is that we have to admit complex eigenvalues and eigenvectors: as soon as we have a degree-n polynomial, even just a quadratic, it might have complex roots, and we have to allow that. Geometrically, we can think of it this way: there's no guarantee that a linear transformation has real eigenvectors. One example: I mentioned that rotation is a linear transformation, so consider taking a vector space and rotating every vector by 10 degrees. If we do that, it's fairly obvious that there's going to be no vector whose direction is unchanged. So in that case there will only be complex eigenvalues and eigenvectors. Now, if our matrix A is itself non-invertible, if A on its own has a rank k less than n, then there are going to be zero eigenvalues. Basically, there are choices of eigenvectors v such that A times v equals zero: if we go back to our characteristic equation, we don't even need the lambda-times-identity term to make the determinant equal to zero; there are choices of eigenvectors that already satisfy that equation. Those are eigenvectors with associated eigenvalues of zero, and the rank of the matrix will be equal to n minus the number of zero eigenvalues. So, on to our final big laundry list of statements, which is worth spending a little bit of time thinking about; but hopefully, at least very quickly, we've discussed the ways that all of these statements are equivalent: A being invertible; the columns of A forming a basis for R^n; the system of linear equations having a unique solution for any b; the columns of A being linearly independent; the determinant of A being non-zero; the rank of A being n; the nullity of A being zero; and A having no zero eigenvalues.
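Both caveats above, complex eigenvalues for rotations and zero eigenvalues for singular matrices, can be seen numerically. A sketch using the 10-degree rotation mentioned in the text and a singular matrix of my own choosing:

```python
import numpy as np

# A 10-degree rotation matrix: no real vector keeps its direction,
# so both eigenvalues are complex (they are exp(+i theta) and exp(-i theta)).
theta = np.deg2rad(10.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
eigvals = np.linalg.eigvals(R)
print(np.iscomplex(eigvals))   # [ True  True]
print(np.allclose(np.sort_complex(eigvals),
                  np.sort_complex([np.exp(1j * theta), np.exp(-1j * theta)])))

# A singular matrix (rank 1) has at least one zero eigenvalue.
S = np.array([[1.0, 2.0],
              [2.0, 4.0]])
print(np.min(np.abs(np.linalg.eigvals(S))) < 1e-12)   # True
```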
So I think we're just about out of time here. Yes? We have five or ten minutes more. Okay. So that's the eigenvalue problem in a nutshell. I mentioned that the eigenvalue problem has potentially n distinct solutions, n distinct pairs of eigenvalues and eigenvectors that satisfy it. We can imagine collecting all of those eigenvectors as the columns of a matrix Q, shown down here, and all of their associated eigenvalues in a diagonal matrix, lambda. This diagonal matrix just has lambda_1 in the (1, 1) entry, lambda_2 in the (2, 2) entry, and so on, with all the off-diagonal elements equal to zero. So we can form these two matrices, and if we do, we have this bigger matrix equation, A times Q equals Q times lambda. Unfortunately I don't think we have time to really walk through the multiplication of these things, but this is a way to write all of the solutions to the eigenvalue problem simultaneously: the formulation we had before, Ax = lambda x, for all n solutions at once. And we can see that if this matrix Q is invertible, if it has an inverse, then we can multiply by that inverse to get the line below: A is equal to Q times this matrix big lambda times Q inverse. Again, this is only possible when Q is invertible, and that's only possible when one of those many characterizations holds true. One thing it means is that we must have n linearly independent eigenvectors. When that holds true, we can write A in this way.
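This packing of all n eigenpair equations into one matrix equation, and the resulting decomposition A = Q lambda Q^-1, can be checked in a few lines; a NumPy sketch with a small symmetric matrix of my own choosing (symmetric, so Q is guaranteed invertible):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, Q = np.linalg.eig(A)   # eigenvectors are the columns of Q
Lam = np.diag(eigvals)          # diagonal matrix of eigenvalues

# A Q = Q Lam packs all n equations A x = lambda x into one...
print(np.allclose(A @ Q, Q @ Lam))                 # True

# ...and when Q is invertible, A = Q Lam Q^-1 reconstructs A exactly.
print(np.allclose(A, Q @ Lam @ np.linalg.inv(Q)))  # True
```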
And what this tells us is that the matrix A is completely specified by its eigenvalues and eigenvectors: it can be written solely in terms of them. So this question we started with, which looks like kind of a funny question, ends up being a complete characterization of a matrix: if we know all of the eigenvalues and eigenvectors, we know the matrix; we know everything about it. But the eigenvalues alone are often highly informative about a matrix. The set of eigenvalues is called the spectrum of A, and in many, many cases, for example in biology, looking at different models and dynamical systems, the eigenvalues alone of A contain a lot of rich information about A. An important example, which I'm sure you will see in these talks if you haven't seen it before, and this is just to give you a little hint of the value of thinking about eigenvalues and eigenvectors, is a matrix difference equation like x_t = A x_{t-1} + b: basically a system where at each new time we get the state of the system by multiplying the old state by a matrix. This kind of system converges to an equilibrium if and only if the maximum absolute value of the eigenvalues of the matrix, which is called the spectral radius, is less than one. Very closely related to this is the matrix differential equation dx/dt = Ax + b, which is again a very general, very useful model; this one converges if and only if all the eigenvalues of A have negative real part. These are cases where just knowing these n numbers is totally sufficient to tell us about the behavior of a system that really has n squared parameters, that's governed by this whole matrix; the eigenvalues are an incredibly rich source of information about it. All right, so I'll stop there. Apologies. I don't know if there's time for any questions. We have like two or three minutes for questions.
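The spectral-radius criterion for the difference equation can be demonstrated by iterating the system directly; a NumPy sketch with an illustrative A whose eigenvalues (0.6 and 0.3) are inside the unit circle:

```python
import numpy as np

def spectral_radius(M):
    """Largest absolute value among the eigenvalues of M."""
    return np.max(np.abs(np.linalg.eigvals(M)))

# x_t = A x_{t-1} + b converges iff the spectral radius of A is < 1.
A = np.array([[0.5, 0.2],
              [0.1, 0.4]])
b = np.array([1.0, 1.0])
print(spectral_radius(A) < 1)   # True, so the iteration should settle down

x = np.zeros(2)
for _ in range(200):
    x = A @ x + b

# At equilibrium x* = A x* + b, i.e. x* = (I - A)^-1 b.
x_star = np.linalg.solve(np.eye(2) - A, b)
print(np.allclose(x, x_star))   # True: the iterates reached the equilibrium
```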
And then we can take a short break before the next lecture. So please, if you have any questions, either post them in the chat or raise your hand. There was a question a little bit back on the eigenvalues, asking if the scaling factor that appeared in the equation was the eigenvalue. Yeah, you can ask the question. In one of your slides you showed that the direction remains the same, but only the scaling of the vector changes, and that's the eigenvalue? Yeah, so in the slide I have here, with these vectors: lambda is the scaling factor, yes. The lambda is the eigenvalue; the lambda is the scaling factor. Yeah. For example, in this picture here, this vector starts as (2, 2) and becomes (3, 3), so one of the eigenvalues of this matrix is three halves: these vectors get scaled by three halves when we multiply by A. Then there's a question by Augusto: how do we determine the eigenvalues of a rectangular system, if it is even doable? That is an excellent question. I actually omitted that: for the interpretation I gave, where a vector has its direction unchanged under multiplication by A, the matrix really must be square, because if the vector on the left has a different size than the vector on the right, it doesn't even make sense to talk about the orientation being unchanged. That said, there is an analog of the idea of an eigenvalue for a rectangular matrix. If we take the matrix A transpose A, a matrix we can think of as kind of like a covariance matrix of A, this is a matrix related to A that will now be square. Being square, it has eigenvalues, and so we can look at the eigenvalues of this matrix related to A, which give what are called the singular values of A.
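The A transpose A construction just mentioned can be checked numerically. One precision worth adding: the singular values are, strictly, the square roots of the eigenvalues of A transpose A. A sketch with a small rectangular matrix of my own choosing:

```python
import numpy as np

# A rectangular (3x2) matrix has no eigenvalues, but A^T A is square (2x2).
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [2.0, 0.0]])

eig_AtA = np.sort(np.linalg.eigvals(A.T @ A).real)   # eigenvalues of A^T A
sing = np.sort(np.linalg.svd(A, compute_uv=False))   # singular values of A

# The singular values of A are the square roots of those eigenvalues.
print(np.allclose(sing, np.sqrt(eig_AtA)))   # True
```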
The theory of the singular values of A is a little bit more complex, and we don't really have time to get into it, but they tell us much of the same information as the eigenvalues would. So if you're interested in learning more about that problem, the thing to look into is the singular values of A, or the singular value decomposition, which is in many ways analogous to the eigenvalue problem. Thank you. So, let's answer the last question, by Basmita, and then I think we can take a short break, stretch legs, get a cup of coffee before the next lecture. Basmita asks: what happens if we get an eigenvalue equal to zero? What will be its geometric representation? I see. Yeah, so an eigenvalue equal to zero corresponds to a direction where multiplication by A just maps us to zero. We can think, in this picture, that if the eigenvalue for, say, this direction here were zero, then when we multiply by A, the vector would just become zero; we would map to the origin. These are directions where multiplication by A just kills the vector that's there, and what this tells us is that x is in the null space of A. I'm not sure if that's clear, but hopefully it gives some sense of the geometry. Okay, great. I think this is a good time to stop. Thanks a lot, Zack, for the very nice tutorial, which again will be available on YouTube for the generations to come. Thanks also for doing this very early in the morning. And of course, what we're going to do now is take a four-minute break. We're going to be randomly assigned to breakout rooms again, so feel free to chat with whoever you're assigned to, or to take a break and stretch your legs, and we'll be back in four minutes. Thank you very much. Thank you, Isaac. Good morning.
Hi, Stefano. Hey, Stefano. Good job. I caught only the last few bits, but thank you for doing that. Yeah, my favorite part. That's great. Just to mention that we are live-streamed on YouTube, so: hello, world. So I think we should be divided into breakout rooms now. Okay, I think we'll be back in a minute or so; it takes a minute for people to leave the breakout rooms. In the meanwhile, I can remind the people watching on YouTube that they can post their questions in the chat and I'll read them. Also, I'll pass the link to the lecture notes of the next lecture, by Stefano Allesina, in the YouTube chat, so you can get the material there as well. Let's wait for everyone to rejoin the Zoom and then we can start. Okay, so I think people are back. Before I introduce the next speaker, the usual info about etiquette: if you want to ask a question, please either post it in the Zoom chat or use the raise-hand tool on Zoom, which you can find under Participants, under the three dots. You can raise your hand, and I think Stefano will stop every now and then to leave time for questions; I'll then give you the chance to talk, or I'll ask the question on your behalf. And I saw in the breakout rooms that there are many people who are alone in a room, waiting for others to join. If you have Zoom version 5 installed, which you can install for free, you have the possibility to change rooms; so if you are alone and there is no one to chat with, you can change room and find someone else to chat with. So, great.
So, after this, it's my great pleasure to introduce the next lecturer, Stefano Allesina. Stefano got his PhD from the University of Parma and, after postdocs at the University of Michigan and NCEAS, moved to Chicago, where he is a professor in the Department of Ecology and Evolution. His most recent research has been focused on the theoretical understanding of large ecological communities, and he will give three lectures on the theory of ecological assembly. So thank you very much, Stefano, for being with us. Thank you. Yeah, good morning, good afternoon, good evening, everybody. My name is Stefano, and I'm broadcasting from the beautiful campus of the University of Chicago, where the weather is nice but a bit cold. And yes, I have three lectures on a sort of theory of the assembly of ecological communities. The three lectures unfortunately are not exactly the same length, so don't worry if we cannot go through the whole first lecture; the other ones are a little shorter, so we'll take it as it comes. I have very extensive lecture notes that I'm posting on GitHub; I actually put the link to the lecture notes and the GitHub repository in the chat. The idea is basically that you can build all these lectures on your own computer, and in fact you can play with all the code that is provided. To do that you need to install R, and then the packages that are needed for the calculations. In terms of questions, like Jacob was discussing before, I would like to divide them into two categories. The first category is questions that are really needed for understanding what's going on: if Jacob, or whoever is manning the chat, sees anything that suggests people are not following, then just interrupt me. In fact, somebody is already saying they cannot download the lecture notes. So if you click on this link that is provided, and then you go here to Code,
And then you say Download ZIP, and you can download the zip file with all the lecture notes. And to see the lecture notes you can just put this address that I put in the chat in your browser, and you should be able to follow the lectures. I follow these lecture notes religiously, because I wrote them especially for this series of classes, and so you should be able to follow precisely what I'm doing. Then there are other types of questions, which are curiosities or extensions or things like that. Those I will try to keep for maybe the last five minutes or so. All right; that said, we can get started. I'll just give a very brief, very partial overview of the history of ecological assembly. And what is ecological assembly? Simply the process by which ecological communities are built. Imagine that there's a process by which species enter a certain system, increasing the richness, the number of species in the system. And then there's the opposite process, which is extinctions, by which some species disappear from the system; so the interplay between these kinds of colonization, invasion, or immigration events and extinctions creates a sort of balance by which we build these ecological communities. The typical setting actually is something like this: we have some sort of mainland, or metacommunity; imagine a bunch of species, and here each species is a different symbol. And every so often we have an immigration event, the idea being that a certain species, say this diamond species, enters the system here in this island, or local habitat, or however you want to call it. And now these two species, for example, can start interacting, and what is the outcome of this interaction? Maybe these two species will coexist in the local habitat slash island, or maybe one of the two will go extinct, or maybe both of them will go extinct; so we have different outcomes.
And then after a while, maybe this other species, the triangle, comes into the system, and then again we restart the dynamics. So this is basically the idea of ecological assembly. And this island-mainland type of model has been contrasted with other models that are based on species traits. So now we're thinking: there is a certain environment, a certain condition, you can imagine temperature, and maybe some species will thrive at this temperature because of their traits, and some others will not be able to grow. And as such, what the environment does is impose a sort of selection on the traits of these species; these models are more focused on the traits of the species rather than their identity. And this process by which the environment selects for certain species and not others is called environmental filtering. And so you can read a lot about environmental filtering, and it typically leads to the fact that we're selecting for certain traits, so the traits will cluster together, something that we call trait underdispersion. Now, of course, species can interact with each other. And imagine that we have a community of competitors; these competitors are competing for shared resources, and as such they cannot really coexist if they're too similar in their use of the resources. And this would lead to a separation of the traits of the species, the opposite of the convergence that we were seeing before. And this is called overdispersion. And again, this is a view of assembly based on traits rather than on species identity. These themes went on for basically forever, as long as the discipline of ecology.
And in fact I put here a book that is like 25 years of discussions on assembly, and reading this book I was really taken by the introduction; I actually suggest reading it, maybe without too many prejudices, but it is a very forceful introduction, and somewhat bizarre. And in fact immediately after, I found this review by Gotelli (1999) that picks up on this introduction and says that the language of the introduction is an embarrassment to the discipline, which also tells you that not only is this a very old problem, that of ecological assembly, but it's in fact one of the most heatedly debated problems in ecology. So let's go back in history. As I was saying, this is very old history; we can find examples in 1899, when Henry Cowles, who actually was at the University of Chicago, developed the theory that we call the theory of succession by looking at the Indiana dunes, which are basically half an hour south of my office, on the shore of Lake Michigan. So you have these sand dunes that, you can imagine, start building up. And what happens is that some species land on these sandy dunes and are able to colonize them; we call these the pioneer species. And by doing this they stabilize the dune somewhat, and they allow for other species to come in, and so we have this kind of sequence of species coming in and replacing, in fact, each other to some extent. And what is very interesting is that, because there's so much wind on Lake Michigan, every so often these dunes are blown out, and so the process restarts. And you can basically date the dunes, how long a dune has been developing, by looking at the vegetation; so what you can really do is some sort of substitution of space for time: by looking at different places I can see dunes of different ages.
And as such I can basically reconstruct the succession of vegetation along these dunes, as Cowles did. And in fact other people, like Clements, had this very strong view that this was somewhat of a regular, ordered, deterministic process: that there is a sequence that each dune follows, pretty much with some fidelity, the same sequence of vegetation. This is very much in contrast with, for example, the view of Gleason, who in the 1920s started advocating for a much greater role of chance. So a lot of the debates, as we will see (Democritus, the great philosopher, would be pleased), are about chance versus necessity; a lot of this hinges on this very basic philosophical point. Another case of something that created a huge debate in the literature is the work of Jared Diamond, who was studying bird assemblages on islands off the coast of New Guinea. And what he noticed is that certain combinations of species were never observed; he called these "forbidden combinations", and the idea was that these were prevented from happening by competition, for example. And the idea there is: now if I look at all the islands, and I look at all the possible patterns, and I see which ones are forbidden, maybe I can learn the rules of the game of assembly by reconstructing it in this inverse way, looking at all these combinations and then finding what's going on in the background. And this work, needless to say, caused an intensive debate, what is called the null model wars, which occupied ecologists for a lot of the 1980s and 1990s. And again: are these patterns that we are observing the outcome of some deterministic, necessary process, or rather do they arise by chance?
And this actually led to a fantastic development of null models: what should we expect when we have a certain number of species interacting? With which probability are we going to observe a particular sequence of presences and absences? This work was spearheaded by Simberloff, Connor, Gotelli, and others; there are very, very good reviews, and I'm pointing here to a book on null models in ecology. Again, chance versus necessity. If we look at the early 2000s, there is the work of Steve Hubbell on neutral theory: what should we expect if species were simply not to interact, or were somewhat equivalent? In that case, what we observe in terms of patterns, for example species-abundance distributions, would be basically driven by stochastic fluctuations; species would take a random walk in this space, and then we would end up with some sort of statistics. And the claim is that the statistics that we can observe and measure in nature are somewhat compatible with this assumption of neutrality. And again, this spawned an immense debate pitting neutrality against, again, necessity, represented here by niche dynamics, meaning that these patterns are driven by species interactions, like competition, or the avoidance of competition. Finally, as you can see, here you have readings for several years of studies. The idea that kind of came out of this assembly work is the idea of community phylogenetics. We know that we can reconstruct, to some extent, the evolutionary history of species, and in that evolutionary history there's a lot of information on what happens in terms of traits and resource utilization and whatnot. As such, what you would think is that you can relate phylogenetic data, for example phylogenetic trees, with community ecology; this idea of community phylogenetics is exactly centered on this. So this is community ecology from the late 19th century up to today.
What I'm going to do today is actually take some sort of an older view; these are models that were developed when I was in high school, or early in my college years. And so we will examine a basic model of ecological assembly, and we will try to ask some intelligent questions for which we can get an answer when we make a certain number of assumptions. For our exploration, just to keep things simple, what I would like to use is arguably the simplest model for population dynamics, which is the generalized Lotka-Volterra model. Maybe you've seen it in a different form; let me just write it in component form first. This is one of the six equations that we teach to all the ecologists in the world. So we're following the density of population i at time t, and we're following this population in time, so we write a differential equation, dx_i/dt. And typically what we write is x_i, so the population density itself, times r_i; r_i is the intrinsic growth rate. You can think of this growth rate as the growth rate that this population would have when grown alone at very, very low density. So it can serve two purposes: a growth rate if it is positive, or a death rate if it is negative; if I put lions in a nice field with no animals to eat, they will die. And then we have the interaction part: these species interact, to some extent, with all the other species in the community, so we take the sum over j of a_ij x_j. So in component form: dx_i(t)/dt = x_i(t) (r_i + sum_j a_ij x_j(t)). I rather prefer writing this in matrix form, dx(t)/dt = D(x(t)) (r + A x(t)), where D(x), with big D, means a diagonal matrix with x on the diagonal; r is now a vector of intrinsic growth rates, and A is a matrix of interactions.
Okay, and we assume there are n species, and I just labeled the species from 1 to n, in whatever order you prefer. For a single species — and maybe you've seen this when you've done the tutorial on nonlinear dynamics — this model can only have very, very few outcomes, which are not especially interesting: a species by itself can either grow exponentially, or go extinct, or it can reach some sort of an equilibrium, some level at which it stops growing or shrinking. When we include two or more species, we can have cycles, neutral cycles, or even limit cycles, which can be stable, meaning attracting different trajectories to the same limit cycle. With more species we can even find chaos, chaotic dynamics. So I have code in the repository that you can download to integrate the dynamics of this model, and here I'm just loading a particular example that I found just by searching, in which we have a certain matrix of interactions A and certain growth rates r such that, when we integrate the dynamics, what you see here is that these species keep cycling forever; this is a good example of a limit cycle. The species will never stop the dynamics; they will oscillate up and down in this fashion forever. Okay, so this is one of the cases of a limit cycle; you can find similar cases with chaotic dynamics, which you can think of as cycles that, however, never close on themselves: they are non-periodic. So one thing that we can do in this model, which is in fact the simplest thing that we can do, is to look for fixed points: are there particular densities of the species, imagine a vector of densities, such that the dynamics at that point are fixed, they stop moving?
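As an aside, integrating a generalized Lotka-Volterra system like the one just described takes only a few lines of code. The lecture's repository provides R code for this; below is an equivalent, minimal Python sketch with made-up parameters (a classic two-species predator-prey pair, not the four-species limit-cycle example from the lecture), using a hand-rolled fixed-step RK4 integrator so it needs only NumPy:

```python
import numpy as np

def glv_rhs(x, r, A):
    """Generalized Lotka-Volterra right-hand side: dx/dt = D(x) (r + A x)."""
    return x * (r + A @ x)

def integrate_glv(x0, r, A, dt=0.01, steps=20000):
    """Integrate the GLV dynamics with a fixed-step RK4 scheme."""
    xs = np.empty((steps + 1, len(x0)))
    xs[0] = x0
    x = np.array(x0, dtype=float)
    for i in range(steps):
        k1 = glv_rhs(x, r, A)
        k2 = glv_rhs(x + 0.5 * dt * k1, r, A)
        k3 = glv_rhs(x + 0.5 * dt * k2, r, A)
        k4 = glv_rhs(x + dt * k3, r, A)
        x = x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        xs[i + 1] = x
    return xs

# Illustrative (assumed) parameters: a predator-prey pair, which cycles forever
r = np.array([1.0, -1.0])
A = np.array([[0.0, -0.5],
              [0.5,  0.0]])
traj = integrate_glv([2.5, 2.0], r, A)  # densities oscillate, never settling
```

Plotting `traj` against time would show the same kind of never-ending oscillations as in the lecture's figure, though here they are neutral cycles rather than a limit cycle.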
Because of the form of the equations — let me put an annotation here — our equation is dx/dt = D(x)(r + Ax), and this has to be zero for the dynamics to have stopped. And so you can see that there are basically two cases: either the D(x) part is zero, because when we multiply something by zero we get zero, or the part within the parentheses is zero, which is more interesting, because in this case the species might be present at positive density. And so then let's try to find the solution; the solution exists when the matrix A is not singular, and I've seen that Zach just told you a bit of linear algebra, so now you know what I'm talking about. So basically we want to solve r + Ax = 0, which we can write as Ax = -r. And what we would like to do is somewhat divide by this matrix; of course you cannot divide by a matrix, but what you can do is multiply both sides by the inverse of the matrix. And then you find the solution, x* = -A^{-1} r, as long as the matrix is not singular, meaning there is no zero eigenvalue. Now, we get a solution, but it could be a solution that contains some negative numbers, and even as a theoretician, I've never seen minus three turtles or minus seven elephants; so that would not really be a feasible solution for our system, because species have to have positive, or at least non-negative, density. But if such a point exists, if we have a solution to the system with all positive components, we call it a feasible equilibrium for the system, and it is unique, because we're solving a system of linear equations that has a single solution as long as A is not singular. Right, and you have code in R to do this, to solve the system: if I want to solve Ax = b, I can call solve(A, b). And so we just do that with A, and our b is going to be -r.
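The same computation can be sketched in Python (the lecture's code is in R; the parameters here are made up for illustration, a simple predator-prey pair):

```python
import numpy as np

# Illustrative (assumed) parameters: growth rates and interaction matrix
r = np.array([1.0, -1.0])
A = np.array([[0.0, -0.5],
              [0.5,  0.0]])

# Fixed point with all species present: solve A x = -r, i.e. x* = -A^{-1} r.
# This works as long as A is not singular.
x_star = np.linalg.solve(A, -r)

# The equilibrium is feasible only if every component is positive:
# "I've never seen minus three turtles."
feasible = bool(np.all(x_star > 0))
print(x_star, feasible)
```

For these parameters the unique equilibrium comes out as (2, 2), and since both components are positive it is feasible.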
And so if we do this for the system above, the one with the limit cycle that we had, what we find is that there is a positive, feasible equilibrium; these are the numbers here. And in fact, if you go back to the plot, these horizontal lines that are dashed here, this is exactly the solution, and you can see that, for example, this purple species oscillates up and down and the equilibrium is contained in the range of this oscillation, and the same is true for green, the same is true for red, the same is true for blue. Okay, that kind of gives us some idea of what's going on. In fact, you can even do a little more; I didn't include the theorem or the proof here, but you can find it in the book that is referenced here. You can show that to have coexistence of a certain number of species in generalized Lotka-Volterra, you have to have a feasible equilibrium in the interior: you have to have a point where all the densities are positive. Now, this equilibrium might be stable or unstable. And of course, to have, for example, limit cycles, or to have chaos, you need an unstable equilibrium; while if you want all the dynamics, all the trajectories, to converge to this point, this point has to be stable. So next, what should we do? We should think about stability. Okay, have you covered local stability analysis? That's my question for Jacob, or for whoever can answer: is it in the notes, or should I go over it? — We have, I mean, it was discussed in one of the tutorials, but I think it's always good to very quickly remind the audience. — Right. So, just as a reminder, because you already should know about this, but maybe you forgot: we have a fixed point, some point at which the dynamics stop.
And what we want to do is to ask: are trajectories that start very, very close to this point going to converge to it, or rather are they going to spiral away? If we want to have coexistence at an equilibrium point, it had better be attractive: if you perturb the densities of the species a little bit, nothing much should happen. And the way you do this type of analysis is to take the system and Taylor expand around the equilibrium. So imagine that I want to track in time the perturbation, delta x — let me just put an annotation — where x is my equilibrium plus or minus some very small, in fact infinitesimally small, quantity. And then if I Taylor expand this, what do I get? I should first put my equations: this would be the generalized Lotka-Volterra equations evaluated at the equilibrium, but this is zero. And then we get some sort of Jacobian matrix, evaluated at the equilibrium point, times my perturbation, plus some higher-order terms. And let's say that we don't care about the higher-order terms, because around an equilibrium, if the perturbations are small, everything looks like a linear system. And so then you can see that the first term is zero, the higher-order terms we remove, and we end up with just this Jacobian matrix. And this is a linear system of differential equations, which we can solve. And what you can show is that this Jacobian matrix at equilibrium has to have all eigenvalues with negative real part for the equilibrium to be attractive; that's the idea of local asymptotic stability. If that's the case, provided that we start arbitrarily close to the equilibrium, eventually the trajectories will either converge to the equilibrium or at least not move away.
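For reference, the Taylor-expansion argument just described in words can be written compactly; here is a sketch of the standard derivation, in the notation used above:

```latex
% Perturbation around an equilibrium x^*, where f(x^*) = 0:
\frac{d\,\Delta x}{dt} \;=\; f(x^* + \Delta x)
  \;=\; \underbrace{f(x^*)}_{=\,0} \;+\; J\big|_{x^*}\,\Delta x \;+\; O(\|\Delta x\|^2)
% Dropping the higher-order terms leaves the linear system
\frac{d\,\Delta x}{dt} \;\approx\; J\big|_{x^*}\,\Delta x ,
% whose solutions all decay to zero exactly when every eigenvalue of
% J|_{x^*} has negative real part (local asymptotic stability).
```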
So we're left with writing this Jacobian matrix. Call f_i the Lotka-Volterra equation for species i; what we need to do is just take the partial derivatives with respect to every other species — that's the definition of a Jacobian. And if you do that, you find that for the off-diagonal term ij of the Jacobian we just get a_ij x_i. And then we have a slightly different formulation for the diagonal elements of the Jacobian, because now we're differentiating with respect to x_i itself: we find the growth rate for species i, plus the interactions with all the other species j, plus another term that comes from the fact that this is really a quadratic equation in x_i — because we have the diagonal element multiplied by x_i inside the parentheses, and we have the x_i outside — and this quadratic term leads to the extra term a_ii x_i. However, at equilibrium, the first part is exactly what's inside the parentheses when we write our system, and we said this is zero. So that whole thing vanishes, and we end up with something very simple: the element ij is a_ij x_i, and the diagonal element is a_ii x_i. Which means that we just take the matrix of interactions A, and we multiply each row of this matrix by the corresponding equilibrium density when we evaluate the Jacobian at the equilibrium — which simply means that we substitute for every x_j and x_i their equilibrium values. So this Jacobian is especially simple for the Lotka-Volterra model: it is D(x*) A. And we know that this Jacobian evaluated at the equilibrium, which in ecology is called the community matrix, has to have eigenvalues with negative real part. So now I have a question for you, which is: what do you think will happen to the Jacobian in the case of the limit cycle we've seen before?
Right, we said there has to be an equilibrium point, and we actually found that there is a feasible equilibrium point; it was sort of in the middle of all these oscillations. So if it were really stable, then as long as the cycle gets close enough to the equilibrium, the trajectory would converge, and we would have fixed densities from then on. That is not what happens, which means that this equilibrium should be unstable. All right. And in fact, we can do the same thing mathematically: we take x*, my equilibrium, to be the solution of the system with A and r, and then we create the community matrix M by multiplying a diagonal matrix of x* times A — this is the way R does matrix multiplication — and then we can look at all the eigenvalues of this matrix and see whether all of them have negative real part. Now, in ecology we are mostly concerned with matrices that contain real numbers, where all the coefficients are real. Such a matrix has basically two types of eigenvalues: these eigenvalues can either be real numbers themselves — imagine numbers on the typical number line that you think about — or they can be complex numbers, but when they are complex numbers they have to come in pairs. What I mean is that you can draw these eigenvalues in the complex plane: imagine that this axis is the real part of the eigenvalues and this one is the imaginary part, and so every complex number is mapped into this plane.
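In code, the construction just described — build the community matrix D(x*)A and inspect its eigenvalues — can be sketched in a few lines of Python. The parameters below are a made-up two-species competition matrix (not the lecture's four-species limit-cycle example), so here the feasible equilibrium happens to come out locally stable:

```python
import numpy as np

# Made-up competitive interactions with self-regulation on the diagonal
r = np.array([1.0, 1.0])
A = np.array([[-1.0, -0.5],
              [ 0.5, -1.0]])

x_star = np.linalg.solve(A, -r)          # feasible equilibrium x* = -A^{-1} r
M = np.diag(x_star) @ A                  # community matrix: Jacobian at x* is D(x*) A
eigenvalues = np.linalg.eigvals(M)

# Locally stable iff every eigenvalue has negative real part
stable = bool(np.all(eigenvalues.real < 0))
print(x_star, eigenvalues, stable)
```

Running the same check on the lecture's limit-cycle matrix would instead turn up eigenvalues with positive real part, flagging the equilibrium as unstable.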
And so my eigenvalues can either be real numbers, which would appear here on this line, or, if they're complex numbers and I have a number here, I also have to have its conjugate here: same real part, imaginary parts of opposite sign. So let's look at these; we have only four eigenvalues here. This one, for example — you can see that there's no imaginary part, the imaginary part is zero — would be a negative real eigenvalue sitting somewhere here. Then we have two paired complex eigenvalues that have positive real part, here 0.37, and imaginary parts that are conjugate, 2.06 and -2.06. And finally we have another real eigenvalue here, -0.61. Are all the real parts of these eigenvalues negative? No. Therefore this equilibrium that we had for the limit cycle is locally unstable: if I start even arbitrarily close to it, the trajectory will not go back; in fact, what will happen is that it will move away from the equilibrium, at least initially, with a speed that depends on these eigenvalues, in the direction given by the eigenvector corresponding to the eigenvalue with the largest real part — or, in the case of complex eigenvalues, there would be two eigenvectors that determine oscillations away from the equilibrium. All right, so this is how we would go about doing local stability analysis, and we will see in a second a way to do something even stronger. But before we go there, I just wanted to show that this equilibrium is in fact very important in Lotka-Volterra, much more so than in other models. Because when the dynamics are not fixed-point dynamics — imagine that, instead of all these trajectories of the densities of the species converging in time to a point, they oscillate or have some chaotic dynamics — the equilibrium, even though it has to be unstable, and we just saw an example of that.
It still contains a lot of information about the system; in fact, it gives us the average density of the species in time. And to see this, what you can do is just take the time average of the density of each species, which we can write as an integral. And to make things much simpler, what we're going to assume is that we're on some sort of cycle, and that we choose time zero to be when we start the cycle and time T to be when we end the cycle, such that after a certain time, big T, we are exactly in the same place where we started; this will make things a lot easier. All right, and of course we're interested in cases in which x(t) is always positive: species don't go extinct on the cycle, because otherwise they would not be part of the cycle. And as such, we can assume that all these numbers are positive. And if all these numbers are positive, we can take the log of these numbers, and in fact we can write an equation for the logarithm of x_i(t) in time. And the way you do this: d(log x)/dt is simply (dx/dt)/x. So it's very easy; we can divide each equation by x_i(t). And what you can see is that this just gets rid of the factor that we had in front of our equation. A slightly more compact way to write this type of equation is to say x-dot, that is dx/dt, is D(x)(r + Ax). And so now we take the log of x in time: d(log x)/dt is x-dot divided by x, and as such we get rid of this D(x), so we get r + Ax. Okay, so that's very convenient; we will use this trick again later. All right, so that's our equation in vector form. Now what we can do is integrate both sides, and basically take the time average of both sides.
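Written out, the averaging computation about to be carried through goes like this; a sketch using the notation above, under the assumption of a cycle with x(0) = x(T):

```latex
% Divide each equation by x_i(t) > 0:
\frac{d}{dt}\log x \;=\; \frac{\dot x}{x} \;=\; r + A\,x
% Average over one period T of the cycle, using x(0) = x(T):
\frac{1}{T}\int_0^T \frac{d}{dt}\log x \,dt
  \;=\; \frac{\log x(T) - \log x(0)}{T} \;=\; 0
  \;=\; r + A \underbrace{\frac{1}{T}\int_0^T x(t)\,dt}_{\langle x\rangle}
% Hence, for nonsingular A:
\langle x\rangle \;=\; -A^{-1} r \;=\; x^*
```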
And so we just formally write this: here we have d(log x)/dt, integrated and divided by T. And here the equation becomes very simple, because we're just going to have the logarithm of the species densities at time big T minus the logarithm at time zero. But because we made the assumption that we start and end at the same point, these two numbers are the same, so the left-hand side of this equation is zero: this trick was useful. And now we have to integrate the right-hand side, and you can see that we're integrating r, which is a vector of constants, so we can take that out; we can take out also the matrix A, because that also does not change in time. And so we end up with the right-hand side being r plus A times — wherever we had the species, now we have the integral, the average of the species in time. And the way to solve this is to multiply both sides by the inverse of A, in fact minus the inverse of A. So we have that -A^{-1} r is exactly the equilibrium x*. And therefore, writing r = -A x* and substituting, the matrix A is cancelled by its inverse, and you recover exactly what you expect: the equilibrium is the time average of the species densities. So that's why this equilibrium is so important for Lotka-Volterra; not all models will give you this beautiful result. In Lotka-Volterra you can show that this holds also for chaotic dynamics; of course, taking the average of something that is non-periodic is a little more difficult, but you can think of some sort of long-term average, and the average will converge to these numbers. And now let's take the dynamics of the limit cycle that we had before, and discard the first part of the dynamics.
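This average-equals-equilibrium result is easy to check numerically. Here is a minimal Python sketch with an illustrative predator-prey GLV (parameters assumed, not from the lecture's repository), discarding a transient and comparing the time average to x* = -A^{-1} r:

```python
import numpy as np

def glv_rhs(x, r, A):
    """GLV right-hand side: dx/dt = D(x) (r + A x)."""
    return x * (r + A @ x)

# Illustrative predator-prey pair: cycles around its equilibrium forever
r = np.array([1.0, -1.0])
A = np.array([[0.0, -0.5],
              [0.5,  0.0]])

# Fixed-step RK4 integration
dt, steps = 0.01, 40000
x = np.array([2.5, 2.0])
traj = [x.copy()]
for _ in range(steps):
    k1 = glv_rhs(x, r, A)
    k2 = glv_rhs(x + 0.5 * dt * k1, r, A)
    k3 = glv_rhs(x + 0.5 * dt * k2, r, A)
    k4 = glv_rhs(x + dt * k3, r, A)
    x = x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    traj.append(x.copy())
traj = np.array(traj)

# Discard the first half as transient, average each species in time,
# and compare with the equilibrium x* = -A^{-1} r
time_average = traj[steps // 2:].mean(axis=0)
x_star = np.linalg.solve(A, -r)
print(time_average, x_star)  # the two should be very close
```

The small remaining discrepancy comes from averaging over a window that is not an exact number of periods; a longer integration shrinks it further.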
Let me just show you the figure again so that you can see why we are doing that. You can see that we have some initial dynamics that are a little different from the rest, and then it settles into something that looks a little more regular: the cycle repeats over and over. This first part we call the transient dynamics; it's the way we approach some sort of attractor, which could be a point, in which case we would have an equilibrium; it could be a limit cycle, in which case we have something like what's on the board; or some chaotic attractor. But we always have this initial transient. So let's discard that, and then take the average of each species in time for the remaining part of the time series. And if you do that, even though we only ran this model for a certain amount of time, you can see that when we take the dynamics, remove the initial transient dynamics, group by species, and compute the mean density, what we get when we print it is the set of numbers you see here, which is very, very close to the equilibrium that we had computed. So if you were to do this for a longer time, it would converge exactly to these numbers. Maybe this is a good time to ask Jacob if there are any questions on this first basic mathematics of Lotka-Volterra. — Yes, there is a question actually about the stability criteria, asking whether there are stability criteria for community matrices like the one you presented, so I guess where you have the matrix multiplied by the density. — Yes, there are things that you can say; actually, this is a good segue into the next topic. For example, imagine that we have some matrix A, and what we can ask is whether this matrix A has only eigenvalues with negative real part.
And let's say, for example, that it does. This means that if I were to choose certain densities — imagine that I choose my D(x*) to be the identity matrix, that is, my equilibrium has all the species at one — then I multiply the identity matrix by the matrix A and obtain the same matrix, and therefore this equilibrium would be stable for this choice, if I could choose growth rates such that all the species are at one at equilibrium. Of course, what happens is that, depending on the x*, I might or might not have stability: the stability does not only depend on the matrix of interactions; it depends on the equilibrium, and therefore implicitly it depends on the growth rates of the system. Is this always the case? Not quite. For example, suppose my matrix A has the property that A plus the transpose of A — imagine that we take a matrix and sum its transpose; what we obtain is a symmetric matrix, and a symmetric matrix has only real eigenvalues, because we don't have complex numbers anymore — now, if this matrix, which is basically the symmetric part of A, has only negative eigenvalues, then for any choice of x* that is positive, which is the only choice we're interested in, the equilibrium will be stable. Okay, so in those cases, even though we can choose the r's however we want, as long as they yield a positive equilibrium, the equilibrium is going to be stable. So this is a much stronger form of stability, and in fact this is exactly the condition for having only equilibrium dynamics. In fact, that's the property we are going to use right now to prove global stability. Any other questions? — There was a question actually on the introductory part: you mentioned environmental filtering and trait underdispersion.
And the question is whether these terms are related to natural selection. That's a very, very good question. I guess the answer is yes. What you have in an evolving population is that individuals try to adapt to local conditions, by selection acting on the variation in the population: maybe there are some individuals with a certain genotype or phenotype such that they are more likely to grow in a given environment. So it's basically the same mechanism that would lead to that. Of course, the timescales would be quite different. If I put a lion here on campus, unless it eats the students, it will not have time to really evolve a new diet based on something that is here, I don't know, squirrels. It would just go extinct. But it's exactly the same principle, at least. And then I see a question that asks whether there is a way to determine whether there is a limit cycle based on eigenvalues, and Simon Levin correctly says no. Unfortunately, I would add, no. It would be much easier if that were the case. To prove a limit cycle, what you need to do is something a little more complicated, which is basically to draw some sort of box around this point and then prove that the trajectories cannot cross this box, that in fact they are reflected back. That is the way to show that the dynamics will be contained in a certain region. Then you can draw another little box around the equilibrium and show that the dynamics always go out of this box. Now the dynamics are squeezed between the two boxes. That's the basic idea of how you would go about showing these different types of dynamics. Now let's concentrate for a moment on the simplest type of dynamics, which is an equilibrium point.
We have trajectories, and they do whatever they have to do, they oscillate up and down and whatnot, and eventually they converge to some sort of equilibrium. In fact, this is connected with the question that we had before. What we're going to assume is that we can find a certain set of positive numbers that we put on the diagonal of a matrix, call this diagonal matrix C, such that C A + A^T C has only negative eigenvalues. This is again a symmetric matrix, so it has only real eigenvalues; imagine that all of these eigenvalues are negative. Then our equilibrium point, if it exists and is feasible, is automatically stable, and in fact it is globally stable, meaning that we can start the dynamics from any positive densities for all the species and the dynamics will eventually reach this point. And, as I was saying, this is a very, very strong notion of stability, because it says that if A has this property, then any multiplication on the left by a diagonal matrix with positive entries, that is, any choice of equilibrium, will not change the stability. This is called D-stability. So, how do we show that all the trajectories go to a point? That is an interesting and difficult question, and in fact there are very many methods, to tell you the truth, because the problem is that we cannot really integrate these dynamics analytically; we cannot write the solution for a system of this many equations. It's not that I can write the solution x(t) for any t, take t very large, and show that it converges to the equilibrium, because I cannot write the solution. So what we can do is use what is called a Lyapunov function. The idea of a Lyapunov function is fairly simple: we have a system that we cannot write the solution for.
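Here is a small numerical sketch of the condition (the matrix and the choice of C are made up for illustration). The point of allowing a general diagonal C is that it certifies stability even when the plain symmetric part A + A^T fails the test.

```python
import numpy as np

# Illustrative matrix: A + A^T is NOT negative definite (so C = I fails)...
A = np.array([[-1.0,  0.0],
              [-5.0, -1.0]])
eigs_identity = np.linalg.eigvalsh(A + A.T)
print(eigs_identity)       # approximately [-7, 3]: one positive eigenvalue

# ...but a diagonal C with positive entries certifies global stability
C = np.diag([10.0, 1.0])
eigs_weighted = np.linalg.eigvalsh(C @ A + A.T @ C)
print(eigs_weighted)       # both negative: C A + A^T C is negative definite
```

So this A is diagonally stable even though the naive C = I test rejects it; finding such a C is what the Lyapunov construction below exploits.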
What we're going to do is write some sort of summary statistic of all these species. For example, imagine I take the sum of all the species, or the variance of the species densities. So suppose I can write a function V of the dynamics, V(x(t)); this is my summary statistic. If it has the properties that V is always greater than or equal to zero, that V(x*) at the equilibrium is exactly zero, and that the derivative of V(x(t)) with respect to time is less than or equal to zero, let's say strictly less than zero away from equilibrium, then what happens? We have a function that is always positive when we're not at equilibrium, it always decreases in time, and it is zero at the equilibrium. That means that if we wait long enough, the dynamics will reach the equilibrium point. So this is the basic idea behind a Lyapunov function. Now, how do we write a Lyapunov function for the generalized Lotka-Volterra model? One way is to write it in component form: V is the sum over all species i of c_i (x_i - x*_i - x*_i log(x_i / x*_i)), where c_i is a positive constant that we want to determine, x_i is the density of species i, and x*_i is its density at equilibrium. You can see that each term in parentheses is always positive and is zero only at equilibrium. That's good, that's nice. Therefore, if we choose positive constants c_i, the whole function will be positive, because we are multiplying positive terms by positive constants and summing them. So we are guaranteed that this is a positive function. So how do you come up with this function? That's an interesting question. I like this book by Steven Strogatz.
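The candidate Lyapunov function is short enough to write down and check directly. A minimal sketch, with illustrative values of c and x* (not taken from the lecture), verifying the two positivity properties:

```python
import numpy as np

def V(x, x_star, c):
    """Goh/Volterra Lyapunov function for the generalized Lotka-Volterra model:
    V = sum_i c_i * (x_i - x*_i - x*_i * log(x_i / x*_i))."""
    return np.sum(c * (x - x_star - x_star * np.log(x / x_star)))

c = np.array([1.0, 1.0])          # positive constants (to be determined in general)
x_star = np.array([0.5, 0.8])     # an assumed positive equilibrium

v_at_eq = V(x_star, x_star, c)            # exactly 0 at equilibrium
v_away = V(np.array([1.0, 0.2]), x_star, c)  # positive away from it
print(v_at_eq, v_away)
```

Each summand x - x* - x* log(x/x*) is a convex function of x with its minimum, zero, at x = x*, which is why the sum with positive weights is zero only at the equilibrium.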
It has this quote saying that divine inspiration is usually required to be able to write this type of function. This one actually goes back to the work of Volterra; in fact, Goh has a wonderful paper on this from 1977. I can maybe put it in the chat or add it to the next lecture for you. So now what we're left with is to determine whether the derivative of V with respect to time is in fact negative, and for doing this I think it's convenient to work in matrix form. We can write dV/dt in matrix form as follows: there is a sum, so we put a 1^T in front, then the diagonal matrix C. Differentiating the first term, x_i, gives ẋ; the middle term is a constant, so it contributes zero; and the last term gives minus D(x*) times ẋ divided componentwise by x, that is, minus D(x*) D(x)^-1 ẋ. So dV/dt = 1^T C (ẋ - D(x*) D(x)^-1 ẋ). Okay. Now we need a little trick to get rid of the parameter r. When I write my equation, I say ẋ = D(x)(r + A x). But at equilibrium I have r + A x* = 0, so r = -A x*. Substituting, I can write the equation as ẋ = D(x)(A x - A x*): I get A x from the original term and, changing the sign, minus A x*. Even more compactly, ẋ = D(x) A Δx, where Δx = x - x* is the deviation from the equilibrium. With this at hand, we can go on with our derivation, and then I'm going to stop for the day and take questions. Great, let me just finish this, one second. Substituting into dV/dt, the first term gives 1^T C D(x) A Δx, and the second term gives minus 1^T C D(x*) A Δx, because D(x)^-1 cancels against D(x).
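The substitution r = -A x* is easy to sanity-check numerically. A small sketch with made-up parameters, confirming that D(x)(r + A x) and D(x) A Δx are the same vector field:

```python
import numpy as np

# Illustrative parameters for the substitution check
r = np.array([1.0, 1.0])
A = np.array([[-1.0, -0.5],
              [-0.3, -1.0]])
x_star = np.linalg.solve(A, -r)      # equilibrium: r = -A x*

x = np.array([0.7, 1.3])             # any positive state
f_original = x * (r + A @ x)         # D(x)(r + A x)
f_rewritten = x * (A @ (x - x_star)) # D(x) A (x - x*)
print(f_original, f_rewritten)       # identical up to rounding
```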
So you can see that now we have this matrix, which is D(x), minus this matrix, which is D(x*), so we can write the difference as the diagonal matrix of Δx: dV/dt = 1^T C D(Δx) A Δx. Diagonal matrices commute, so we can swap C and D(Δx), and multiplying 1^T by D(Δx) gives Δx^T, so we get dV/dt = Δx^T C A Δx. Now, if this matrix C A were negative definite (you know that for any negative definite matrix, the quadratic form is negative), then this is always less than or equal to zero, and in fact it is zero only if Δx = 0, which is our equilibrium condition. And when we compute this quadratic form, we are basically summing terms (C A)_{ij} Δx_i Δx_j, and you can see that each product Δx_i Δx_j appears twice, once from the (i, j) component and once from the (j, i) component. As such, only the symmetric part of this matrix matters: we can write exactly the same quantity as Δx^T (C A + A^T C) Δx / 2, where A^T C is the transpose of C A. So that's how you prove global stability in this case, and that's the derivation that you find here. All right, I think we should wrap up here, and next lecture we will start thinking about assembly, and what makes ecological assembly difficult or easy. So let's take some questions. Thanks a lot. Thanks, and I'm sorry for interrupting you too early. So, yes, we have time for questions. So please, if you have them, just ask. Simon, if you want to. Yes. Thank you, Stefano. That was a terrific lecture. I just want to go back, as a philosophical question, to something you said at the beginning about the neutral theory.
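The whole derivation can be verified numerically. A sketch with illustrative parameters (here C = I already works, since the symmetric part of A is negative definite): along a simulated trajectory, the quadratic form Δx^T C A Δx stays non-positive, and V decreases monotonically.

```python
import numpy as np

# Illustrative GLV system for which C = I certifies stability
r = np.array([1.0, 1.0])
A = np.array([[-1.0, -0.5],
              [-0.3, -1.0]])
c = np.array([1.0, 1.0])                # diagonal of C
x_star = np.linalg.solve(A, -r)

def V(x):
    return np.sum(c * (x - x_star - x_star * np.log(x / x_star)))

dt = 0.001
x = np.array([0.1, 2.0])
V_values, quad_forms = [], []
for _ in range(20_000):
    V_values.append(V(x))
    dx = x - x_star
    quad_forms.append(dx @ np.diag(c) @ A @ dx)  # dV/dt from the derivation
    x = x + dt * x * (r + A @ x)                 # Euler step of the GLV model

print(max(quad_forms))            # never positive: dV/dt <= 0
print(np.max(np.diff(V_values)))  # <= 0 up to rounding: V decreases in time
```

Since V is positive away from the equilibrium and decreases along every trajectory, the dynamics must end up at x*, which is the global stability argument in code form.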
And I don't disagree, but there are two sorts of interpretations. One is that in using the neutral theory you're actually assuming that that is the nature of the interactions, that they're stochastic. And the other is that you're saying we know there are details, but they don't matter at the higher level. The same justification holds, for example, for using a diffusion model for dealing with how animals disperse. It doesn't make much difference for what you're talking about, but philosophically I'd be interested in your thoughts on that. You know, I agree that there are different interpretations of what we mean by neutral. We could think that species truly do not interact. That is kind of a difficult philosophical view to hold, because we see lions eating gazelles; they definitely are interacting. At least if you ask the gazelle, you will get a yes for sure. But we could also have, say, competitors, and to some extent they have sorted themselves such that they have very, very weak interactions, because they try to partition their resources, or in fact they could end up with basically no interaction whatsoever. In this case, it's not that they would not have had interactions to begin with; it's just that after all these species are sorted, we don't really have interactions. Or another interpretation is that these interactions don't really matter, that they are much smaller than other processes that dominate the dynamics. These are all philosophically slightly different interpretations of the same phenomenon. Right. I mean, they may matter a lot at fine scales, but when you coarse-grain, they may average out, sort of due to the law of large numbers. Yeah. In fact, going back to that, one thing that I like a lot is: what if, instead of having no interactions, we have random interactions? Then we assemble systems like that.
What you find is that the statistics look very much like the neutral model, even though it's basically the opposite of a neutral model, in which the species have much, much stronger interactions. But still, a lot of the results you arrive at look about the same. And the other, more technical, point, which maybe it's not necessary that I put in the chat, is that you might have a stable limit cycle, and the equilibrium point inside is not necessarily unstable. It might be stable, with, let's say in two dimensions, some unstable limit cycle that separates them, and in higher dimensions more complicated behaviors could go on. I don't know to what extent, but maybe you can comment on whether that's possible in Lotka-Volterra. Yeah, that would not be possible here, because we have this much, much simpler model for the dynamics. But in a generic model we could have much more complicated dynamics, such that a certain region goes to some equilibrium, some other region goes to a limit cycle, or an unstable limit cycle separates the system between all sorts of things. Yeah, it gets really interesting and really complicated. That's why I didn't want to devote a lot of time to it, because you can get just about any behavior you can think of. Anyway, brilliant overview. Thank you. Thank you. Thanks a lot. Any other comments or questions? All right. If you don't have any others, you can ask them next lecture, or you can go over the notes again, and then maybe some questions will arise; feel free to even email me, and I'll answer them before I start next time. And Stefano will give the second lecture of three this Friday, so you have time to think about... ah, there is a question actually in the chat. How does a constraint on the total population, as seen in the replicator equation, change the stability criterion for a fixed point?
In other words, does the community matrix for a system with and without this constraint have the same stability criteria? That's a very interesting question. It's too bad that I'm not talking more about this; I love the replicator equation, and I have other lectures on it. So, the question is: what if we put in some other constraint? The replicator equation is basically the cousin of Lotka-Volterra. The only difference is that now we're tracking the proportions of the species rather than the densities themselves. So we're requiring that these proportions always sum to one, which means that if one species has to go up, somebody else has to go down, because they have to sum to the same number. In the replicator equation, when you do this sort of linearization around an equilibrium point, you will always find one direction that is orthogonal to the simplex, a direction in which, instead of staying such that the proportions sum to one, they all sum to some larger or smaller number. That direction, orthogonal to the simplex, you can discard, because you don't really care: you know that the dynamics keep the system on the simplex, and as such this extra direction doesn't matter. In fact, I would suggest, maybe I'll put a link in the chat, there is a beautiful equivalence between the replicator equation and Lotka-Volterra: for every Lotka-Volterra system you can write a replicator equation with one more species, where the extra species is not a real species anymore, and conversely for every replicator equation you can write a Lotka-Volterra system with one fewer species. Great. Well, I think if there are other questions, you can ask Stefano on Friday. So thanks again, Stefano, for giving this lecture, and thanks to everybody for participating. We'll see each other again tomorrow at the usual time, 1:15 Italian time, for the first lecture by Marino Ga. 
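The Lotka-Volterra/replicator equivalence mentioned above can be checked numerically. A sketch with illustrative parameters, following the standard embedding in which an n-species Lotka-Volterra system maps to an (n+1)-strategy replicator equation via y_i = x_i / (1 + sum_j x_j), with an extra "phantom" species whose payoff row is zero:

```python
import numpy as np

# Illustrative 2-species Lotka-Volterra system
r = np.array([1.0, 1.0])
A = np.array([[-1.0, -0.5],
              [-0.3, -1.0]])
n = len(r)

# Payoff matrix of the (n+1)-strategy replicator equation:
# the interaction block is A, the extra column carries r,
# and the extra row (the phantom species) is all zeros.
B = np.zeros((n + 1, n + 1))
B[:n, :n] = A
B[:n, n] = r

x_star = np.linalg.solve(A, -r)                          # LV equilibrium
y_star = np.append(x_star, 1.0) / (1.0 + x_star.sum())   # mapped point on the simplex

# At the mapped point every strategy has the same payoff (zero here),
# so it is a fixed point of the replicator dynamics.
payoffs = B @ y_star
print(payoffs)
```

The mapped point lies on the simplex (its entries sum to one) and equalizes all payoffs, which is exactly the replicator fixed-point condition corresponding to the Lotka-Volterra equilibrium.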
So thank you very much and see you tomorrow.