Okay, before I start with my last talk, just one minor point concerning the AEMs, asymmetric eigenvector maps, when you apply them to time. In this case, using AEMs or using dbMEMs will get you approximately the same result, but in this case you don't detrend your data. Pierre already mentioned it for the AEMs, and it is also the case for dbMEMs if you want to use them as descriptors of a temporal structure. In this case you leave your series as it is, you don't detrend it before computing the dbMEMs, because the trend is part of what we are after. Okay, so now I'll go to a topic that may seem strange: testing the interaction between space and time without replication. Everybody knows that this is not possible. You don't test an interaction when you don't have replication. This is an idea developed by Miquel De Cáceres and Pierre some years ago; I came in after that for various reasons, but at the beginning, when Pierre first told me that they had devised a way to do this, I was simply baffled, as everyone should be. And then, when he explained to me how they would proceed, I told them they were thieves, robbers. They stole degrees of freedom, because you need degrees of freedom to test for an interaction and there are no degrees of freedom left if you have no replication. So you have to steal them somewhere. In the usual case, when you can test the interaction, you are in a situation where you have within-cell replication, like this. Those replicates give you the necessary degrees of freedom to test for the interaction. But in a space-time survey, you usually don't have replication. A usual space-time survey is done this way: you have a region with sites spatially organized in some way, and then you sample them at year one, and then at year two, and at year three, and so on and so forth. At the level of the site, you don't have replicates.
Site one is sampled once at time one, once at time two, and so on, so that your ANOVA design actually looks like this: you have only one observation in each cell, so no replication. So for space — in the symbols that I'm going to use, the ones of the paper, I will show you the paper after that — you have S minus 1 degrees of freedom, S being the number of sites; for time, T minus 1 degrees of freedom; and you have the error term with its (S minus 1) times (T minus 1) degrees of freedom. You have no degree of freedom left for testing the interaction. So there may be an interaction, but you cannot test it; it is completely embedded in the error term of the ANOVA. This is the usual way of thinking of an ANOVA. So how could we spare some degrees of freedom? The answer could not have been given to you before now, because it goes back to what we just presented to you this morning: we could code space, and/or time, and/or the interaction more parsimoniously using dbMEMs. I told you when I presented the dbMEMs that for an equispaced linear transect — which would be time, for instance, in our new situation here — you get approximately half as many dbMEMs modelling positive correlation as you have sites, or here time points. So if you have 10 years of sampling, you would get about four or five, normally four, dbMEMs modelling positive temporal correlation. And this is what interests us, the positive ones. So instead of modelling time the usual ANOVA way, with dummy variables or simply a factor, which in this example of 10 time points would have nine degrees of freedom, we could use those four dbMEMs instead. We just spared five degrees of freedom, which is enough — you just need a couple of them to be able to test the interaction. The same holds for space. If you have 100 points, you have 99 degrees of freedom for the usual ANOVA coding as a factor, that is, S minus 1.
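The degree-of-freedom bookkeeping just described can be put in a short sketch (illustrative Python, not part of the STI package):

```python
def anova_df(S, T, reps=1):
    """Degrees of freedom of a two-way crossed ANOVA with S sites,
    T times and `reps` observations per cell."""
    n = S * T * reps
    df_space = S - 1
    df_time = T - 1
    df_inter = (S - 1) * (T - 1)
    # whatever is left over estimates the error
    df_error = (n - 1) - df_space - df_time - df_inter
    return {"space": df_space, "time": df_time,
            "interaction": df_inter, "error": df_error}

# Unreplicated survey: once the interaction is in the model,
# nothing is left to estimate the error.
print(anova_df(S=22, T=10))          # error = 0
print(anova_df(S=22, T=10, reps=2))  # replication frees 220 error df
```

With one observation per cell the error term and the interaction compete for the same (S-1)(T-1) degrees of freedom, which is exactly the problem the dbMEM coding works around.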
But if instead of those I use the relevant information contained in all — we don't do forward selection here — in all the dbMEMs modelling positive spatial correlation, then for a simple example of 100 regularly positioned sites you would get 49 of those dbMEMs instead of those 99 degrees of freedom. So here we just spared 50 degrees of freedom at once. This time we have plenty of degrees of freedom to test for our interaction. And that was the idea. So, with this basic idea, several possibilities arise. This is the paper, the original paper about this method, of course available on Pierre Legendre's website. Now, I have said that you could do this for space, or for time, and for the interaction. So several possibilities arise, which are summarized in this figure that I extracted from the paper just this morning, so be patient. You have certainly noticed over these days that I never present the talk that was on the site in the first place, because I'm completely unable to reread one without modifying it. And this holds to the last minute, like this morning. The talks up to yesterday are up to date — you can just re-download my talks, they have been updated according to what I have really shown you. And this will shortly be the case for this one too. No, I did not put this one up because I simply didn't have time; I finished it at 8:50 this morning. In a couple of hours I will have updated this one as well. Fine. So, of course, there are those different possibilities of coding for space and time, and all of them are summarized here as in the paper, starting with the normal one: the replicated ANOVA, model 1 here, which is actually our basis, where you have enough degrees of freedom because you have within-cell replication. Model 2 is the classical ANOVA but unreplicated, so you don't have the necessary degrees of freedom to test for the interaction and you can only test the main effects.
Do this only if you can reasonably assume that you don't have an interaction; otherwise, you are in trouble. After that, you can go through the other possibilities involving those dbMEMs. You could spare a couple of degrees of freedom in the spatial coding: there you use the dbMEMs and still keep the temporal factor as it is, with its T minus 1 degrees of freedom. So you have spared degrees of freedom: instead of S minus 1 you have a smaller value, called U here. And the spared degrees of freedom also show up in the interaction, because the interaction terms, as you remember, are obtained by multiplying the coding variables of the main factors together. Think of those as Helmert contrasts, for instance, except that these are now dbMEMs. So you multiply them and you get the number of interaction variables here. And, of course, what you have spared here will also translate into a smaller number of variables needed to code the interaction. So this is one possibility. Actually, there are two here, 3A and 3B, which are mentioned in the help file of the function that I present later. 3A would be this; 3B means keeping the normal coding, the normal factor, for space, but sparing some degrees of freedom for time. Then you'd have a grey zone here, spared degrees of freedom, and a smaller grey zone here. That would be 3B. Fourth possibility: you steal those degrees of freedom from both space and time, so both are coded using dbMEMs. Here you have saved degrees of freedom, and then, computing the products here, you save some more. So you have plenty of degrees of freedom to test for the interaction. We could have stopped there. But there is still another possibility, because each time — you may read the paper — there have been millions of simulations in each situation, to see which one had the correct type I error first and was then powerful enough to detect the interaction.
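The product construction of the interaction coding can be sketched like this (illustrative numpy; the random matrices stand in for Helmert contrasts or dbMEMs, they are not the real eigenfunctions):

```python
import numpy as np

def interaction_coding(space_cod, time_cod, S, T):
    """Build interaction coding variables as all column-wise products
    of a spatial coding (S x u) and a temporal coding (T x v).

    Rows are ordered as in the STI layout: all sites at time 1,
    then all sites at time 2, and so on.
    """
    u, v = space_cod.shape[1], time_cod.shape[1]
    Xs = np.tile(space_cod, (T, 1))      # (S*T) x u, sites repeated per time
    Xt = np.repeat(time_cod, S, axis=0)  # (S*T) x v, each time repeated S times
    # every product of one spatial column with one temporal column
    return np.einsum('ij,ik->ijk', Xs, Xt).reshape(S * T, u * v)

# 100 sites coded with 49 dbMEM-like columns instead of 99,
# 10 times coded with 4 columns instead of 9:
S, T, u, v = 100, 10, 49, 4
rng = np.random.default_rng(1)
Xint = interaction_coding(rng.normal(size=(S, u)),
                          rng.normal(size=(T, v)), S, T)
print(Xint.shape)   # (1000, 196): 196 interaction columns instead of 99*9 = 891
```

The saving in the main-effect codings multiplies through: 49 × 4 = 196 interaction variables instead of 99 × 9 = 891, which is what leaves residual degrees of freedom for the test.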
When you first invent a method, you don't start by testing whether it really finds what you are looking for. The first thing you test is whether it tells you there is something when there is nothing. This is the check for type I error, the first thing to do when you invent a new method, before verifying that it does what you think it does: verify that it has the correct rate of type I error, meaning that if you set your alpha at 5%, it should reject H0 falsely 5 times out of 100 runs — of course you do it over many more runs than that. And you do this for various levels of alpha, and you verify that it's correct. After that, you go for power, and you run your simulations to see if you can indeed detect an interaction when there is one, or a space or time effect, or both, when there are some. And then the other possibility here is actually to compute the dbMEMs for space and for time, but to use them only to compute their products to code for the interaction, and leave the main effects of space and time themselves alone, with their Helmert coding or as plain factors. So you spare those degrees of freedom in this computation; this term and this one are the same as before, except that here we have brought back the normal coding of the main effects. And then there is that problem — if you remember, when you have a significant interaction, you are in trouble interpreting the tests of the main factors. Actually, you don't; you cannot. You would have to test for a space effect separately for each time, and test for a time effect separately for each site. Those two models, 6A and 6B, are actually devoted to that. You have the possibility of testing globally with what is called a staggered matrix of dbMEMs. I don't have enough time to show you this in detail now, but this is an a posteriori test for the case where you have found a significant interaction.
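The type I error check described here can be illustrated generically. A minimal sketch with a simple permutation test of a correlation — not the STI statistic itself, just the same logic of simulating data under H0 and counting false rejections:

```python
import numpy as np

def perm_pvalue(x, y, nperm=199, rng=None):
    """Permutation test of the correlation between x and y:
    permute y, recompute |r|, count values as extreme as observed."""
    if rng is None:
        rng = np.random.default_rng()
    r_obs = abs(np.corrcoef(x, y)[0, 1])
    hits = sum(abs(np.corrcoef(x, rng.permutation(y))[0, 1]) >= r_obs
               for _ in range(nperm))
    return (hits + 1) / (nperm + 1)   # the observed value counts as one permutation

# Under H0 (x and y independent), the rejection rate at alpha = 0.05
# should be about 5 out of every 100 simulated data sets.
rng = np.random.default_rng(42)
rejections = sum(perm_pvalue(rng.normal(size=30),
                             rng.normal(size=30), rng=rng) <= 0.05
                 for _ in range(200))
print(rejections / 200)   # should come out near 0.05
```

The same protocol, applied to the STI permutation test with simulated spatial and temporal structures but no interaction, is what the paper uses to validate the method before measuring power.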
Another way of doing it is to run separate tests, as I just told you, correcting for multiple testing. Both ways are implemented in the function. So, to summarize, here I have written what I just explained verbally. Model 1, the standard crossed design with replication. Model 2, without. Model 3A, space underfitted, meaning dbMEMs for space. Model 3B, time underfitted, dbMEMs for time. The computation of all the degrees of freedom is given each time here; of course, this comes directly from the paper itself. Model 4, space and time underfitted. Model 5, where you compute those dbMEMs for space and time but use them only to underfit the interaction, keeping the normal Helmert contrasts for the main effects. And 6A and 6B, as I just said, are the cases where you have found a significant interaction and you want to go back to the main effects, but of course in an appropriate way. The tons of simulations that Miquel ran with Pierre at that time have shown which model behaves best overall, considering type I error and power to detect the interaction — because this is the key point here. Testing the main effects in such a case means forgetting about the interaction, and this is dangerous, because there may be an interaction that the normal ANOVA is simply not able to see. And it's not because you don't see it that it doesn't exist. You know, there are small children who, when you ask them to hide, do it this way: they think that if they don't see you, you cannot see them either. Running a simple space-time ANOVA without replication and saying, I just test the main factors so I don't have to care about the interaction — this is flat wrong, of course. You cannot guarantee that there is not one. What is the meaning of a space-time interaction, by the way? Maybe I should have begun with this. An interaction is always a symmetrical concept.
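Model 5 keeps Helmert contrasts for the main effects. A minimal sketch of how such contrasts can be built (illustrative Python, mirroring R's `contr.helmert`):

```python
import numpy as np

def helmert_contrasts(k):
    """k x (k-1) Helmert contrast matrix, as in R's contr.helmert:
    column j opposes level j+1 to the mean of levels 1..j."""
    H = np.zeros((k, k - 1))
    for j in range(1, k):
        H[:j, j - 1] = -1.0   # levels 1..j get -1
        H[j, j - 1] = float(j)  # level j+1 gets +j
    return H

print(helmert_contrasts(4))
# Columns sum to zero and are mutually orthogonal, which makes them a
# full-rank coding of a main effect with its usual k-1 degrees of freedom.
```

This is why a factor with k levels "costs" k-1 columns in this coding — the full price that models 3 to 5 avoid paying for the interaction by replacing it with dbMEM products.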
In some cases it's easier to interpret from one point of view than from the other, but generally speaking it's symmetrical, and that is the case here. For instance, if you are following the community structure in a given area across time, the presence of a space-time interaction means that the spatial structure varies over time — it's not the same, and this at different sites. Seen from the other point of view, an interaction means that the temporal evolution of the community is not the same everywhere: there are at least a couple of sites where the community doesn't take the same direction, where it doesn't evolve in the same direction as the community at the other sites. So you can see it both ways: either variation of the spatial structure over time, or variation of the temporal evolution across space. If you don't have a space-time interaction, it means, for instance, that the spatial pattern is constant over time. Or temporal evolution at each site may exist, but it takes the same direction at every site: the conditions change everywhere in the same way, the same species disappear or become less abundant, and other ones may take over — in any case, the pattern evolves the same way everywhere over time. When you do have an interaction, you may have zones where the pattern evolves in one direction and other zones where it evolves in another direction. And this is the interaction: the way the community evolves depends on the place where you are, or, the reverse, the spatial pattern depends on the time you are at. So now, after all those simulations, the recommendation is to use model 5: the one where you compute the dbMEMs for time and space, but use them only as products, to provide the variables for the interaction.
Those variables are few enough that degrees of freedom remain available for the test. The simulations have shown — this is of paramount importance, of course — that the permutation test has a correct type I error, meaning that if you have no interaction, and you generate thousands of random situations where there may be spatial and temporal structures but no interaction, and you set your rejection level at 5%, then one simulation out of every 20 on average, or five out of 100, will falsely reject H0. This is correct, and it is correct for every alpha, which is the thing you are supposed to verify. At the time this paper was published, Pierre and Miquel actually produced a small package containing two functions called STImodels and quickSTI. Oh yes, there already was a quickSTI there — but the STI in those function names was in uppercase. I insist on this because now I have made them different: STI uppercase here and here, and the package itself was called STI. This package was never submitted to CRAN, so it has never been put there. Instead, it is still distributed as an appendix, a supplement, to the paper itself, so you can download it from the web page of Ecology. But at that time, the eigenfunctions that were computed were the PCNMs, the first generation — I have been using the term dbMEM, but those functions computed PCNMs — and everything was kept, meaning that you also had a couple of them modelling negative spatial or temporal correlation. Several weeks ago, I rewrote these functions. Not from scratch, of course: I took those functions, replaced the part that relied on that old PCNM package to compute the PCNMs, and made it depend on adespatial to compute dbMEMs. So now it computes the modern, latest version of dbMEMs, and eventually this will be integrated into adespatial itself.
I talked about it with Stéphane Dray three weeks ago and he fully agreed; of course I can send him this anytime. It will take a couple of months until it is done, but eventually you will find those two functions, stimodels and quicksti, in a later release of adespatial. In the meantime, these — plus, internally, two other functions used to run, for instance, the permutations and so on — are simply two functions that you can source, like the ones you learned to program yourselves. They are of course provided with today's materials for the practicals. You just have to source them and they work; I have tested them against the old PCNM-based versions. So I'll go to an example here using quicksti. quicksti does the following things automatically. It takes your data and computes the space-time interaction test using model 5. Then, if the interaction is not significant, it goes back to the usual way of testing the main factors, meaning the normal ANOVA — since you don't have an interaction anyway, why go to the trouble of doing something else? But if there is a significant interaction, then it tests the main effects using model 6, the models devoted to that special situation where you have an interaction and you still want information about the main effects. It does all this automatically. In this case, for this real situation, we had a stream, the outflow of a lake at the Station de biologie des Laurentides of the Université de Montréal, which is a biological station in the Laurentian forest, with a couple of lakes and rivers and so on, where you can run a lot of ecological experiments, set traps and everything. Here there were emergence traps. You put them over the water, and then the Trichoptera, which have aquatic larvae, hatch and fly up into those traps, poor guys, and you capture them, count them, and identify them.
So we had 56 species, and of course we applied a transformation, a Hellinger transformation. There were 22 traps along the stream and 10 periods. Ten periods of... was it 10 days? Well, actually we had the data for all days, so 100 days, but the variation was so high from one day to the next that we pooled the days together 10 at a time, to get 10 time slices. It was better that way. So the data set has to be built this way: your data layout must have the species as columns, as usual, and the blocks of sites and times arranged this way — all sites at time one, then all sites at time two, time three, and so on. You cannot sort them another way and provide another vector to say how the table is organized; this is a constraint of the method. It has simply been decided this way for the sake of simplicity. So just organize it this way, not the reverse, and do not mix up the times and so on. And then it's really simple. You ask for the analysis by typing quicksti. You give your data matrix, and then you don't give any other matrix; you simply tell the function how many sites and how many time points there are, 22 and 10 in this case. This is all the function needs to construct the factors and the dbMEMs, because it can by itself construct the series 1 to 22 or 1 to 10 and then compute the dbMEMs from them. It's extremely simple. As you may notice, I did not put the result into an object, because for all practical purposes — unless you want to retain particular results to be used automatically by another procedure — everything is printed directly on screen, and you get this kind of result. Everything you gave is here, plus a couple of intermediate results. It confirms what you have given: 22 and 10 space and time points, the number of observations, which is their product, the number of species, the response variables here, and the significance level.
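The Hellinger transformation mentioned here is easy to sketch (illustrative Python; in R you would typically use vegan's `decostand` with the `"hellinger"` method):

```python
import numpy as np

def hellinger(Y):
    """Hellinger transformation of a sites x species abundance table:
    square root of the relative abundances within each row (site)."""
    Y = np.asarray(Y, dtype=float)
    row_sums = Y.sum(axis=1, keepdims=True)
    # guard against empty rows (all-zero sites)
    return np.sqrt(Y / np.where(row_sums == 0, 1.0, row_sums))

Y = np.array([[10, 0, 30],
              [ 1, 1,  2]])
print(hellinger(Y))
```

After the transformation each row has unit sum of squares, so Euclidean distances between rows become Hellinger distances — a standard pre-treatment before linear methods on community data.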
The significance level may be changed through another argument that I did not change here. So here it has computed — it appears on screen while the computations are done — the dbMEMs for space and for time, the truncation level, and so on. And it ends up, instead of 22, with 10 spatial coding functions, and for the time points, instead of 10, you have 4. So you end up with a really smaller number of degrees of freedom here. Then it runs the test and gives you the results. So, the number of space variables — it confirms everything here, and here I have enlarged and highlighted in red what is useful: the R-square, which is not very important for this kind of test, the F statistic, and the permutation result. In this case you see that it is highly significant, so there is indeed a space-time interaction here. If we had tested only the design as it was, we would have tested only the main factors and misinterpreted the data. And then, since it is quicksti, it goes beyond this and, in an appropriate way, tests for separate spatial structures, which here are significant, and also for separate temporal structures, which are also significant. So there is an overall time effect; there is a spatial structure for the communities, meaning along the stream; and there is a modification of that spatial structure over time, or, the reverse, each site doesn't evolve temporally in the same way. So everything can now be seen this way. In the paper this has been illustrated by this little layout here, where we computed a K-means partitioning of the observations into five groups, just to show that the whole pattern varies over space and time. The time periods are here, the first to the tenth, and the current direction of the stream is here.
You see, for instance, that at the beginning it's fairly homogeneous, with only two groups appearing: one at most places here, and the second one here. Each symbol represents a different community, you can see it this way. When you go later in the season, you see that it becomes progressively more complicated, with other groups appearing — some species are not there anymore, other ones appear — forming different communities. Here it's homogeneous again, or almost, but with different groups than before over the greatest part; here it breaks down again, and so on and so forth. So you really have spatial patterns along the river for the community — in each case there is some significant spatial pattern, those groups are definitely not distributed at random along the stream — but furthermore they evolve through time. So this is a space-time interaction, as illustrated here. The paper also presents another example, derived from the famous BCI forest plot, the complete forest plot in Panama, on an island — BCI means Barro Colorado Island — where, in those survey plots, they also had the community, or several species that they wanted to follow, at different time steps. Here we don't have the space-time representation, but for species associated with slopes — two or three of them — the figure shows the map of the region, and the colour of the squares indicates where a given species, one species here and another here, has grown, has increased its population, and where it has lost individuals, which is shown in grey. If everything were homogeneous, it would be the same everywhere. But as you see, here, for instance, most subplots have decreased their local population, but at some places the population has increased across time; and here also you have a whole region where this population, this species, seems to have been favoured across time, so there are more individuals in most of those subplots, whereas
in this region it's more mixed, and you have quite a number of small places where this species has actually decreased. So again, a different evolution of the situation across time: the spatial pattern evolves over time — space-time interaction. Okay? Questions? Yes? [Question from the audience, roughly: how do you put time into the dbMEMs — aren't AEMs asymmetric while MEMs are symmetric?] Okay, two things. Technically it's as easy as I showed you in my slides, so obtaining the dbMEMs is not the problem. The real question is: why do we use dbMEMs instead of AEMs? This goes back to what I quickly said just before this talk. For all practical purposes, asymmetric eigenvector maps are mainly for situations where, for instance, you have a transect evolving over time — there you actually have a two-dimensional layout, with one dimension for space, a transect across a river or a stream, or a current in the ocean or whatever, and the other dimension being time — or, of course, those river networks that Pierre presented to you. This is where AEMs are really efficient; they stand out and give better results than dbMEMs. But in a unidimensional case, where one time point follows another, or in this case in the river, where you have one trap and then another and then another, the difference between AEMs and dbMEMs becomes nonsignificant — it's practically the same. But of course you certainly don't detrend in this case; nothing is detrended, you just go straight on with the dbMEMs. For time, you create one vector with one, two, three, four, five, and you run a principal coordinate analysis on the truncated distance matrix. It's exactly the process that I showed you at the beginning of my talk. It's that simple. Other questions?
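The recipe given in this answer — a vector 1..n, a distance matrix, truncation, then principal coordinate analysis — can be sketched as follows. This is an illustrative numpy version of the classic dbMEM construction, not the adespatial code; it assumes the usual conventions that the truncation threshold is the largest nearest-neighbour distance and that larger distances are replaced by four times that threshold:

```python
import numpy as np

def dbmem_1d(n):
    """dbMEM-like eigenfunctions for n equispaced points (e.g. time steps).

    Classic construction: truncate the distance matrix at the largest
    nearest-neighbour distance t, set d > t to 4*t, then run a principal
    coordinate analysis (Gower-centred eigendecomposition)."""
    x = np.arange(n, dtype=float)
    D = np.abs(x[:, None] - x[None, :])
    t = 1.0                              # nearest-neighbour distance here
    D = np.where(D > t, 4 * t, D)
    A = -0.5 * D ** 2
    # Gower double-centring
    A = A - A.mean(0) - A.mean(1)[:, None] + A.mean()
    vals, vecs = np.linalg.eigh(A)
    order = np.argsort(vals)[::-1]
    vals, vecs = vals[order], vecs[:, order]
    pos = vals > 1e-8 * vals.max()       # keep positive eigenvalues only
    return vals[pos], vecs[:, pos]

vals, V = dbmem_1d(10)
print(V.shape)   # 10 rows, one column per positive eigenvalue
# Of these, only the eigenvectors with positive Moran's I (roughly the
# first half) model positive temporal correlation; adespatial's dbmem()
# can select those directly.
```

The eigenvectors come out orthonormal and sum to zero within each column, which is what makes them usable as coding variables in the STI models.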
Okay, so Pierre still has something very important to present to you, but as for me — as you know from the programme, this was my last talk for this course. So of course I wish to thank you for your attention, your interest, and everything we have done together here. I think that at some point some of you may be interested not only in using these methods, but also in pursuing this adventure of developing numerical methods. People like Pierre are the kind of people who call you to an adventure, and you simply cannot resist. I really think some of you will hear this call, as I heard it many years ago. As you know, I responded enthusiastically to it, which is the reason why I'm here. So thank you, everybody, thank you.