[Pre-lecture conversation; transcription garbled.] And now we can start with the last lecture of the- Recording in progress. Thank you, Mateo. Today will be the last lecture, but not my last class period with you. Tomorrow we will be online, at least I will be online, you'll be here in person, and we'll be doing some work on a laboratory. Mateo, are we... Are they gonna be in this room or in the other room? Yeah, the other room. So, the format tomorrow: I'll be on the screen [transcription garbled] I need to travel back and need to be close to the airport, but I will be there for that morning, go over the prior laboratory, explain some of the concepts. All of you should have seen in Slack that there is a Dropbox file request that I made, so each one of you should do that, and you should also see the file that says viralecologylab.pynb. [Transcription garbled: further remarks about the lab and a recap of the heterogeneity models discussed yesterday.] It will just be the board work, and hopefully interactive. So, to start with heterogeneity, I want to recall that I talked about a case in which we could imagine viewing a population in terms of some different vulnerability to infection, and maybe it was like this, some exponential, maybe it was like this, so there was a mode, but there was some variation, right?
[Transcription garbled.] So we can't change the mean, we're just reducing the number. But more variance means there are more things on the right that are depleted faster; the more variance there is, the faster the mean moves. So we get something like this, we get something like this. We went through that yesterday, where we had the accumulation of all these different infection classes moving into the infected — the susceptible classes moving into the infected, and then these recover — and that was the model. [Transcription garbled: if we integrate] S of epsilon d epsilon, we would get the total number of susceptibles left, which is less than one. So I'm really trying to say this is sculpting, and I'm using that diagonal intentionally. So let me actually go through and do that, looking again at this distribution and focusing on a particular class. Because in some sense — and this is a big caveat which I'll just mention before I make the transition — although we have a structured population, we don't have any way in this model yet for things to move across the epsilons. We're just decreasing each category. So each one is really moving independently, although they're coupled by the fact that the strength of how fast they're depleted is shared by all. So because of that, we can actually think in that particular category: if this was our focal epsilon, then minus beta epsilon S of epsilon times I; we can think of just taking this derivative, and I think you can see how this is going to happen: S of epsilon is some S0 of epsilon times e to the minus epsilon times — and I'll actually write it this way — some cumulative force of infection over time. Does everyone see that? We can do this for any value of epsilon; it would apply to any. Whatever the initial value was, it's going to exponentially decrease, because you can see that whatever this rate is, it is proportional to its own value. So it has a characteristic exponential shape, and they all share this common cumulative force of infection.
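[Editor's note: the board work just described, reconstructed in the lecture's notation — epsilon is the relative susceptibility of a class, beta the transmission rate, I(t) the infected fraction; a sketch, not a verbatim copy of the board.]

```latex
\frac{\partial S(\varepsilon,t)}{\partial t} \;=\; -\,\beta\,\varepsilon\,S(\varepsilon,t)\,I(t)
\qquad\Longrightarrow\qquad
S(\varepsilon,t) \;=\; S_0(\varepsilon)\,
\exp\!\Big[-\,\varepsilon \underbrace{\textstyle\int_0^{t}\beta\,I(t')\,dt'}_{\text{cumulative force of infection}}\Big].
```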
And because they all share the common value, I could immediately just replace that epsilon with epsilon prime and find the solution for a different value. Now my claim is that this sculpting process has an interesting feature depending on what the distribution is, and this allows me to introduce the eigendistribution concept here — sculpting for heterogeneity — without necessarily using such a complicated distribution. So that's my caveat. So I'm going to do this because you can get the concept, but then I have to tell you at the end what other things have this property. So you can see that this one is being sculpted downwards, but I could have chosen another arbitrary epsilon prime there or epsilon prime there, and they're each being sculpted: this one is being sculpted faster, this one a little bit less fast, this one even slower. They're all facing the same force of infection — the cumulative number of infections is the coupling — but we're seeing it differently because we're scaling the betas in some way. Therefore, if we take the ratio at some later time — we just take this ratio, S at some t of epsilon divided by S at t of epsilon prime — we're just going to have the ratio of these two things, but this ratio must be something like e to the minus (epsilon minus epsilon prime) over some characteristic epsilon. Whatever the prefactor was, I don't have to worry about it, it cancels out. These differences should be exponential at the start, and now you see we have something like e to the minus (epsilon minus epsilon prime) times some other integral, beta I dt. I don't have to worry about the rest because I'm just taking the ratios, and this is shared, so I can pull it out. Which means that this is just e to the minus (epsilon minus epsilon prime) times some funky thing, and if I arbitrarily set this characteristic value initially to be one, we have one plus some force of infection up to time t. Am I still okay here at some point? I think so. So if I have that, you can now see that this is itself just some characteristic epsilon in time t — just the inverse; I feel like something went wrong with my signs, but maybe not. You can see that basically we preserve an exponential shape even though the characteristic value is changing. The reason, again, is that each one of these is independent with a rate that's proportional to its value, so we tend to get exponentials. If we start with an exponential, all we're doing is shifting the characteristic value, like the way I said, downwards and to the left, while retaining the same shape the whole time. Now it's true that this does retain its shape, but not all things retain their shape. So if we started with something that looked flatter initially, this is going to start pulling down and actually look more and more exponential with time. If we have something that is gamma-like, it actually over time will stay gamma: gamma stays gamma. And it turns out Gaussians stay mostly Gaussian. But beyond the gammas, the exponentials, and Gaussians with narrow windows, I can't guarantee that distributions actually are eigendistributions. So this thing can change the shape — so it's really sculpting — but in some cases it's sculpting while retaining the shape, so we're miniaturizing the distribution along the way. Okay?
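[Editor's note: the ratio argument and the "eigendistribution" property in symbols — a reconstruction, writing the characteristic susceptibility as ε̄ and the cumulative force of infection as F(t) = ∫₀ᵗ βI dt′.]

```latex
\frac{S_t(\varepsilon)}{S_t(\varepsilon')}
= \frac{S_0(\varepsilon)}{S_0(\varepsilon')}\;e^{-(\varepsilon-\varepsilon')\,F(t)},
\qquad
S_0(\varepsilon)\propto e^{-\varepsilon/\bar\varepsilon}
\;\Longrightarrow\;
S_t(\varepsilon)\propto e^{-\varepsilon\,\left(1/\bar\varepsilon \;+\; F(t)\right)} ,
```

so an exponential distribution stays exponential, with characteristic value $\bar\varepsilon(t) = \big(1/\bar\varepsilon + F(t)\big)^{-1}$ — equal to $1/(1+F(t))$ if $\bar\varepsilon=1$ initially. The shape is preserved while the distribution shrinks toward lower susceptibility.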
So the thing that I wanted to say here and just wrap up this section because I felt like I started it, but I wanted to close it, is that we often make this assumption of homogeneity in which all we're caring about is the number of individuals who are left through the epidemic infection process and then we get these terms like herd immunity, susceptible depletion. It's all driven in some sense by how many people are left. Here, these individuals are different and heterogeneity can have interesting effects, particularly because it's drawing down individuals here more quickly than over there. And the consequences as you sculpt or miniaturize, but really sculpt, then you can have the infection slow down in part because conditional upon being left, you are less likely to have a high vulnerability. You're more likely to have a lower vulnerability. Has anyone here ever heard of mortality rate plateaus before? No one. OK, I'll just ask a question and it helps to provide some context. So if you have a seven day old fly, you breed many flies and you track them and then some die every day, you look at a seven day old fly and then you wait for a while and you have a large number of flies and you get some that lasts to 70 days. It turns out that if you start with a very large number of flies, about a million or so would do. You make it to 70. Now I ask you if I have the seven day old fly, typical seven day old fly, which one is more likely to make it one more day? The seven or the 70? Ever understand why? Anyone have a thought? Say again? Seven. Seems more likely, right? It's a younger fly. Turns out the 70 day old fly is more likely to make it to 71. Then the seven day old fly is to make it to eight. Part of the reason I think is related to this. You can imagine that flies varied in how fit they were for all sorts of reasons, maybe because of some initial differences or maybe even they had different amounts of food along the way. The ones that made it to 70, it turns out, are probably sitting on this side in many distributions. If I'm too close to this side, the ones that are frail have already gone away. And the ones that are less frail are not. You see this same effect in cars. If your car has made it to 20 years, probably make it to 21, whereas the two-year-old car can still break down probably likely before. Three light bulbs, it's sort of a generic feature in which, in fact, it does this strange thing where you're even better off, so usually it just slows down in terms of the rate rather than actually going lower. The reason why that happens is because the failure of this fly, the death of that fly means that what are left are things that tend to be better off or more fit than those that have died. So you're selecting and sculpting a distribution. Here you're sculpting a distribution, but what are left overall are less likely to be infected in the future, which means that you don't necessarily have to go as low in terms of susceptible depletion, you're depleting susceptibility. In other words, you're depleting vulnerability at the same time as you're removing the numbers. You're changing the composition as well as the number of individuals. OK. Any questions about this? Because I wanted to just wrap up that it's possible that sculpting can preserve distributions, but also it can drive other distributions and attract them to it. Any other questions or any questions about that? OK. Everyone's satisfied. 
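[Editor's note: a minimal simulation sketch of the mortality-plateau / selection effect described above; not from the lecture. The exponential frailty distribution and all parameter values are illustrative assumptions.]

```python
import numpy as np

# Each individual gets a fixed "frailty": a constant per-day hazard of dying,
# drawn from an exponential distribution. No individual ever improves, yet
# among survivors the mean frailty falls with age, so older cohorts look
# *more* robust -- the selection/sculpting effect described in the lecture.
rng = np.random.default_rng(0)
frailty = rng.exponential(scale=0.05, size=1_000_000)   # per-day hazard

for day in (7, 70):
    # which individuals survive to `day` (constant-hazard survival probability)
    survived = rng.random(frailty.size) < np.exp(-frailty * day)
    # probability of surviving one more day, averaged over that surviving cohort
    p_one_more_day = np.mean(np.exp(-frailty[survived]))
    print(day, round(p_one_more_day, 4))

# With these assumed parameters this prints roughly 0.964 for the 7-day-old
# cohort and 0.989 for the 70-day-old cohort: the older flies are more likely
# to make it one more day, purely because the frail have already been removed.
```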
If you want to read more, I'll maybe post the paper that goes into all of this, but this really explains a lot about how things were discussed in terms of heterogeneity and herd immunity thresholds — a Science paper from 2020, talking about how age and activity differences can lead to different kinds of outcomes, and why people became very cautious, and there was all this debate early on where people tried to make this inference of strength and then tried to infer the size. What's interesting in these models is that the initial strength is exactly what you would have assumed from a standard model, as is the speed. So you make these inferences, but in fact this sort of stuff only happens later, once the sculpting unfolds. So it's one of those things where, initially, if you're trying to make inferences — and this is why people had a lot of doubts about how many people would be infected by SARS-CoV-2 — people were almost using their intuition about how important things like social networks and even things like heterogeneity were in terms of shaping the final size. OK. Good. I'm gonna erase, and we've done that one. You can just look at that corner for the next hour to see how we're doing. I know you're getting to the end of a long course. We're gonna make it. We only have two more lectures today and two more tomorrow. You can do it. I seem to be doing more work than you this morning, but tomorrow you can do more work than me. OK. I want to talk about behavior next. In these models, SIR-like models, we're often making an assumption, again, that the betas — these parameters, these transmission parameters — and the recovery rates are not changing with time. So we have this: we have some transmission rate beta, we have recovery rate gamma. But who, amongst us, behaved the same way through this whole pandemic? Mostly, I would say, to a first approximation, no one. We're in this room, we're wearing masks. The windows are supposed to be open — they're not, but that's OK, it's a pretty big room. We've made all these changes, and for a long time we also even reduced the number of interactions we had. Remember in the class yesterday I talked about beta as being the product of the number of interactions one has times the probability that a contact could lead to a transmission. So if I physically distance or socially distance, then I'm reducing contacts. If I wear a mask, improve ventilation, wash my hands — but mostly the masks and the ventilation and the filtering — then that reduces the probability that, even if I'm close, there's an infectious contact. So in the standard model this is not changing in time, and this presents challenges. So we have to just be very clear: epidemiology plus behavior is an open field. You will find just as many articles that say something like "ten challenges in linking epidemiology and behavior" as you do actual papers that make some progress, whether empirically, theoretically, or, even more rarely, a combination. So whatever model we do — I'm going to explain today — in some sense I've isolated the heterogeneity effect, and now I'm going to isolate the behavior effect. In the long run I would love one day to come back, a long time in the future — it's been a very long 10 days, so I need a break — but sometime in the future maybe I would, say, talk about how to combine them both. But I want to just explain each separately. You could think, for instance, that people wearing masks or taking precautions have a smaller epsilon in the heterogeneity framework than people who are not taking precautions.
In the sense that the infection rate depends on behavior? Yes, absolutely — but the epsilon may itself be a function of behavior. So there may be people — so there may be some intrinsic differences, and then we may even change our behavior. So I view it as: there are likely intrinsic differences, which specify that maybe our intrinsic epsilon is different. It could also be different vulnerabilities to infection, but to first order let's say that it's largely driven by behavior. On top of that, what I'm about to say is that it could be that maybe we did this model in which there was heterogeneity and that implicitly captured behavior; it could also be that beta is some function of time, and you often find in economists' models that they would just look at these as externalities and say, well, beta changes like this and that's what happens — maybe this is because there was a lockdown, so it's all external. So what I want to talk about today is that somehow beta is a function of what's going on. There are different ways that we could play this game, and I'm going to play it in a slightly different way next. And the way I want to begin to play this — and again, there are truly different ways — is I want to imagine somehow that the fact that there are infected people could feed back negatively on this rate beta, and the fact that there are recovered people could feed back negatively too. I view this first negative feedback loop as short-term awareness — so if you can't read that, I wrote "short term" — short because there's a limited time where people are infectious or infected; if I somehow am aware of how many such people there are and that changes my behavior, that's in the short term. But I can view this feedback as long term if I add them up, if I include the R: long term because we could be in a state where there aren't that many infectious people around, but we remember that it was bad, because of the accumulated impacts of the disease, and we keep changing our behavior. I'd say this room's behavior reflects more long-term awareness than it does short-term awareness. Actually, I don't think you're aware of cases going on here at this guest house; there has been a drop after this Omicron wave, and yet people are still retaining this behavior even though it's not exactly correlated to the level — probably whether you're going to wear a mask isn't completely tied to what this level is — but it can be both. So how would we include this awareness? We have this force of infection, which we've written like beta S I, and one way to do it would be to reduce this somehow. And obviously, for any choice I make here, I just want to make sure you view this not in the same way you view even something like F = ma or a stat-mech set of equations; it is a set of equations that are meant to represent a phenomenological effect, and obviously we still need to understand more about heterogeneity, how to see and include different factors, including individual differences. But you can imagine — let me see which one — I could do some short-term inhibition, where maybe I discount this rate by some factor, or some long-term, where I discount by the combined accumulation of cases. And what I am going to try to do next is explore the consequences — again, not because I think things exactly like this happen, but because there are effects of things like this that are, in fact, I think, generic. So, I hope the difference is clear: I don't expect that these are precisely the right equations, but equations like this have effects that are shared, and we should understand what those effects are — for example, for things like whether
or not the disease takes off so what I want to explore in using this kind of model to start is there an outbreak what is the herd immunity threshold what is the final size so some basic questions if we were to use a model like this any questions about the framework here that we are going to use I am multiplying beta si by that term different classes of models so I can play with one or the other I just want to point out that when I include R I am going to call that a long term awareness K is going to be some linearity exponent so if K is 1 I am assuming we reduce just by the proportion so if we just discount our force of infection by 1 minus I plus R whereas if K is 2 or 3 or 4 then we have an amplified effect in this model so I just want to have some flexibility there to have the right functional form but I can increase sort of the way that people react how stiffly they react to these changes good so let's take a look and imagine that we start off at our disease free equilibrium again 1 comma 0 and again we have S plus I plus R is 1 so we only need to keep track and we can write down and I think it's better to start with the long term awareness I can write S dot is minus beta S I 1 minus I plus R to the K I dot is beta S I 1 minus I plus R to the K no obviously I need to have them recover ok, so we have that kind of baseline model but keep in mind that the recovered or even I plus R is just 1 minus S because S plus I plus R is 1, I plus R is 1 minus S so this is keeping track in the same way when we tried to figure out herd immunity thresholds it's just keeping track of anyone who has been infected whether now or in the past the fraction of people who have been infected now in the past is just 1 minus the people who have not been infected 1 minus 1 minus S is just S which means that if I assume if I assume K is 1 so I just play with the linear model so I can do some calculations and then I can extrapolate what we're up getting a different kind of model which is equivalent to the model where we had an exponential distribution for heterogeneity right and we're back to changing the fundamental nonlinearities so that in some sense the behavior model in this simplified sense is changing the nonlinearity of the model this obviously helps because we can do a number of things relatively quickly we can already begin to see that when we have this linearized version and long term awareness then we can ask this first question is there an outbreak and is there an outbreak just means whether or not I dot is greater than 0 which really means if I pull this out which means because initially S is really 1 approximately 1 all this means is as before R not greater than 1 where again R not even in this awareness model is beta over gamma and part of the challenge here and I'm going to try to explain the implications of all of this is that when we have awareness at the start this is a very small number so the impacts of awareness don't kick in or influence whether or not the disease takes off which means that it has the same strength and in fact the speed is as before beta minus gamma I just replace S squared with 1 and I get the speed so I get the same strength and the same speed now let's say you're working in public health intervention and you have gotten some data which is allow you to measure the speed and then you deployed your epidemic intelligence service officers which are real things and some friends are such officers in the US and I'm sure they're equivalent here in Italy and other places and they go 
and try to figure out the time between infections, to narrow down gamma, to get a sense of the generation interval, and from that then the strength; and then they use these final size relationships and make predictions. And the awareness is going to kick in and people are going to change their behavior later. So now they might reach the same conclusion: that it's going to take off, we're in trouble, we have this R naught. The question will be: what value of the final size will they predict, and if there's awareness, will that number differ? So far it has had no effect. I'm assuming you're all following me, either in rapt attention or maybe wrapped attention — I'm betting on rapt. OK, so how do we figure out a few things? First of all, what's the herd immunity threshold? That we can just read off: the herd immunity threshold is when S is equal to 1 over R naught — just like before — but now to the one-half power. Let's think about it: in a traditional model, if R naught was 4, we'd have to have S go down to 1 fourth before the disease would start to slow down, which means 75% of the people would have to be infected first — and that's a lot, and there'll be overshoot. Here it says we only have to get down to 1 half, so the disease starts slowing down well before it would in the absence of awareness: awareness has kicked in and slowed down this disease, in the same way we would have reached these conclusions for the heterogeneous model. So already, predictions about how fast this thing is taking off might be the same initially, but they would start to diverge. I'm going to give you these clues and then make the graph in a moment, so it's also populated. So we did this one: same — speed and strength are the same; different herd immunity threshold; and "what is the final size" is right here below it, that way I can keep all the answers a little bit closer. Does anyone need this piece? So we can do the same trick as before and write dI dS is equal to minus 1 plus gamma over beta S squared, which means that I can write I is equal to minus S minus gamma over beta S, plus some constant, if I move it to the other side and I integrate. OK: initially I was 0, S was 1, which means we should replace this with 1 plus 1 over R0. Everyone following? So this is how we anchored it, but we're interested in a different condition: when the disease is done, I is 0 and S infinity is unknown — we're trying to find the final size of the outbreak. In which case we again have 0, but now we have something a little bit different. I'm going to move this to the other side and write: S infinity minus 1 equals — let's see if I can get this right — S infinity minus 1 equals 1 over R0 times (1 minus 1 over S infinity), which means this is the same as writing 1 over R0 times (S infinity minus 1) over S infinity. I just move that around, and you see the S infinity minus 1 cancel on both sides, which means that S infinity is just 1 over R0. This, previously, was the herd immunity threshold. That's nice: I've just shown that when we have this feedback, long-term awareness, then for every strength the awareness model must be less, in terms of the final size, than the model without awareness — because the model without awareness got to this point and then overshot, whereas here this is its terminal point. OK, I'm going to erase this section and just try to fill in a little bit more so I can wrap up this idea. So what have I shown you here, using this kind of model framework? I'm going to do two things; I might even skip this other part of the derivation.
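[Editor's note: the long-term-awareness board work collected in symbols — a reconstruction using the lecture's notation, for the linear case k = 1.]

```latex
\dot S = -\beta S I\,[1-(I+R)]^{k},\qquad
\dot I = \beta S I\,[1-(I+R)]^{k} - \gamma I,\qquad
\dot R = \gamma I ,
```

and since $S+I+R=1$, $1-(I+R)=S$, so for $k=1$ the force of infection is $\beta S^{2} I$. Then

```latex
\dot I = 0:\;\; S_{\mathrm{HIT}} = \Big(\tfrac{1}{\mathcal R_0}\Big)^{1/2};
\qquad
\frac{dI}{dS} = -1 + \frac{\gamma}{\beta S^{2}}
\;\Rightarrow\;
I = -S - \frac{1}{\mathcal R_0 S} + 1 + \frac{1}{\mathcal R_0},
\quad
I=0 \;\Rightarrow\; S_\infty = \frac{1}{\mathcal R_0},
```

i.e. a final outbreak size of $1 - 1/\mathcal R_0$, smaller than the standard model's final size at the same strength.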
I'm not that excited by it right now, and I'll move to one more thing. What I've shown you is two-fold. Here's R0, here's 1, here's outbreak size — the outbreak, which is 1 minus S infinity, because S infinity is the people not infected, so 1 minus S infinity is anyone who's been infected. And before — if this is 1, this is one-half, here's 2, here's 3, here's 4 — this is how it's going to look with this long-term awareness model: this was the herd immunity threshold, but now that's the final size. And this is how it's going to look for a standard model. And if I were to increase that value of k — I can't give you a closed solution, but the more that people respond in this way (this is k equals 1, long term; k greater than 1 can have this kind of effect), the more that they're responding to it — which means, you see, there's a gap. Here's this little gap, and that little gap might not be so little; depending on the value, it could be quite big. When or where did that gap show up? If we looked at time, and we looked for example at the cumulative infections — in other words, I plus R — in a standard model we would get this, and obviously this has to always go up, because I'm adding things together as they move into the category; it's always going up. And I hope I have one other chalk color — good. This is the standard one. The point of what I'm trying to tell you, and why this tends to matter a lot, is that what's going to happen in the awareness model is that initially they're going to look the same; later they'll start to diverge. So you will tend to find this — this is like the long-term awareness model. This final size, which now I'm plotting here as I change R naught, becomes the extent to which the public health authorities overstate the potential size of the epidemic, because they can't necessarily see how much feedback there's going to be later on, even without external policy drivers. And for the most part, predictions of outbreak sizes from these conventional final size relationships, from the mean field, tend to overstate the size of epidemics. And what you can see is, even if you disagree with me on my particular form, the consequence of an awareness model is that it tends to have the same conditions here, but because you miss negative feedback later on — generically speaking — if we don't take that into account, we over-predict the size of the epidemic. OK. Now, if there's short-term awareness, the effects are going to be smaller insofar as I use this particular — I've even erased it — the particular term where I do 1 minus I, in part because the prevalence of a disease tends to be a very small fraction. But if that somehow is magnified or constrained, there's nothing to say: in the short term I can be very aware, and even a small percentage can lead to a big effect. So it's clear this model is a bit too simplified, because it doesn't necessarily scale with the maximum prevalence. These are the two ideas I wanted to get to in this first part of the behavior section. Any questions about these? Awareness that has a negative feedback tends to change long-term outcomes, but in doing so the near term tends to be the same, which leads, generically speaking, to the potential for predictions that are actually higher than what ends up being observed. A question, or I'm going to move on. You didn't draw the short-time effect on the cumulative infections? Yes — it's going to be over there; the long term has a much greater effect than the short term, but at long times they are still — so if I had a third color it would go over like that, but they would still be better than the standard ones. Other questions? OK, now I'm going to erase all of this.
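[Editor's note: a minimal numerical sketch, not from the lecture, comparing cumulative infections in the standard SIR model and the long-term-awareness variant described above. The function name and all parameter values are illustrative assumptions.]

```python
import numpy as np

# Forward-Euler integration of the standard SIR model and the long-term-awareness
# variant, whose force of infection is beta*S*I*(1-(I+R))^k = beta*S^(k+1)*I.
beta, gamma, k = 0.6, 0.2, 1          # illustrative: R0 = beta/gamma = 3
dt, days = 0.05, 365
steps = int(days / dt)

def cumulative_infections(awareness: bool) -> np.ndarray:
    S, I = 1.0 - 1e-5, 1e-5
    trajectory = []
    for _ in range(steps):
        damping = S**k if awareness else 1.0        # (1-(I+R))^k = S^k
        new_infections = beta * S * I * damping
        S -= dt * new_infections
        I += dt * (new_infections - gamma * I)
        trajectory.append(1.0 - S)                  # I + R: everyone ever infected
    return np.array(trajectory)

standard, aware = cumulative_infections(False), cumulative_infections(True)
print(standard[-1], aware[-1])
# Both curves grow at the same initial speed; the awareness curve peels off
# later and saturates near 1 - 1/R0, below the standard model's final size.
```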
Is that alright? Fine. So I'm going to keep that diagram up there, because I'm going to modify it a little bit here at the end; then I have one more vignette, and then I'll switch to the generation interval concepts — we'll see how we're doing. Good. OK, so I've shown you this kind of model, but I'm now going to show a slightly different one, and to do this I'm going to make one slight modification that gets things, unfortunately, a little bit closer to some realities, where I'm now going to be explicit about the fatality of the disease. And there are even some ways I thought I was going to include it there — I just didn't quite have enough time, and there's not enough time in these lectures to talk about some other features having to do with asymptomatic and symptomatic infection. But clearly with SARS-CoV-2 we have an asymptomatic route and other effects that aren't just related to severe cases. Nonetheless, there are cases that can sometimes lead to fatalities — that's evident — and that fatality index on a population as a whole can vary. Remember we talked about the infection fatality rate versus the case fatality rate. The infection fatality rate, which I'll write here as f, says the probability that someone who's infected ends up having this fatal outcome, so that there's a death as an outcome. For SARS-CoV-2, as a population, a little bit less than 1% is probably appropriate. It is strongly varying with age, so for the 75-plus you have fatality rates of probably 8-10% — that's very high; it drops down to very much lower than 1%, and even lower for people your age. Yes, there are comorbidities and co-indicators, many other indicators that might relate, but the fundamental one is age. I can't add all those features into this model. What I want to do is simply add this component and ask the question whether awareness of fatalities — and really I'm going to actually put awareness of new fatalities — changes our behavior. Because again, I want to go back to something that I harped on in the early lecture: we haven't had all these fundamental changes in behavior, in the way we're operating society, because of cases alone. If we did, then for the seasonal beta coronaviruses we would totally shut down every year; we don't do that. The consequences are because of the severe outcomes. So I want to link it to the severe outcomes and ask the question: in this kind of model, does something else happen? So what I'm trying to do in these lectures is introduce concepts in, hopefully, the simplest way possible, and there are some new things that happen with SARS-CoV-2. Because, in some sense, if you believe everything that I've said the last couple of days as being a reflection of reality, then you would say that when we measure the strength and speed we get something about final size outcomes — and again, with heterogeneity or behavior we might be wrong — but that fundamental concept of a disease going through, going up and down, has still remained. Is everyone following my logic? It's just changing how high the peak was, or when it hit, or how long it lasted, but fundamentally we get an up-and-down cycle, that's it — relatively symmetric and relatively smooth. So now I want to ask: what if we had a different kind of model? And you may be frustrated, because every time I make a new change I don't keep the old ones — I view these as different factors that are the things you should be aware of when you're actually building these models, and how to combine them is still an open question. So let me just write this down, and then I'll take the question. So I'm going to go back, setting aside
everything I just told you about heterogeneity, because I want to put in awareness in a slightly different place. I'm going to call this the incidence of new deaths, delta of t; otherwise it's a standard SIR model. I'm just making explicit that we're going to split removed individuals into those who recover — not just removed, but recovered — and fatalities. If you could yell it out, because otherwise we need the mic and I can repeat it. My question is — the shapes in the previous models all look the same. My question is — well, I have not studied this, so my question is if it's possible to distinguish between the different models; or we could say, I mean, empirically we see what is happening and we want to say it's this model or this other model — is there a possibility of differentiating between the models, and fitting the parameters of one model? I don't know — it's a great question. I know that, generically, there are many ways to get an up-and-down shape. I'm about to show in a moment a different kind of shape, which I know we can't get with the standard model, but I know at least two ways to get it — so that would say it's a little hard for me just to say, given that shape, I can immediately infer the process. These things tend to be entangled: was it awareness, was it a lockdown? And when people build these models and they see changes in behavior and then they make conclusions that the lockdown saved X number of lives, that's a conclusion made by assuming that the change in the interactivity was driven entirely exogenously. Whereas if I say, well, some of this was already going to happen — we know that people tend to be aware and then change their own behaviors — and in fact you can see, for example, that people started to wear masks early. Mask policies have not always been shown to be effective, because when you look at the data and you say, here's the date when the mask policy was set, and you ask how much change in mask wearing there was — people started to anticipate this before, whether because they thought the policy was going to happen or because they knew that they needed to do it; and then, on top of that, people don't always wear them the right way — they wear them below the nose and all sorts of ways. So then you say: did the policy have an effect? So when people make the assumption that it was all the policy, they forget that there's this other factor too, and they've probably lumped them together. So the reality is that two things, I think, are happening — I'm answering your question maybe in a long-winded way, but I think I'm answering it. One is that things are happening together and it's hard to extract both; and the other is that there are sometimes, in fact, multiple mechanisms that generate similar kinds of qualitative patterns, and I don't think we're at the point yet where we know how to absolutely identify the contributions of either — we're not quite there yet. So I would say it's a good question, but it's not yet one that I can answer. Yeah — the heterogeneity and the behavior: both of them could be interpreted — the S squared behavior, instead of S, could be interpreted as heterogeneity or behavior. So my question would be more like: is there a possibility of differentiating between the S or the S squared? Yeah, so for that one, in terms of what is the appropriate model — so let's talk about kind of ultimate and proximate explanations. Yes, you can probably show — and I'm about to show you — that we have evidence that these standard SIR models are not right, and when people actually want to do forecasting they don't use them, because they would never get it right, or you'd have to
keep modulating things so you can say that the effective nonlinearity is changing and it's not necessarily the original one then again if you just keep reloading all of that into beta you could just keep changing beta of time and incrementally just have things tracked to fit but I would say that we know that it's not as simple as these beta si models also any kind of social spatial structure we expect a different nonlinearity but I don't know then if that's the awareness driven or initial heterogeneity or both and I can't tell that apart which is what I'm trying to say but now that with that good question I want to go back to something new that I want to try to introduce if you allow me to say this is the incidence of new fatalities you could say it's the number of new deaths per week for example we should think of that as delta and we have this negative feedback that it's possible that instead of using the standard model I might replace the beta si with something like beta si 1 plus delta over some characteristic level again to the k and I'm just using this k generically whenever I want some flexibility in nonlinearity what does this mean it means that if there are no fatalities I have my standard model there's some critical awareness where public health authorities or individuals start to say whoa that's way too many on this weekly basis and we see that starts to decrease our interactions immediately sorry delta is d we should think of this as the number of new fatalities per week per day let's say per week as being some indicator so it's not the cumulative but it's what's going on right now so I'm aware things are bad and I'm aware that things are bad aware of the bad things so I'm actually using the severity part to be the indicator for what drives me not the case part and if you look at reactions there's all this they're just cases and unfortunately there's also so many unfortunate things there's a lag between cases hospitalization fatalities I won't be able to explain all that today but here I have don't have an intermediate case infections become fatalities on a short time scale I will explain at the end what happens when you introduce a longer time scale so now what I'm going to do is rewrite this and then ask does anything new happen do we just get this inevitable climb does it just change for example the herd immunity threshold or does it change the final size or something else so if we can rewrite this now s dot minus beta i over 1 plus delta over delta c some characteristic rate to the k and remember I'm going to call this incidence of new fatalities this delta everyone with me we start off in the disease free equilibrium at the disease free equilibrium i is zero we have delta is zero you can begin to see that we're going to have the same takeoff problems as before I'm not even going to do that thing that I've done many times already so now I want to know though if something else could happen and the standard way that I'm interested in seeing if something else could happen is I've been setting i dot equal to zero and interpreting that value as the herd immunity threshold right that the reason the mechanism why we have this downturn in cases is that the susceptibles have been depleted in a behavior model that if it's long term awareness then the susceptibles don't have to be depleted as much because everyone reacted and because it was a long term kind of reaction then effectively the beta decrease which means we don't need to deplete the susceptibles as much to reach herd 
immunity. In the heterogeneity model, because of the sculpting, we also don't need to reduce the susceptibles as much, because conditional upon the fact that you haven't been infected, you have a lower vulnerability. OK. I claim — and this is going to be not a proof but a claim, and you can read more; I'll put up the paper which shows it — that a new thing can happen. I'm going to set I dot as zero. OK, what happens? Beta S I over (1 plus (delta over delta c) to the k) equals gamma I: new infections balance recovery. And you can see that in the standard way of doing things we'd erase the I's and we would get our herd immunity threshold, but here we get something a little bit different. I'm going to make a different kind of claim: instead of solving for S, I'm going to make a somewhat insane kind of assumption, because I'm interested in whether or not we can get a peak very early on, in which infections stop going up even though most people are susceptible. Everyone understand that bit of a crazy claim? Rather than solving for S, I'm going to assert that I still think S is approximately one. You should be somewhat confused, but maybe intrigued. If S is approximately one, what that's saying, if I divide by gamma, is that R naught equals 1 plus (delta over delta c) to the k, which means that R naught minus one equals (delta over delta c) to the k — sorry — where delta, remember, is gamma f I. Which means that we could have a quasi-stationary state, in which, if we get to a point where the number of infected individuals is something like this, then we have our number of fatalities, and we have a rate that's being generated by a certain number of infected individuals. Even if R naught is very high, if this k — if we react very strongly — this is some slight perturbation to just that value of the awareness where we start to change behavior. And if that happens, I claim — which one of these can I erase? I feel like I need them all; where can I erase something? This one, I think I can erase — I claim that if I look at time, what can happen is that the disease can take off, and depending on what k is I might get a little overshoot, but I could get my I signal to look like this, where these values are very, very small compared to S. The reason why we're plateauing here is not because we've run out of susceptibles, but because there's awareness of how severe it is, and people take measures to reduce their interactions, and you get this balance. This can then go on for a very long time: you can get the emergence of plateaus or shoulders rather than just a single peak. Again, if I were to put this into perspective and I put S over here at one, this whole time S looks like it has barely changed. This is totally unlike the mechanism by which — the reason why we get this single peak, and going down, is because the susceptibles now drop so radically and now we should see this just heading downwards. This actually was one of the most relevant, and for those working in this space one of the most dangerous, misunderstood ideas of the early pandemic, because, at least in my state of Georgia and many other places, there was this initial peak where it started to go down a little bit, and, as I showed you yesterday, some of these models just assumed what goes up must go down, and therefore this must be an indication that — rather than this signal of susceptibles — we must have actually dropped our susceptibles enormously; how else would it go up and down? That's what we expect from every other model that I've shown you already.
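[Editor's note: the fatality-awareness board work collected in symbols — a reconstruction using the lecture's notation, with δ = γfI the incidence of new deaths and no lag between infection and death.]

```latex
\dot S = -\,\frac{\beta S I}{1+(\delta/\delta_c)^{k}},\qquad
\dot I = \frac{\beta S I}{1+(\delta/\delta_c)^{k}} - \gamma I,\qquad
\delta = \gamma f I .
```

Setting $\dot I = 0$ while asserting $S \approx 1$:

```latex
\frac{\beta}{1+(\delta/\delta_c)^{k}} = \gamma
\;\Longrightarrow\;
\Big(\frac{\delta}{\delta_c}\Big)^{k} = \mathcal R_0 - 1
\;\Longrightarrow\;
I^{*} \simeq \frac{\delta_c}{\gamma f}\,(\mathcal R_0 - 1)^{1/k},
```

a quasi-stationary plateau whose level is set by the critical awareness $\delta_c$ and the stiffness $k$, not by susceptible depletion.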
If that were the case, there's even another thing that happens. If you think that the indications of a little bit of a drop are the signs that we've reached herd immunity — but we've only documented a few cases; this was happening in late April 2020, in Georgia and many other places, where the number of cases, relatively speaking, was very low; we weren't testing, we didn't know; yes, there were fatalities, quite a lot in some places — but if you think that's the case, that a decline, in a classic model, is through susceptible depletion, then if we've only measured a half percent or one percent of the population being infected and you see a decline and you think it's herd immunity, that means that we must be under-ascertaining cases by a factor of 50 or more, maybe 50 to 75 or even close to 100. And if we're under-ascertaining cases by that much, it means that COVID is not much worse than the flu or the common cold, because the case fatality rate must also be divided by that ascertainment bias. Does everyone understand this sort of logic — which was wrong, which is wrong? And it was wrong because people misunderstood the fact that diseases need not have a single peak as soon as you go away from these classic models in which you only characterize everything by one number, R naught. If that's all you have — there's no behavior, there's no awareness, there's no feedback — then you would expect this to happen, and that means this is happening because we've depleted the population; whereas if there's feedback, you can get this long plateau or shoulder. And I'll just point out — I can't do it in this class without either doing a simulation or other means, and I'm not going to turn on the computer to do it — imagine, of course, that this category doesn't just end up directly in this category: there's actually an exposed period, there's a hospitalization period, and this can lag typically three or four weeks behind when someone is infected, to where ultimately we start to see the signal in fatalities. Because of that, this awareness is in some sense lagged: there's a delay in the system. And because there's a delay, it turns out — I think it's okay to erase this, you get the idea, I can just put it up here — because there's a delay in the system, if you introduce delays you can actually get something like that, where, if I think about this now as the deaths per day or per week, and here's my critical awareness value, here's time, you get oscillations in these models. So you get a first peak and you get a second peak, and this is not necessarily because we've depleted the susceptibles and a whole new group of susceptibles arrived here; we've just changed our behavior. And this has happened again and again, where things get better, so people relax; as they relax — but it's delayed information — then things start to get worse again, then we start to implement measures. And I can't say necessarily that's gonna keep happening vis-a-vis external forces, because politically there's another aspect there, and that's beyond the scope — you can see that at some point it stops becoming a public health decision. But from an individual behavior perspective, it is possible to get plateaus, shoulders, and even oscillations using this kind of mechanism. And now I wanna go back to your question about the mechanisms that can generate similar qualitative effects: this is a new kind of feature that we did not see before in the other models, and I can generate it — I'm gonna show you one other way to generate it. Imagine I have a model in which I have — and I'm not even gonna write equations, but I'm gonna write some susceptibility distribution — and I'm going to have two processes. One is sculpting.
You know what I mean by sculpting now: it's a force of infection that is proportional to beta epsilon I, so each one of these categories gets hit by beta epsilon I — beta I is shared, epsilon is particular — and that moves things in this direction. We didn't get this early plateau because of that, because we just slowed everything down, but we only have one way to go. We could also add stochasticity in behavior, meaning people might change slightly; when you change slightly what you're doing, what you end up doing is going in both directions, not only down — you flatten the distribution, because we have some diffusion; you can imagine a diffusion operator. But while diffusion is doing that, this pulls you this way; stochasticity can resupply that — people start to randomly revert. You could also imagine some reversion or fatigue. So anything that drives you that way — stochasticity or fatigue — will resupply this higher epsilon, and you could also get a plateau. So I just want to point out that there are new features here. And if we look at the levels — for example, in my state in April 2020, our team built models of this kind of spread in our state, and we cautioned that we could be setting ourselves up for a long plateau, and that people shouldn't, as they see a peak come and pass, assume that the danger has passed. In the south, the southeast, we have tornadoes — you probably have only seen or heard of them in The Wizard of Oz, if you've ever watched that movie; they do actually go through the middle of cities, it's not just in cornfields — so you have a notion of risk that comes and, in some sense, passes. What this model is saying is that our behavior can reduce the risk so it passes, but if we change, we pull the risk back towards us, and that can happen at levels of cumulative infection that are far, far lower than we would have expected. The susceptibles, effectively speaking, are almost everyone — almost everyone remains immunologically naive. That was true almost everywhere for the first peak: it was happening at levels at which the vast majority still remained; we did not yet deplete the susceptibles, we had not yet reached herd immunity in any classical sense, but we found peaks and plateaus, and then it ended up, on the backside, going down much more slowly than it had increased on the front side. OK, any questions? That's what I wanted to say about that, in my view. Sorry — what? Did this behavior come out from taking into account the death rate, or...? The reason why it came out is because what happens is that this force of infection is decreased by the intensity of these bad outcomes. Now, you could say you could have used a hospitalization model and replaced that with some notion of severity, or you could even use cases — but I wanted to make the death component explicit, because that is really what has been driving our changes: it's the fact that there have been so many fatalities that has drawn this much attention and change, that current value. So it is a notion of short-term awareness — last time I talked about long term; this is a short-term but delayed awareness — and that is the decreasing mechanism that is leading to the plateau. OK. Should this impact the decision-making of the authorities taking measures and so influencing behaviors? I mean, not taking them, we see that we will have a lot of infections and deaths, but if we take them we are going to have a long plateau, and so we should keep them for a longer time; and in this model we are not taking into account the cost of these behaviors and the measures, which can cause more deaths because of economic crisis and delayed diagnosis in the hospitals and
delayed surgeries it's another great question now I only have a limited amount of time last summer we tried to build a model which included socioeconomic costs and public health costs and say what should people do if we care about both so often it's presented here I'm presenting just one side but you're right it's not necessarily it's often presented as a false dichotomy either we care about one or we care about the other it's very hard to operate your business if people are sick and worried about infection how does one reconcile it in this model at least what we tried to ask to formally deal with this thing is say what happens if we don't think everyone should have the same behavior what happens if our policy change with your disease status and so in March or April 2020 our group said well there are many people who have moved into this recovered category which at least in the short term and now we recognize that the duration of that was unknown but we had a notion that it was at least 6 months out that it's possible that those individuals could start to actually do more and dilute interactions between potentially risky S and I individuals we proposed something called shield immunity and we talked about immunity visas rather than immunity passports because I view them as a short term duration rather than as a long term duration and when we tried to formalize this we tried to ask question balanced minimizing this while also noting that there's a cost if we don't have any interactions and we asked an optimal control algorithm in some sense to say what should we do what it said is that people who are infected should isolate people who are recovered should go out people who are susceptible should do things depending on the state of the system when we proposed that there was some pushback how could you possibly differentiate between people why do those people get to go out and you don't two years later I mean we're operating with green passes all the time and those are meant I think in some sense to say that we are trying I'm not saying it's optimal and I'm not going to claim that these passes are optimal there's some concession that we can't all do the same thing and that risks differ and that we have to balance both that itself is a whole other lecture so I'll have to leave it at that and I can share a paper where we talked about those issues but again those stop becoming just scientific issues and the scientists although we may propose ideas in the public health folks then I can assure you we don't have the first seat at the table in those kind of discussions thank you ok good I have about 20 minutes left and so I only have about as much material as I've done left to tell you so let me skip some of that material and try to focus on some of the highlights I'm gonna erase all this now and do one more thing ok as I'd like to teach you one new method and one new concept that's relevant and we can then tomorrow wrap it up good so I wanna go back to something that I started with in which are these speed strength relationships there's been a theme I've been returning to which is we measure the speed we infer the strength so this is just the I measure the cases and the exponential rate of case increase I infer the strength this basic reproduction number and I predict the size right and what I've gone to today is the size in which depending on what my model is heterogeneity and behavior I often am doing these two of the same moves but I have very different predictions here ok and just to remind 
everyone that there's this relationship. Keep in mind that if I have I dot equal to beta S I minus gamma I, then initially this is just gamma times (beta over gamma minus 1) times I, which means that this is (R naught minus 1) over T_I, times I. This is the speed, which means that there's a speed–strength relationship: r equals — and I'll put the gamma back; actually no, I'm gonna do it this way — (R naught minus 1) over T_I. I can think about it this way, or I can think about it as: R naught is 1 plus r times T_I. I measure this, I infer a strength. You can see immediately why in my first lecture I said that, conditional upon the same data — you measured r — if the generation interval, the time over which you infect others, goes up, then my strength goes up. So I see the same data; if in fact the disease has a longer generation interval, it has a higher R naught, and therefore potentially can infect more people over the long term. OK, so we established that relationship in this simple model. But what do I do when I have an SEIR model, or even an SEIRD model — and I mean the D not in the sense of the awareness model but in things like Ebola, where there's actually a very strong post-death transmission route, which you may not be aware of. In other words, people who are dead, at burial ceremonies — this is one of the major drivers of Ebola spread, because you can still be infected from the individual, from touching or handling a body, which in many cultures is part of death rituals, sacred burial rituals. This is a different model; it gets complicated. What about if I were to take one of the simplest models for SARS-CoV-2 — and maybe I'll make it a little bit more complicated: we get exposed; some are asymptomatic, some are symptomatic; the asymptomatic recover, the symptomatic mostly recover. This is going to get very annoying for us to sort of unpack, each one of those. OK, yes, question: even in the simple SIR model, when we want to estimate R naught from r, we have to know T_I? That's what I'm saying — that's what I said at the very beginning: that's why you can't actually do this unless you know this, which is why there was that identifiability problem that I showed you on day one, where you had the same r but very different R naughts, and you can't figure it out unless you do some other epidemiological homework, either because you know something about the disease or because you go back and trace individual outbreaks and figure out: this person B was infected by A — when did B come in contact with A? And the only way to figure that out is through some very detailed work. You could do fitting, but you have a big identifiability problem. So yes, for an emerging infectious disease, if you don't know this, you can't make the inference, absolutely. OK. So you can see we could get into all sorts of trouble, but I want to introduce one last methodological approach, which is: rather than write with this explicit framework, what I'm going to write is S dot is equal to minus little i of t; capital I dot is little i of t minus gamma I; R dot is gamma I. And I'm going to call little i the incidence — it's the new infection rate — and capital I I'm going to call the prevalence: how many people are infected. OK. And I'm also going to make a further assumption: initially, I'm going to assume that both little i and big I are going to be growing exponentially at some speed, because, as we've already shown, we expect both to grow early on. But I want to figure out what this little incidence depends on.
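[Editor's note: the bookkeeping just described, in symbols — a reconstruction; lowercase i(t) is the incidence, capital I(t) the prevalence.]

```latex
\dot S = -\,i(t),\qquad
\dot I = i(t) - \gamma I,\qquad
\dot R = \gamma I,
\qquad\text{with}\quad i(t)\propto e^{rt},\;\; I(t)\propto e^{rt}\ \text{early in the outbreak.}
```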
But I want to figure out what this little incidence depends on. The incidence now is going to depend on prior infections, so I'm going to look backwards in time and ask how many individuals were infected a time a ago, which is just the incidence i(t − a), and multiply this by the number of new infections per unit time caused by someone who was infected a time a ago; call it n(a). We get this recurrence relationship, i(t) = ∫ da i(t − a) n(a): the incidence now depends on the incidence in the past, and n(a) is the number of new infections caused by an infection that started a period a ago. Now I could break this down into l(a) and m(a), with n(a) = l(a)·m(a): l(a) is the probability of still being infectious at age a of the infection, and m(a) is the rate of infection at age a. So: do I get to that age, have I not yet recovered, and how many people do I infect per unit of time given that I'm at that age? The reason is that the new incidence depends on the fact that there are people infected a little while ago who are causing new infections, and some people infected a long time ago causing new infections, and we have to add up all of those cohorts, which is what that integral says.

No, sorry, there's no negative sign; it's i(t − a). I'm going to take this integral out to infinity. Of course there's a time limit, because the takeoff happened at some finite time, but you'll see that in the renewal equation framework we're imagining the limit of an infinitesimally small initial start, so I can go back, and yes, technically it would have a cutoff, but it's not going to matter that much; we're going to multiply something by essentially zero at that cutoff anyway. So bear with me and accept that I can take this integral out to a very large number. You're right that technically we have a finite time where things took off, but mathematically, if I have an initially very small value, then when we get to these big values of a this term is going to go to zero and knock it out.

OK, so initially we have exponentials, which means i(t) goes like e^(rt), which means 1 = ∫₀^∞ da e^(−ra) n(a). Again, I don't know what this n(a) is yet; I don't know how many new individuals are typically infected by an infected individual whose age of infection is a. But if I did, and if I measured this r, this should work. Now recall that the definition of R0 is ∫₀^∞ da n(a): the average number of new infections caused by a single infected individual in an otherwise susceptible population. This n(a) is the number of new infections caused at infection age a, so if I integrate from zero to infinity I get the total number of infections, which also means that we can use a normalized distribution. All I'm doing now is finding the probability that an infected individual infects someone when they're at age a: g(a) = n(a)/R0. This is the generation interval distribution that I keep talking about; it's just the number of people you typically infect at age a, normalized by the total. So wherever I see an n(a), I should just replace it by g(a) times whatever that total number is, but that number is R0, which means I end up with g(a)·R0, which I can move over, and I get 1/R0 on this side. So if I make that replacement in the renewal equation: 1/R0 = ∫₀^∞ da e^(−ra) g(a).
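As a rough numerical illustration of how this renewal relation can be used (again my own sketch, with illustrative parameters rather than anything from the lecture), the snippet below iterates the discretized renewal equation forward, reads off the growth rate r from the simulated incidence, and then inverts the relation above to recover the strength:

```python
import numpy as np

# Minimal sketch (illustrative parameters, not the lecture's): iterate the
# discretized renewal equation i(t) = sum_a i(t - a) * n(a) * da, with
# n(a) = R0 * g(a), then recover R0 from the simulated growth rate r via
# 1/R0 = integral of exp(-r*a) * g(a) da.

da = 0.1
a = np.arange(da, 40.0 + da, da)         # ages of infection (days)

R0_true = 2.5                            # assumed reproduction number
gamma = 1.0 / 6.0                        # assumed recovery rate (per day)
g = gamma * np.exp(-gamma * a)           # exponential generation-interval density
g /= g.sum() * da                        # normalize on the grid
n = R0_true * g                          # new infections per unit time at age a

steps = 2000
i = np.zeros(steps)
i[0] = 1e-6                              # tiny initial incidence
for t in range(1, steps):
    past = i[max(0, t - len(a)):t][::-1]     # incidence da, 2*da, ... ago
    i[t] = np.sum(past * n[:len(past)]) * da

# Growth rate from the late, purely exponential phase of the simulation
r_sim = (np.log(i[-1]) - np.log(i[-200])) / (199 * da)

R0_inferred = 1.0 / (np.sum(np.exp(-r_sim * a) * g) * da)
print(f"simulated r = {r_sim:.3f} per day, inferred R0 = {R0_inferred:.2f} "
      f"(true R0 = {R0_true})")
```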
This is simply a moment generating function, right? If r is zero I get one; if I take the derivative of this with respect to r at r equal to zero, I can get the average age of my infections, and I can do that for all of them, so I can generate all the moments. What you can see is that this is M(−r), if I call this moment generating function M, where again I'm saying that M(z) = ∫₀^∞ da e^(za) g(a), and let me just make sure I got that right. Good. So I have the moment generating function, and if I knew this thing I could just calculate M(−r) and I would infer my R0 = 1/M(−r). This is the new version of these speed-strength relationships.

So let's put this into action. Does everyone understand this mathematical formalism? I have this renewal equation, and from it I can see that if this shape were known in advance, if you knew something about this shape already, you could observe the speed and figure out the strength. I think we're OK, and I'm also kind of building towards a near conclusion here, so let's put this into action. Let's go back to my example of this single-category I model, where I have a constant rate of recovery, which means I have an exponentially distributed period of staying there. So, for example, I might say: what's the probability I make it to age a? That should just be l(a) = e^(−γa). Don't make me do the integral of an exponential; it's that, which means at a = 0 I'm definitely still there, and at a = ∞ I'm definitely not. I don't worry so much about the upper limit of that integral at infinity, because I'm hitting this with an exponential which is declining. But remember, the rate of infection at age a is just β, which means that if I look at what n(a) is, I get n(a) = β e^(−γa). Then if I take the integral, just to make sure we're not crazy, ∫₀^∞ β e^(−γa) da, you can see this just pops out β/γ, and in fact we recover our R0; this is the right definition. If I then take this moment generating function approach and ask what ∫₀^∞ da e^(−ra) g(a) is: we normalize n(a) by R0, which gives g(a) = γ e^(−γa); that is my generation interval distribution, and it's exponential. Which means 1/R0 is that integral, which you can just see should be something like γ/(γ + r), which means (γ + r)/γ = R0, which means R0 = 1 + r/γ = 1 + r·T_i. Miracle: I have very little time and I didn't mess up. Wherever I am in my notes, who knows where the notes are.

OK, we've used the same formalism. It turns out that there's massive interest in figuring out what the heck these values are very early on. For example, we knew that there was this asymptomatic route: if a portion of the new transmissions were going through the asymptomatic route, and they had a different average period of infection, this could mean that the disease would have a very different outcome in terms of the strength. Why? If you're infected and you have symptoms, you probably aren't attending this class; I hope at a certain point you heard people cough, but at a certain point you didn't hear people cough. However, if you felt fine, not only is it possible that your infectiousness might have stayed elevated, might have stayed higher longer, for whatever reason, maybe you had a lingering infection while feeling fine, but certainly the effective rate of infection could have been higher, because you have a behavioral impact.
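Backing up to the worked exponential example for a moment, here is a quick symbolic check (a verification sketch of my own, assuming sympy is available; it is not part of the lecture):

```python
import sympy as sp

# Symbolic check of the worked example: with g(a) = gamma * exp(-gamma * a),
# the moment generating function at -r is gamma/(gamma + r), so
# R0 = 1/M(-r) = (gamma + r)/gamma = 1 + r/gamma = 1 + r*T_i.
a, r, gam = sp.symbols('a r gamma', positive=True)

g = gam * sp.exp(-gam * a)                            # exponential generation interval
M = sp.integrate(sp.exp(-r * a) * g, (a, 0, sp.oo))   # M(-r)

print(sp.simplify(M))        # gamma/(gamma + r)
print(sp.expand(1 / M))      # 1 + r/gamma
```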
So the other point I want to make here is that this is not something associated only with the etiology of the disease. In the absence of treatment, this one (the period of infectiousness) is probably more or less the same for the same individual over time; this one (the rate of infection) is not, it's being modulated by behavior, which means the generation interval is not something constant; effectively speaking, it is itself dynamic. But even at the start, it's possible that an asymptomatic route could actually lead to a higher value, because more of the disease is running through that route, which has a higher T_i.

Now, I haven't explained, and I won't have time to explain, what you do in practice. This is a very mathematically elegant thing, and if you read this paper, which I'll share, by Wallinga and Lipsitch, they calculate all these integrals for you, and this is used in demography as well, for age-structured populations. The problem with those calculations is that it seems you need to know a lot about the distribution, and if you don't know precisely what it is, how do you do this? It turns out, with some colleagues, we said: let's just treat this as a gamma distribution with a mean and a variance. If you do that, you can more or less approximate most generation intervals, and what you need to know is the mean and the variance. So if you can at least get data on the mean and variance, rather than the precise distribution, you can actually do these kinds of calculations very early, by replacing the true unknown g(a) with an approximation that you take from data. There are some other features here, one of which is that if you have two different routes, like the ones I drew here, you can actually add these things up, because if I have independent routes then I'm just adding up the generation interval distributions from the two parts; but I might not know how to weight them, so there becomes yet another identifiability problem (a short sketch of both points appears below). So when the field as a whole observes an r and uses this method, or even these direct methods, they are often making assumptions about the generation intervals, or about the routes by which transmission occurs, and the answer turns out to depend strongly on those assumptions.

So, I thought there was one question here in the front that I wanted to wrap up; was there a question? Another question? In my view I more or less explained that now. So I just want to wrap things up, because we're about to take a picture; let me just give one or two last comments. By tomorrow morning, please deposit your notebooks using the Dropbox file request link. This week I've tried to introduce you to the motivation behind why we should even learn any of this mathematics and some of this nonlinear dynamics, and I know, as a spring college on the physics of complex systems, I've tried to emphasize physical intuition and the biological mechanisms throughout, and really use the nonlinear dynamics toolbox to explain concepts of speed and strength and herd immunity and final sizes, a lot of which may be new for some of you and old for others. But I hope that, by yesterday, through the heterogeneity, through this behavior, and through these generation intervals, you have a broader view of what goes on, and what goes in, in terms of not only driving dynamics but also what we do in terms of inferring things that we might want to know. And my last thought here is that inferring the strength doesn't mean being passive: often the reason why people do this is that they want to reduce R0 to be less than 1, if they can figure out the contributions to it through different routes.
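Going back to the gamma-approximation and multiple-route points from a moment ago, here is a minimal sketch of both (the growth rate, means, variances, and route weights are made-up illustrative values, not numbers from the lecture):

```python
# Minimal sketch with made-up numbers:
# (1) approximate the unknown generation interval by a gamma distribution
#     matched to an assumed mean and variance; its moment generating function
#     gives M(-r) = (1 + r*var/mean)^(-mean^2/var), so R0 = 1/M(-r);
# (2) with two transmission routes, g(a) = p*g1(a) + (1-p)*g2(a), and the
#     inferred R0 depends on the unknown weight p.

def M_gamma(z, mean, var):
    """Moment generating function of a gamma-distributed generation interval."""
    k, theta = mean**2 / var, var / mean     # shape and scale from the two moments
    return (1.0 - z * theta) ** (-k)

r = 0.15                                     # assumed observed growth rate (per day)

# (1) same mean, different variance -> different R0 from the same speed
for var in (9.0, 25.0):
    print(f"mean 6, variance {var:>4}: R0 = {1.0 / M_gamma(-r, 6.0, var):.2f}")

# (2) short route vs long route, mixed with an unknown weight p
for p in (0.2, 0.8):
    M_mix = p * M_gamma(-r, 4.0, 4.0) + (1 - p) * M_gamma(-r, 10.0, 10.0)
    print(f"route weight p = {p}:   R0 = {1.0 / M_mix:.2f}")
```

With the same observed r, the inferred strength shifts noticeably as the assumed generation-interval variance or route weighting changes, which is the identifiability issue just described.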
These contributions are often estimated using exactly these methods, so that there can be an effort of control. So I just want to make sure I'm not leaving you with the, maybe even a bit sad, idea that somehow we measure an observable and then watch the inevitable; rather, these models are also helping you figure out what the contributions to R0 are, so that you can take action, hopefully in a more concerted and purposeful way. Thank you, I'll see you tomorrow online, and we have a picture, so thank you very much.