Yes. Yes. Okay. Please go ahead. Good morning, everyone. My computer isn't working today, so I swapped the slides over to the PC, and sometimes there are Mac-to-PC issues, so I haven't seen what's happened; some of the slides will look different, and we'll just see how that goes. This week I am switching gears. Last week I gave a series of lectures on viral ecology, and I promised that this week I would give a series of lectures on viral epidemiology and epidemics. My plan, as I'll explain in a few minutes, is to give you a broad overview this morning, and then tomorrow and Wednesday get into more of the mathematical foundations of the concepts I introduce today. So at various points today I might go quickly through a concept, or even just give you the gestalt, the essence of the idea; on Tuesday and Wednesday I'll fill in some of the foundations, and on Thursday you'll get a chance to do a number of these things for yourselves. This is going to be a bit of an unconventional lecture, in the sense that rather than starting from perceived basic foundations that we all agree upon, I'm going to try to illustrate (someone has a laptop that is causing feedback) what happens when you take epidemic principles and put them into practice: showing you some of those principles, but also some of their fragility, and trying to address some of that over the course of the lecture. This image is a collaboration. I like starting with it in this context.
It's a collaboration between undergraduates at Georgia Tech and the scientists in our group, trying to depict some of the intent of what we tried to do. It comes from a risk map, which I'll tell you more about today, that tries to express the odds that one or more individuals might be infected in groups of different sizes, to do this in real time, and to communicate it, while keeping in mind why we're doing it, which I think you can understand from the image itself. I can already tell that all the fonts will be different and not well spaced, so if things look strange and badly placed, I will release a PDF with the right fonts in the right positions.

Okay, so today, as an introduction, I'm going to talk about modeling interventions, and about an ongoing need: there's a lot of room and opportunity for people who are sharp and quantitative, but who also want to commit to understanding how epidemics work, to get involved. I will in some sense build up to three different applications; I'm not going to jump there immediately. And there's a long list of folks who have supported this work, and I think it's important, especially in this context, to highlight them; I'll try to mention them as I go. I was supposed to be here a while back. We had met in Brazil at some point, I think in January 2019, and I was originally supposed to come in 2020, but as we all know, a lot of these meetings were not possible. We'll sort it out. I teach a quantitative biosciences class; you saw a little of it last week, in terms of some of the modalities of how we teach using lectures and laboratories. And one of our sections is on epidemics. I think you can probably see this, but if not, I'll read it. One of the homework problems took a conventional epidemic model and asked what happens if we have a new airborne-transmitted disease like SARS, in which individuals typically contact 50 people and there's some average incubation duration. It then asks what happens if people start to wear masks, which reduce the spread to zero; however, compliance scales with disease incidence, such that the fraction of individuals wearing masks changes, et cetera, et cetera. So you can see, this is unfortunate; I try not to write catastrophic problem sets. I didn't know those things would happen, but groups like ours were already thinking about many of these same issues: what happens when you have disease outbreaks and you have behavior, and what happens if the behavior is not perfect? So there were ways in which our group and others were already thinking about some of these things, and we even asked the question: how would you design a public health policy around mask wearing? What level of compliance would you need? We're all sitting in this room wearing masks. In France, I think as of today, they're going to stop certain requirements; I don't know if the same thing is happening in Italy. Okay, there's some shaking of heads, so I'm not sure. In France there's an election; maybe you don't have an election coming up. So some of these things are not just driven by public health; they're driven by other issues as well.

Okay, so I'm going to go back in time and try to draw an arc that begins in 2020: some sense of what we understood early on, in terms of two key features of why the world was alarmed, is still alarmed, and remains uncertain about how to address SARS-CoV-2 and COVID-19.
One of the reasons is that there was a sense early on that this was quite transmissible. On the x-axis here is the basic reproduction number, which I'll explain throughout the talk: the average number of new infections caused by a single infectious individual in an otherwise susceptible population. There was a sense it was about two or three or four, something in that range, which means it could reach a lot of people. The other concerning part was the y-axis, which is either the case fatality rate or the infection fatality rate. At the beginning there was more uncertainty about how bad it was, with SARS and MERS and things like Ebola above, which are highly lethal in their effects on individuals. This one had more range; it was clear it wasn't anything like the seasonal coronaviruses, but nonetheless there was some uncertainty. I bring up these two terms, infection fatality rate and case fatality rate, to distinguish the fact that the case fatality rate is what we measure: we count cases, count fatalities, and divide one by the other. The problem is that the denominator (even the numerator, in certain countries) is hard to estimate; certainly the denominator is very hard, because we're not always sure we've identified all the cases. So some of the uncertainty was: perhaps we were only capturing one out of every two cases, or one out of every five, or one out of every ten. Some people speculated we were capturing only one out of every 50 or 100, in which case it wasn't much different from the flu or even the common cold. Obviously that was not true, and I knew very early it was not true. Nonetheless, there's an issue between what we observe and what we'd like to know: we can't know the x-axis directly, and we can't even know the y-axis directly when we don't ascertain all the cases.
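The arithmetic linking the observed case fatality rate to the underlying infection fatality rate can be sketched directly. This is a minimal illustration; the numbers (100 deaths, 2,000 confirmed cases, 50% ascertainment) are my own assumptions, not estimates from the lecture:

```python
def cfr_and_ifr(deaths, confirmed_cases, ascertainment):
    """Case fatality rate (CFR) divides deaths by confirmed cases;
    the infection fatality rate (IFR) divides by estimated total
    infections, where `ascertainment` is the assumed fraction of
    infections that were actually confirmed as cases."""
    cfr = deaths / confirmed_cases
    total_infections = confirmed_cases / ascertainment
    ifr = deaths / total_infections
    return cfr, ifr

# Illustrative: if only half of all infections are detected,
# the IFR is half the observed CFR.
cfr, ifr = cfr_and_ifr(deaths=100, confirmed_cases=2000, ascertainment=0.5)
print(cfr, ifr)  # 0.05 0.025
```

The point is the denominator problem in the lecture: the lower the ascertainment, the more the measured CFR overstates the true lethality.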
And so in the first year there were very few, almost no, pharmaceutical ways to intervene; there were non-pharmaceutical interventions. In the last year there have been vaccinations, and still very few effective pharmaceutical treatments, at least not widely available. Unfortunately this is a problem that's going to persist in future years: how do we face variant waves, how do we test and treat, and how can we make this theory useful? And I'm sorry that things are being cut off, out of my control, by the fonts and the Mac-to-Windows conversion, but hopefully I can communicate what you can't necessarily see on the slides. Okay, so we can make certain kinds of choices about how to model epidemics, and then, again, how to make the models useful. Obviously we could start simple, and for the most part I will start simple, then build in complexity, and I'll reinforce this on Tuesday and Wednesday. We could take a population and divide it into different categories, for example those who are infected, those who are susceptible, and so on, and forget about the ways that people exist in space. Of course we could also change that and address the fact that proximity matters; we're not all interacting with each other like particles in a heat bath, or like bacteria and viruses in a chemostat. And we could think about continuous space, the ways in which waves of infection move across it, and even think about networks and so on. Each of these implies a certain kind of mathematical formalism and methodology: ordinary differential equations, network models, maybe even PDEs to describe interactions and dynamics. It's true that the latter two are more realistic, but from an intuition side you can actually get quite far using these ODEs.
On the other hand, you have to be very cautious when you try to apply them, because many of the assumptions and simplifications you make to build intuition also mean that applying them in practice may be fraught. Okay. So let me introduce the simplest version of one of these model classes. It contains things that I'm sure many, if not almost all, of you have already heard or thought about, but it also contains some nuances that I expect most of you have not. So there's still stuff to learn from this very basic model. Let me first define the setup. We have a population, and we divide it into fractions (you can think of these as densities, rescaled by the total population size): fractions of the population that are susceptible, infectious, or recovered (and/or removed, but we'll think of it as recovered for now). Infectious individuals come into contact with susceptible individuals at some rate beta, there's a transmission event, and infectious individuals recover over a characteristic period T_I. So we can take the same methodologies I talked about last week and write down a coupled set of nonlinear differential equations. Here I've kept the N, but you can rewrite everything in terms of fractions. S is always going down due to infections; infectious individuals increase because of new infections; and over the recovery period we move infectious individuals into the recovered class. It's fairly straightforward in terms of setup, and if we add these equations up, we see that the total is preserved. We're not worrying for the moment about fatalities, or about births that would put new individuals into the susceptible category, so we're ignoring demographics for now.
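As a concrete sketch of these equations, here is the fraction-based SIR model under a simple Euler integration. The parameter values (beta = 0.5 per day, T_I = 4 days, so R naught = 2) and the step size are my own illustrative choices, not the lecture's:

```python
def simulate_sir(beta, T_I, i0=1e-3, dt=0.01, days=300):
    """Euler integration of dS/dt = -beta*S*I, dI/dt = beta*S*I - I/T_I,
    dR/dt = I/T_I, with S, I, R as fractions of the population."""
    S, I, R = 1.0 - i0, i0, 0.0
    for _ in range(int(days / dt)):
        new_infections = beta * S * I * dt
        recoveries = (I / T_I) * dt
        S -= new_infections
        I += new_infections - recoveries
        R += recoveries
    return S, I, R

S_end, I_end, R_end = simulate_sir(beta=0.5, T_I=4.0)  # R0 = beta*T_I = 2
print(round(S_end, 3), round(S_end + I_end + R_end, 3))
```

Note that S + I + R stays equal to one (the conservation the lecture mentions), and the final susceptible fraction is near 0.2 rather than zero: not everyone gets infected, anticipating the final-size discussion below.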
So we're on a short epidemic time scale. If we take such a model and evaluate what happens, we see these characteristic outbreaks in which the number of susceptible people goes down, the infectious go up and then down, and the recovered goes up. And I'll just anticipate where we're going. It looks a little tricky on this y-axis, but for any realistic epidemic using this model framework: who here thinks that everyone eventually gets infected, that there are no susceptible people left? Raise your hand if you think everyone gets infected. Okay, about half. Who thinks not everyone gets infected? Okay, we're about split. In this example it's plotted in a way that makes it look like it asymptotically approaches one. I'll get into this in a bit, but it turns out that, at least on the theoretical side, not everyone is expected to be infected. In fact, these models predict that there is always some fraction of individuals, no matter how high R naught is, who are never infected. So the disease doesn't actually reach everyone, which I'll explain; this is called a final size relationship, although in this diagram it looks like it goes up towards one. But let's think not about the end but about the beginning. And in thinking about the beginning, we should ask the same question we asked in the viral ecology section: just because a virus can infect a host does not mean that it spreads at the population level. Just because we have an emerging infectious disease (diseases are always popping out at some low level from zoonotic reservoirs) doesn't mean it becomes pandemic; obviously very few reach that scale.
At the beginning, we can ask what happens when nearly everyone is susceptible, because we're nearly all immunologically naive, and unfortunately that is more or less what was happening in January, February, March 2020. In doing so, we can rewrite the dI/dt equation, replacing S/N with something approaching one, so that we pull out an I; and if we also pull out a factor of 1/T_I, we get (beta T_I minus one) times 1/T_I, multiplying the small number of infectious individuals. This is nothing other than linearizing the system around the disease-free equilibrium. If we had gone through the full routine of finding the fixed points and linearizing, we would find that our infected subsystem has this characteristic dynamic, which means that whether or not the epidemic takes off depends on whether that quantity is greater than zero, which really means whether beta T_I is greater than one. If the product of the rate of infection times the period of infection is more than one, that implies that an infectious individual generates more than one new infectious individual during their infectious period. If that's true for that one individual, it's true for the next two or three, and for them in turn, and you can see how this leads to exponential takeoff. We identify that red-boxed number as R naught, the number of infections caused by a single infectious individual in an otherwise susceptible population. It's a dimensionless number; the speed of takeoff is not. So we can think of R naught as a threshold criterion and of the speed as akin to the eigenvalue: one is a rate, and one is a criterion that turns out to be dimensionless. Any questions about this? Okay. I assume many of you have seen these concepts before, and I talked about them last week, so I've reinforced them. Any questions before

I move on to a puzzle? Yes; correct, correct. The boxed quantities (I probably should have made the boxes on the right; I can't tell what's happening with the colors) are the same thing: the point is that beta times T_I is R naught, and later in the week we'll go through exactly how to define it in more complex models, beyond this simple case. Okay, so now, many of you have seen this before. Here I've constructed a synthetic example where I look at the number of infectious individuals over time, as a function of days, in a synthetic outbreak generated on the computer. I've set R naught, the product of beta times T_I, to 1.5, and I get this kind of dynamic: it takes off. Then I went to the computer and changed R naught to 2, and then to 2.5, as you will see, and I added some noise, to imagine that the actual number of infectious individuals changed from day to day beyond the expectation from a deterministic model. (In fact these curves don't include as much stochasticity as would be present at such small numbers; that's an aside. I've just added observation noise on top, not process noise.) And you'll notice the curves look identical. I think most people have the idea that raising R naught should make a difference in these curves. Does anyone have an idea what happened here? Why is this possible? I can certainly multiply numbers together; how was I able to get three identical curves with very different values of R naught? These are dramatically different in terms of their impact potential. Someone from the class who's just waking up; there's not enough coffee drunk this morning, I can tell, with people studying for the afternoon exams. How is it possible? How can I get totally different outcomes? Yes: the possibility is that maybe we're starting with a very low number, the dynamics look the same for a while, and then maybe they'll

diverge. Is that a hypothesis? Anyone else have a speculation? Yes, from the side: "Since we expanded around S almost equal to one, maybe this is the condition under which the infection starts, but then why would we expect the same results later?" So: we expanded around the disease-free equilibrium, so perhaps... that seems like a similar hypothesis; maybe I misunderstood. Are you wondering if they all started near the same point, but with these different parameters? No? Then I don't understand your hypothesis; I was trying to repeat it. "We defined it that way, but with S almost equal to one; so why can we say anything about what happens later?" Interesting. So the question is: we've made an assumption, it's only a linearization, and eventually we're in a nonlinear regime, so why do we even get to speculate? As it turns out, these numbers are still so small that the linearization is probably appropriate, as you'll see in a moment; we're still in the linear regime, and I really have constructed different takeoff rates. There actually is a divergence later, but that's not the point. I built this intentionally to show that there is an identifiability problem at the beginning of outbreaks: you can get the same initial growth rate for very different underlying reasons, which only later will be revealed. It's not due to some linearization issue; it's a fundamental identifiability problem. If you just measure the speed, you don't actually know R naught. Why? Well, the speed is built from two numbers, so when I did this I played a little game and changed both at once. You weren't looking carefully a moment ago, but the speed is R naught minus one, divided by T_I. So I'm able to keep the speed the same while changing R naught, because I have two things I can change at once. I'll explain more intuition in a second.
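The little game described above can be made concrete: fix the observed growth rate and vary the infectious period, and each choice implies a different R naught but the identical early exponential curve. A minimal sketch, where the observed rate r = 0.1 per day and the starting prevalence are my own illustrative assumptions:

```python
import math

r = 0.1  # assumed observed exponential growth rate (per day)

# Different (T_I, R0) pairs that produce the same early takeoff,
# since the linearized speed in the SIR model is (R0 - 1) / T_I.
for T_I in (5.0, 10.0, 15.0):
    R0 = 1.0 + r * T_I          # strength inferred for this interval
    speed = (R0 - 1.0) / T_I    # recovers the same observed rate r
    i0, t = 1e-4, 30.0          # early prevalence after t days
    i_t = i0 * math.exp(speed * t)
    print(f"T_I={T_I:>4}, R0={R0:.2f}, speed={speed:.3f}, i(30)={i_t:.5f}")
```

With these choices the three R naught values come out to 1.5, 2.0, and 2.5, echoing the three indistinguishable curves in the lecture: same speed, very different strength.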
Yes, there's a question suggested in the chat: maybe it's because we are assuming the population is fully connected in the simulation. That is certainly the case in these models this morning; they're all well mixed. And it's certainly true that if we looked at structural differences in the population, as in network theory rather than well-mixed theory, we'd get different kinds of outcomes. But what I'm talking about here is a simpler notion. Let me go back for a moment and remind you: this, in some sense, is the speed that I observe, but it's built from the transmissibility and the infectious period, and it's possible to keep the speed constant while changing their product. I'll now try to give you some intuition for that (and I'm advancing the wrong computer; that one's not on). So let me explain these concepts. This is the speed: it's what we were measuring early on, in early 2020, how fast things were going up. And I use the word strength for R naught, which is what theorists and epidemic modelers often want to find out. The reason they want to find it is that strength, even when the observable speed is the same, is related to what happens later on: the total size, the peak, the duration. So the idea is: you have an observable, the speed; from that, if you knew certain things, you could infer the strength; from the strength you could make predictions about the size; and if you knew something about infections and fatalities, you could even say something about severity. So you can see why there's this preoccupation, even though the public is often just seeing the speed. I mean, why should they care about this unobservable sitting underneath? But it obviously matters. In fact, thinking about speed rather than strength is, I think, something that's often not done as well as it could be, or not

focused on, for various reasons. But this gives you the idea that just because you observe something, there could be different underlying reasons, and those reasons have consequences later, as we move outside this linearized regime. Okay: many values of R naught can be compatible with the same observable rate, even if the outbreak sizes are different. So we have this notion: observe the speed, infer the strength, predict the size. This link is not only conceptually interesting but, I think, a bit challenging, and not well understood even in some sectors of the public health world. You can begin to see it if I show you two comparisons. I'm going to use the term generation interval, which I will unpack more formally this week. (I'm doing some things today where I tell you, perhaps unsatisfactorily, a bit about something, because otherwise it won't make sense why I'm investing in the mathematics later in the week.) You can think of the generation interval as the time between when an infector acquires infection and when they infect others; I will unpack more of what that means later. In our case we can think of it simply as the value of T_I for now, because that's the average time it takes for me to infect others. Now suppose I have the same observables. Here this is in terms of incidence rather than prevalence (incidence is the number of new cases weekly; prevalence is the total number of individuals currently infected), and you see these are the same curves. If the generation intervals are shorter, in other words if the average infection period is smaller, then the reproduction number you infer is smaller; whereas if the generation intervals are longer, the reproduction number you infer is larger. You can see this intuitively with this diagram. On your right we have longer generation intervals, which means that over this period we get the same number of final cases, but we only
went through two generations, which means the average number of infections per infectious period must be bigger; whereas on the left we have faster generations, burning through more quickly, and on average the number of infections per infectious period is less, so R naught is smaller. Now, if you're a member of the public, you just see this many people infected by the end, and that seems equally bad; in the long run it turns out these may have different effects, but in the near term these things are taking off at the same speed. Okay. So this implicitly says that if you want to figure out how strong the disease is, you make your observation, and you also need to know something about the generation interval: how long the infectious periods are. So people try to get a sense of: how long are you infected for, and what is the typical period during which you're infecting others? It turns out we're not always sure about that, especially for emerging infectious diseases. This explains certain early headlines (and I admit my headlines will be drawn mostly from the United States, though not entirely). You can understand things like this one, from April 7: COVID-19 may be twice as contagious as previously thought. Because we had been working on generation intervals for quite some time, building methodologies, as soon as I saw this kind of title (and someone is hiding a video panel; it's not me, I'm not touching the mouse), colleagues and I looked into it a little more. To me it didn't sound like there was new data; it sounded like someone had made a different assumption about intervals. If they think the intervals are longer, you immediately get a higher R naught, and that's what the title means: if you make a different assumption about generation intervals, then you infer, from the same data, a very different value of R naught. And that's what they actually did. You see these R naught distributions, and you can see the pattern: for a given growth rate, if you move vertically up, you see the R naught from their different simulations. (Here they're using a serial interval, which is the time between when my symptoms show up and when the symptoms of the person I infect show up; for reasons I'll explain later, the logic is similar.) You see that as these timings go up, so does the inferred R naught, conditioned on the same observable. So it really becomes an issue of people making different assumptions, and these assumptions matter. Okay. So we went in and tried to reconcile this in a J. R. Soc. Interface paper very early on; you'll notice the receipt date, February. It was one of the first things we were doing, trying to understand how we could even estimate R naught. And what we realized is that the reason people differed in their estimates of R naught was largely because they had very different ideas of the generation interval, and even of things like dispersion: whether transmission happened at exactly a certain time or had variability, because all of this makes an impact; the more variability you have, the more early transmission you have, which moves things to the left. There were even some differences in estimates of the growth rate, but for the most part that was the same, while these other assumptions differed. Speed and strength are linked. This is also relevant for new variants, even against the background of old variants: the same concepts apply to asking whether new variants have the same generation intervals as the old ones. It's not just about transmissibility; it's also about how long things are transmitting. Okay, so I'll just point out that when you
incorporate all this variability, even though there were much larger point estimates out there, what we realized is that those came from different assumptions about generation intervals, and we ended up with the conclusion that R naught was about three, plus or minus one to one and a half or so. I will unpack on Wednesday exactly how we used generation intervals in a more formal way; here I'm giving you the output of what happens when you do. Okay. I started with this basic model; I showed you how, in this very simplified setting (mathematically this is not very complicated), there is a takeoff speed; I showed you that speed and strength are linked; and then I showed you, in simulation, that there could be this issue with the final size, so that things that look the same early diverge later. Okay: this is what final size relationships tend to look like. All of these trajectories start near the disease-free equilibrium, where the susceptible fraction is one. This is the I-S plane. A trajectory can't ever go into the upper-right region, because S plus I plus R must be less than or equal to one, so S plus I must be less than or equal to one. What can happen is that the disease takes off, more and more people get infected, depleting susceptibles, and we fall back down. You'll notice that we land on this axis: the disease is done, but the susceptible fraction is not zero. If everyone had been infected, all of these curves, with their respective R naughts, would be attracted to this single point; but they're not. In fact, they're attracted to a whole continuum of fixed points; there's an entire line of fixed points down here. Not all of them are stable: there's a continuous line of stable fixed points and a line of unstable fixed points, separated by a boundary point. But you can see they all move down, and as the shading of this line gets darker, implying R naught gets higher, the final S moves farther and

farther to the left. Okay, this is from a simulation, but it turns out you can do this yourselves. I don't know if anyone wants to try; I feel like today people need to try to do something. I think you're too worried about your exam later today; you didn't sleep enough. Okay, let me write it the other way: S dot is minus beta S I, and I dot is beta S I minus gamma I. You can see that I can take that ratio and get something like that. So why don't you take two minutes; I've given you a step forward, and you can maybe see where I'm going. I suggest that you write the relationship between a small change in I and a small change in S, and you might be able to integrate that and find something. So, do you understand what I'm trying to get you to do? Just take a moment and see if you can figure out: if I started over there, with a given R naught, where would I end up? There's the simulation; could you have predicted where these final points are? Do we understand my question? Okay, we're not understanding what the heck I'm talking about, which is okay. "Professor, do we have to write the small variation of I as a function of a small variation of S?" Who is asking that question from online? Sorry; okay, fine, I looked around the room and didn't see anyone with a mic, so now I see I shouldn't have been looking at people in the room. Can the question be repeated? "Do we have to write a small variation of I as a function of a small variation of S?" Correct. Okay, thank you; correct. So why don't you try to do that; I've given you the hint, it's up there, and just take two minutes or so to write it out. If we do this now, I don't have to do it on Tuesday or Wednesday, which is fine. Remember, we've gotten into this mode because we found that these parameters link strength and speed, and we care about that because we think strength relates to the final size. So if we can actually figure out strength early, we might be
able to warn people earlier and know how to prioritize interventions. Just take two minutes if you want to try it; maybe just another 60 seconds, so you go through some of the process and see that it's actually possible. If you do it, you're going to run into a problem: you need some initial condition. So I'll give you the hint; it's already up here: initially, keep in mind that S is basically 1 and I is basically 0. But you seem to want a final condition, right, because that's what you're trying to figure out. The other hint is there: the thing you want to know at the end is S infinity; and what is I infinity at the end? (It's the beauty of Mac-to-Windows conversion that the problem still doesn't render; I will try to fix my laptop. Tomorrow I'm working on the board anyway, so it won't matter, and I will post the slides without all these funny things going on.) Another 30 seconds, and then I want to keep moving, mindful of the time. Right, so: we know the initial conditions, we're trying to figure out the final value, and you know something else which is going to help anchor you. I know people are still working, but I'm going to move forward a little; is that okay? Okay. So you can imagine you take the ratio here, move dS to the other side, and integrate; you probably got something that looks like this, a relation between I and S with a log. Initially S is 1 and I is 0, so you can determine the integration constant, and if you do that you can work out the last bit (which, oh my goodness, you can't see there): I equals 1 minus S plus (1 over R naught) times log S, and setting I infinity to zero at the end, log of S infinity equals R naught times (S infinity minus 1). So you get a relationship between S infinity and R naught. It is not a closed-form solution, but nonetheless you have this algebraic relationship that you can use, for any value of R naught, to get this final size.
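The transcendental final-size relation, log S-infinity = R naught times (S-infinity minus 1), can be solved numerically. A minimal sketch (the function name, starting value, and tolerances are my own choices):

```python
import math

def final_susceptible_fraction(R0, iters=1000, tol=1e-12):
    """Solve ln(s) = R0*(s - 1), i.e. s = exp(-R0*(1 - s)), by
    fixed-point iteration from a small starting value; for R0 > 1
    this converges to the epidemiologically relevant root."""
    s = 1e-6
    for _ in range(iters):
        s_next = math.exp(-R0 * (1.0 - s))
        if abs(s_next - s) < tol:
            return s_next
        s = s_next
    return s

for R0 in (1.5, 2.0, 2.5):
    print(R0, round(final_susceptible_fraction(R0), 4))
```

For R naught = 2 this gives a final susceptible fraction near 0.20, and higher R naught pushes S-infinity farther left, just as in the shaded continuum of fixed points on the slide; but S-infinity never reaches zero.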
But even before the final size, I want to point out that something interesting happens first, which is, oh, we don't have a wet cloth today... which is that this becomes 0, right? And it becomes 0 not just because I is 0: I starts to go down, and before it can go down its growth must pass through 0. And this is true when S equals γ/β, or when S is 1/R0, which means the susceptible population has been drawn down. And remember, we only have lift-off when R0 is greater than 1; in theory, the higher the strength, the lower the susceptible population gets before we start to see it in case counts. So you can imagine for a moment, and I will unpack this more: here is time; we have something like this for our infections, and then something like this for our susceptibles. This point here, where the infections start to decrease, must coincide with where the susceptible fraction is 1/R0 (I drew the curve lower, but I didn't put scales on, so it's okay), which implies that the outbreak size at that point is 1 − 1/R0: that many people have been infected at the moment when we have herd immunity. This level says that enough people around me have been infected that the disease would no longer necessarily infect 3 others, because maybe two thirds of the people have already been infected. So imagine here's my focal individual, here are 9 people nearby, and imagine that R0 is about 3. Initially you would get 1, 2, and 3 infections on average, and the disease would continue to take off. Now imagine later on, when S is only one third: 1, 2, 3, 4, 5, 6, 7, 8, 9; these have already been infected. So now if I choose 3 random people to infect, then maybe on average I'll only get this one, and here I'll waste... I'll waste the infection, in some sense, anthropomorphizing from the perspective of the virus. Instead of having 3 new infections, I will actually be down to 1, because the transmission is weighted by S/N, or here just the susceptible fraction S.
And now here's the case where S is only 1/R0, or one third; only one third of the folks are susceptible at this point. This is the herd immunity threshold; here is the initial... yes? "Why don't you include the recovered? Because it seems that it's important here; the folks here are either actively infected or they've recovered." Yes, it's quite important. The reason why I don't include it: maybe I see what your point is. The outbreak size means the total number of people who are no longer susceptible; they are now clearly in the recovered class. So if we look at the end, the end result is S(∞), I(∞) is 0, and R(∞) is 1 − S(∞), because S + I + R = 1. So I don't need to explicitly keep track of it, but at the end of the disease no one is infected, here's our solution for S(∞), and everyone else must be recovered. The reason why we get herd immunity is because at that point the susceptible fraction is down to a third; everyone else is either infected or recovered. So, very important: it's actually the basis for it. Now, if I were to draw the recovered fraction, it's going to go up and up and up toward this gap. I want to make another point here. Do you remember when people talked about reaching herd immunity early on? I don't know what it's called in Italian, but herd immunity, you know what I mean. The problem with that is there's something called overshoot: infections continue to happen even after herd immunity is reached, even in this simplified model. Okay? So even that idea that somehow if we got to herd immunity, which itself was problematic and wrong for a million and one reasons, even in the simplified model it has another level of problem, which is that it doesn't mean the end of the disease. It just means that, in theory, each infected person is replaced by exactly one new infected person, and there are still a lot of infected people doing the replacing.
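Both claims, that incidence peaks exactly when S falls to 1/R0, and that infections keep accumulating past that point, can be checked by integrating the SIR equations directly. A minimal sketch using simple Euler stepping and illustrative parameter values of my own choosing:

```python
# Euler integration of the SIR model:  S' = -beta*S*I,  I' = beta*S*I - gamma*I.
# Parameter values here are purely illustrative.
beta, gamma = 0.6, 0.2        # so R0 = beta / gamma = 3
R0 = beta / gamma
dt, steps = 0.01, 20000
S, I = 1.0 - 1e-6, 1e-6       # nearly fully susceptible population

S_at_peak, I_peak = S, I
for _ in range(steps):
    dS = -beta * S * I * dt
    dI = (beta * S * I - gamma * I) * dt
    S, I = S + dS, I + dI
    if I > I_peak:             # record where infections turn over
        I_peak, S_at_peak = I, S

herd_threshold = 1.0 - 1.0 / R0   # fraction infected when I peaks
final_size = 1.0 - S              # fraction infected when it's all over
print(S_at_peak)                  # close to 1/R0 = 1/3: the peak is at herd immunity
print(final_size, ">", herd_threshold)   # overshoot: ~0.94 vs ~0.67
```

The gap between the final size and the herd-immunity threshold is exactly the overshoot he describes.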
So you still get what is called epidemic overshoot, and it can be quite a lot. Okay? Fine. I'm still in the introduction part, but I have three examples that I want to go through. It's about an hour; it'll be fine. I'm getting to that point. I started from the simple notion of speed, showing you that you could infer strength from speed if you knew something about generation intervals, which is why our group and others put a lot of investment into trying to estimate those and to deal with the uncertainty. People care about strength because it relates to size. And then I go back to one of my first slides, which showed you the infection fatality rate. If you had a sense of size and you knew the infection fatality rate, you would make predictions like the following, from the Imperial College London group: deaths per day per 100,000, March through June, predicting about 2 million deaths for the U.S. and 500,000 for Great Britain under various simulations. The reason why you get to these numbers, almost by back-of-the-envelope calculations, is that you get something like 60 to 80% infected in this simplified model, and if you take a disease with an IFR a little bit less than 1%, then you get these kinds of numbers. There are many caveats, and I will bring up some of them: we are not particles in a heat bath, and there are all sorts of reasons why these models seem too simple, but they give a magnitude of concern. Now, years ago, some of these same kinds of numbers were put out for the Ebola virus disease outbreak, because it also had a sufficiently high R0, and the infection fatality rate was much higher, something like 50 to 70%, rather than something like less than 1%.
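The back-of-the-envelope logic here is just three numbers multiplied together: population × attack rate × infection fatality rate. A sketch with illustrative values in the ranges quoted (the specific attack rate, IFR, and the ~66 million Great Britain population are my assumptions):

```python
def projected_deaths(population, attack_rate, ifr):
    """Back-of-the-envelope mortality: population, times the fraction
    ultimately infected, times the infection fatality rate."""
    return population * attack_rate * ifr

# Illustrative values: attack rate toward the top of the 60-80% range,
# IFR a bit under 1%.
print(projected_deaths(330e6, 0.8, 0.008))   # US: on the order of 2 million
print(projected_deaths(66e6, 0.8, 0.009))    # Great Britain: roughly 500,000
```

Swapping in an Ebola-like IFR of 50 to 70% instead of under 1% shows immediately why those earlier projections were so alarming.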
Nonetheless, it didn't spread globally, for all sorts of reasons, and maybe at the end we can talk about them. In part, it was because if you got Ebola, it was a very bad disease with very high fatality rates. I worked with folks at the CDC who traveled to West Africa; there were all these efforts just to isolate people rather than to treat them. There were these Ebola treatment units, but the treatment was very limited. For SARS-CoV-2, many people will have mild or asymptomatic cases. You would think overall that might be good news for the population-level impact, but it turns out that's actually bad news, and I will unpack that a little bit more Tuesday or Wednesday; I'm not sure which day I'll do it. But even conceptually, I want you to think about it. If everyone had an asymptomatic case, or nearly so, we'd be back to common flu or common cold territory: mild cases, and we wouldn't have any fatalities, though many, many people would be sick. If everyone had a severe case and transmission was linked to symptoms, eventually we would notice that a person is coughing; coughing denotes the onset of transmissibility, you could isolate, and even though the outcomes per person would be worse, it might be easier to stop spread. An asymptomatic route means a number of people feel fine and transmit, and they become routes of transmission, even if the endpoints are other people who are symptomatic. I will talk more about this next week, but now you can see why people cared so much about strength estimation: strength implies size, which together with the IFR implies a major problem. I don't know if that's readable; yes, it is. The problem, though, was this notion of a disease that goes up and down. The response, of course, was lockdowns; there were no pharmaceutical interventions available, so it was non-pharmaceutical interventions, and these disease dynamics tend to have the characteristic that they go up and down. This led to some serious confusion.
In the United States, the group best known in both the modeling and non-modeling communities, especially the non-modeling community, was the IHME model out of the University of Washington's Institute for Health Metrics and Evaluation, funded by the Gates Foundation, and they made predictions like this many times over. I've been on the record saying this: they took the data as of a day in April, looked forward, and said that by mid-June, COVID would be over in the US. They did this many times; they said this many times. One of the reasons these predictions were made is in part that if you take these simplified models, this is all they do: they go up and they go down. Now, you could drive them with some external lockdown and say they'll temporarily go down, but when you release that lockdown they go up and come down again. They don't do many other things. It was actually worse than that: they didn't even build these kinds of models into their prediction tools. They went back to something called Farr's Law. They saw that this curve looks pretty symmetric, and so instead of building a model, which is kind of complicated, they used what everyone should not use: erf, the error function. They used the erf error function. Why wouldn't you want to use the error function? Well, because that's insane. They discovered, and I didn't include the rest, that the error function fit this part very well. Well, there's a problem with assuming an error function, or any kind of functional form, when you have no idea what's going to happen in the future: you're just assuming some kind of symmetry, and because of symmetry it must come down symmetrically. And there was someone called Farr, who had Farr's Law, which isn't really a law, which says that you tend to observe in outbreaks, often of smallpox but in other cases too, and this is all in pre-modern times, that these curves were symmetric.
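The complaint about symmetric curve-fitting can be illustrated in a few lines: match a symmetric (Gaussian/erf-style) shape to the rising phase of an epidemic, and it predicts a mirror-image decline regardless of what actually happens. A toy sketch (the curve shapes and time scales are my own invented illustration, not IHME's actual model):

```python
import math

# "True" epidemic: fast rise, much slower decline (an asymmetric shape).
def true_cases(t, peak_day=30.0):
    if t <= peak_day:
        return math.exp(-((t - peak_day) / 8.0) ** 2)   # rise over ~2 weeks
    return math.exp(-((t - peak_day) / 25.0) ** 2)      # long, slow decline

# Farr's-Law-style forecast: take the observed rise and mirror it,
# i.e., assume a symmetric Gaussian/erf-type curve.
def symmetric_forecast(t, peak_day=30.0):
    return math.exp(-((t - peak_day) / 8.0) ** 2)

# The two agree perfectly on all data up to the peak...
assert true_cases(20.0) == symmetric_forecast(20.0)

# ...but 30 days after the peak, the symmetric model says the epidemic is
# essentially over, while the true curve is still substantial.
t = 60.0
print(symmetric_forecast(t))   # ~1e-6: "over by mid-June"
print(true_cases(t))           # still ~24% of peak incidence
```

The fit quality on the ascending limb tells you nothing about the descending one; the symmetry is an assumption, not an inference.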
People invoked Farr's Law in the early stages of the HIV epidemic to say that HIV would be done by the mid-90s; again, just assuming these functional forms. You should also notice, if you've ever done error analysis, these cones of uncertainty. First of all, it's a bit insane that on the day you make your next prediction you have maximum uncertainty, and the uncertainty only goes down with time; the future is never that certain. And it's good that we can maybe laugh a little with some reflection, but this was being used actively in the United States to justify policies, which wasn't very funny, and it was done by professional modelers. It wasn't the only example: a non-professional modeler used a cubic function in Excel to predict a date, something like three weeks out, again in that summer, when it was all going to go away. So buyer beware. What has actually happened is much more complicated. We're not going to get to the bottom of it in these four days. Not possible; not possible because I only have four days, and not possible for me or for any one person. There's still a lot we don't understand, but clearly in these dynamics, and I've chosen a few countries here, we've seen waves, we've seen evolution, we've seen behavior change. All sorts of things have happened, and certainly not only at country scales: within countries, very different outcomes have occurred at different times. So what I'm going to talk about today, now that I've given you this introduction, is how to use models to project the value of responses, especially in the absence of vaccinations. How do we develop principled theory that's also useful? It was an issue at the start, it was an issue with Delta, an issue with Omicron, and unfortunately it's going to continue to be an issue moving forward.
So I told you verbally what my plan was, and I think I'm still on track: I needed to give a brief introduction to some of the main concepts, and what I'm going to do today is try to tell you what theorists, and theory, can actually do to help in an epidemic. So the next sections are going to be much more applied. They will not involve as many cool equations as you often tend to think are required to make you happy in a physics lecture. Nonetheless, it's actually what people do, and in doing it there are some things we clearly don't understand, which then inspires some interesting theory. I will then take my time on Tuesday to build things up, and on Wednesday get into more features, both frailties and fixes, to address some of these simplifications. So on Tuesday and Wednesday, now that you have some of this in mind and are more experienced in the ways of how epidemic modeling and response actually work, I think you'll view those lectures less in a purely mathematical light and more in a relevant public-health way, rather than my front-loading a bunch of math and just telling you it'll eventually be useful. Thursday, there'll be a digital hands-on component in which I'm hoping to do something like last week, but hopefully teach you some new methodologies: how would you simulate stochastic versions of outbreaks, and how would you compare them to the mean-field predictions? You'll see that they work on average, but not in every particular case. And finally, an in-class exam, administered by Jacopo and Matteo. And that, again, is a wonderful feature: I get to teach the class, and I'm not even there for the exam. So if you have complaints, talk to Jacopo and Matteo, not me. They'll be there, not me. But it'll be fine; you'll be fine. I'm not so worried about it. The only other thing I want to mention before I move on is that, for those of you taking the course for a grade, on Thursday... I need to talk to Matteo about how I receive these notebooks.
I would like a notebook from everyone from last week, and whatever you do on that Thursday, however far you get; I just want to see two notebooks, so I can give you a grade. I would much rather grade on your effort in the notebooks than on your performance in the exam. Not to say you shouldn't do your best in the exam, but I'd rather have some other piece of evidence; it allows me to give grades that are less reliant on an hour and a half, and shows that you've put in some time. Okay, so it doesn't have to go all the way to the very end of that prior one; I just want to see notebooks where you've made some effort to tackle some of the problems. Okay, any questions about this plan? Yes. Yes, on Thursday, one of them will be from the week before. For the one on Thursday, you only have an hour and a half; whatever you get done in that hour and a half, I just want to see that you were actually present, some record that you tried something. The other one can be more elaborate; you can work on it on your own time. Don't send me that notebook afterward; I will give you the answers and you can compare later, but I'm not going to grade it after the course. Okay, and also, I don't want to add on; this is the end of the course, and I'm not going to add homework after the end of the course. Done. Is that clear? I'll write it in Slack so it's very clear. Okay, good. So what am I going to do in the rest of the talk? I'm going to tell you about what we did, and "we" is actually a very large number of folks, in three different areas, knowing some of these things about how epidemics spread but also knowing some of the limitations of models. What did we actually do? And I can assure you, we didn't just sit around solving different versions of models, finding nice mathematical answers, and claiming success.
So I'm going to tell you about some of our efforts to explain spread and risk; about how we built the testing program at Georgia Tech, which itself was based on some theoretical ideas; and I'll also get into some of the issues of what it means, and how we interpret it, when pandemic caseloads and deaths go up and then down. Does that mean we're done, or does it in fact mean something entirely different? Okay. So let me go back to this notion I've been talking about: R0, which is supposed to relate to final size. And yet the reality is that we have values of R0, but there's also something called dispersion. If my R0 is 3, I have a transmission rate over this period of time, and because we have a rate times an exponentially distributed period of being infectious, we would imagine rather narrow variation in the number of infections caused by any single infectious individual; we don't have a long tail. And yet there was lots of evidence to the contrary very early on. From the very start, in America, and I'm sure there were such events unfortunately in this region too; Lombardy, as you all know, one of the hardest-hit regions, is not that far away. In the U.S., very early on, there was a case of a choir practice outside Seattle in which 60 or so people showed up. Everyone felt pretty fine, and they even had some preventative measures, since it was around the time that things were spreading: they were supposed to stand apart, and they were physically distanced, but they were singing for a while. Nonetheless, I think nearly everyone in that choir practice (they did it twice) was infected, and then it propagated onward into families; there were multiple fatalities in the group. Later that year, people began to collect these events and notice: yes, this choir practice here had 52 infected; dozens of people infected from a single event.
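The contrast between the narrow offspring distribution of the simple model and these superspreading events can be made concrete by simulation. In the standard SIR assumptions, secondary cases are geometric with mean R0; superspreading is commonly modeled as a negative binomial (Gamma-Poisson) mixture with a small dispersion parameter k. A sketch with an illustrative k of my own choosing:

```python
import math
import random

random.seed(1)
R0, k, n_draws = 3.0, 0.1, 100000

def geometric_offspring():
    # Standard SIR assumption: constant transmission rate over an
    # exponentially distributed infectious period gives geometric
    # offspring with mean R0 -- no long tail.
    p = 1.0 / (1.0 + R0)
    n = 0
    while random.random() > p:
        n += 1
    return n

def overdispersed_offspring():
    # Superspreading: individual infectiousness drawn from a Gamma
    # distribution with dispersion k (small k = fat tail), then Poisson
    # offspring -- a negative binomial mixture with the same mean R0.
    rate = random.gammavariate(k, R0 / k)
    # Knuth's Poisson sampler (fine for the modest rates drawn here).
    L, n, prod = math.exp(-rate), 0, random.random()
    while prod > L:
        n += 1
        prod *= random.random()
    return n

geo_tail = sum(geometric_offspring() >= 20 for _ in range(n_draws)) / n_draws
nb_tail = sum(overdispersed_offspring() >= 20 for _ in range(n_draws)) / n_draws
# Same mean R0 = 3, but choir-practice-sized events (20+ secondary cases)
# are far more common under the overdispersed model.
print(geo_tail, nb_tail)
```

Under the geometric model, 20-plus secondary infections is roughly a 1-in-300 event; with strong overdispersion it becomes routine, which is the point of the next paragraph.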
This is not consistent with the notion of an exponentially distributed infectious period with an R0 of 3; you're never going to get 10- or 20-fold that, at least not this frequently, these large numbers of cases. And part of that is because the way COVID spreads is not necessarily just through individual contact: when you have larger groups, there can potentially be more contacts concentrated together, and there is also the airborne nature of spread. So if you're in a large room, even if you're not physically shaking hands or right next to a person without a mask, there can still be a large number of cases. So we were very worried about spread, and we were very worried about this early on. Here's March 10th: US daily cases, not even on the map here, right? Not even on this graph; you can barely even see what that is. There were almost no documented cases, and yet by that same date, for various reasons, I had been worried that the number of documented cases was a severe underestimate of the actual cases. And I made this graph; I'll let you look at it for a second and then I'll explain it. A friend had asked me whether they should go to an Atlanta United soccer game, football, what you call calcio here in Italy; in Atlanta, a lot of people like soccer. In fact, this weekend there were, I think, 50,000 people at a soccer game, which I know sounds crazy, that that many people show up in America to see soccer on a Sunday, but they do nowadays. And I was worried, because with so few people aware, there were no precautions being taken, and it seemed like there weren't that many cases nationwide. In fact, back-of-the-envelope calculations suggested maybe there were somewhere between 2,000 and 20,000 circulating cases in the entire United States of 330 million people.
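Given a prevalence estimate like that, the chance that at least one person at an event of size n is infected follows from the homogeneous-mixing assumption: 1 − (1 − p)^n. A minimal sketch (the event sizes and the single case-count value are illustrative):

```python
def risk_one_or_more(circulating_cases, population, n):
    """Probability that at least one of n attendees is infected,
    assuming cases are sprinkled uniformly at random in the population."""
    p = circulating_cases / population          # per-person prevalence
    return 1.0 - (1.0 - p) ** n

US_POPULATION = 330e6
# Even with only 20,000 circulating cases nationwide (about 1 in 16,500),
# a stadium-scale crowd almost certainly contains an infected person.
for n in (10, 100, 1000, 70000):                # dinner party ... stadium
    print(n, round(risk_one_or_more(20000, US_POPULATION, n), 4))
```

The isoclines described below are level sets of this same expression: small n at high prevalence gives the same risk as large n at low prevalence.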
It is very hard to conceptualize what that means, so what I tried to do was imagine the probability that one or more individuals may be infected in groups of different sizes: a dinner party, a wedding, a concert, a hockey game, a basketball tournament that was supposed to happen later that week, with 100,000 people coming in to watch, 80,000 at the game and a lot of other folks around. And then, on the y-axis, how many people are infected. And then make a terrible kind of assumption, but it was the only assumption one could make at that time: imagine that I sprinkle those 2,000 to 20,000 cases randomly amongst people. So I make a homogeneity assumption and then just ask: what's the chance that no one is sick? If the probability of being sick is p, then 1 − p is not sick, (1 − p) to the large number is no one sick, and one minus that is one or more sick. Everyone follow that little logic as I wrote equations in the air? No? Only me. If I have p, the probability of being infected, something like the current prevalence, then 1 − p is not infected, (1 − p)^n is no one infected in a group of n individuals, and one minus that is one or more infected. Is everyone following? Simple, right? This is not a very complicated equation, and you end up getting these isoclines. The isoclines reflect the fact that a small number of people at high prevalence is the same, in terms of the chance that one or more are infected, as a large number of people at very low prevalence. And the point I tried to make is that even then, when there seemed to be almost no cases in the United States, there were still events at which there was a pretty good chance that someone in the crowd, once you got into stadium-level crowds, would be infected, and even at smaller gatherings, which meant that the chance that one case could become many was quite high. Now, I don't have that many followers on Twitter, and yet you'll notice, at least in a Twitter sense, this was viewed by hundreds of thousands of people and started to be put on blogs, and all sorts of things started to happen, with respect both to the attention people focused on closing events and, obviously, to the number of cases, which went up very rapidly, very quickly; it says a year ago, sorry, it's two years ago now, into thousands, tens of thousands per day, and even that is a severe underestimate of the actual number of cases.
In fact, if one looks in the U.S., at least, at how many cases were documented in the first year, there were about 16 million reported cases, and the CDC estimates about 80 million actual cases, meaning one case was ascertained for every five actual cases. In Europe, and in Italy particularly, I think the number is probably closer to one in three, but we can have that debate some other moment. So there are a lot of unascertained cases. And so the question was: there was this early evidence from the choir practice, and we were already worried that large events could lead to more spread, because even with this low per-person probability, if there is a gathering, then you can see all sorts of ways that this can go badly. We've stopped following the notion of one individual having a limited number of contacts with some probability of transmission; rather, you can get large events. So how do we translate it? Again: what is the chance that one or more individuals in a group is infected? In a small group, maybe no one; in a larger group, yes. And obviously the risk that someone is infected goes up very rapidly, and we can estimate it from cases and ascertainment bias. So we see the cases, but we infer the ascertainment bias, and there are some ways we do that; I don't have that on these slides... well, actually, I do have it in part of the slides, so let me wait a moment and then I'll explain another mechanism.
The problem was that after this initial tweet, a lot of people were interested: what should we do about group size? What is a safe group? And of course it's very hard, because this equation doesn't have a number, a point that says one level is safe and another unsafe; there's a continuum of risk. But obviously the point was that for many people, certain kinds of events might not have been necessary, and those events tend to have larger n (I'll get back to that later), and maybe we should cut those out very rapidly, because those have the chance to move the disease not just within a household but between households, between groups, and so on. Myself and a number of others started to work on ways to communicate this, with, let's say, a graph that would be more appealing to the public than these isoclines on a log-log plot, which got some attention but wasn't quite reaching a number of folks. Also, it was very bespoke: I had to make it every day, do a new estimation, and it started to take up a lot of time. So I ended up working with Clio Andris, an assistant professor at Georgia Tech who does GIS and other kinds of regional and urban planning work, and we decided we would turn this into a map in which the color intensity gives the probability that one or more individuals may be infected, depending on the event size, which the user can change; as you change it, the colors change. And you have some estimates of ascertainment bias, and we included a few, reflecting the uncertainty. It turns out that this filled a gap in the United States, and actually, I think, globally this gap still remains: most people have no idea what the risk is when they're going out, the chance given current case counts, which are often reported as, you know, new cases per 100,000; people don't even know what to do with that number. Or there are color codes, which, again, people ignore. In the United States we have gone through various color-coded schemes for various security purposes, and eventually people ignore all the colors. Maybe here in Italy you pay attention to it; I doubt
it. Most people tend to ignore those color-coded things, so we wanted to give information that was more relevant. We launched this on July 7th, worried about superspreading, worried about these risk calculations, and the first day we had about half a million people show up, because it hit some news sources, some media, and spread. Thankfully that calmed down, because it was insane, and also a lot to try to communicate. Eventually we ended up having more than 16 million visitors to the site and 60 million risk estimates, and it has also, as I'll show in a moment, spawned similar sites across the world. This changes with time, and that was the other reason for having these cases at sub-state-level distributions: you can see you have intensity here, a little bit up here in Seattle, and over the course of that first summer it goes down and moves regionally, from the northeast down to the south, intensifying throughout the south and southeast. So it gave people a real-time view of where the disease was spreading, and also of the way that it linked to behavior. And just to give you a sense that we kept this going: as I'll show you in a moment, we have continued it even now. And, I'm sorry, there's some crazy conversion; I'll just explain what's on the right, for people who might have joined late: my laptop is not working, I've loaded this on a Microsoft machine, and it changes things. Why are large gatherings problematic? Increased likelihood that someone has COVID-19, for the reason above; more potential interactions (you can imagine, if everyone's shaking hands, we have n-squared interactions, and in certain cultures that is what's done both at arrival and departure); but also because we have the potential for airborne transmission and spread. It's also hard to contact-trace: who were you next to at that big event? And I think you've all probably heard of, what was it, the Lazio game in Milan? Or no, Atalanta, the Atalanta game, very early on, which took place in Milan, in the stadium; but it was not just the event itself, it was actually the gathering of people at bars and parties and so on that led to the acceleration of spread throughout Italy.
On the right, if it showed up correctly: the problem is that we're never quite sure how many actual cases there are at a given moment, because of this ascertainment bias. So what people have done, and you asked me before about the recovered class, is that instead of using viral testing to measure infected individuals, you can use serological testing to see how many people have recovered, and then compare the serological tests against the accumulation of viral tests and see what the difference is. This is how people began to understand that we were missing a lot of cases; we knew it, but we didn't know how bad the problem was. There are other ways too. One of the ways, unfortunately, was to look at hospitalizations and fatalities now and ask how many cases three weeks ago would be necessary to explain the incoming hospitalizations and fatalities, assuming something like a little bit less than a one percent infection fatality rate. You often found there should have been many more cases to explain the current mortality rates, and that's another way to get at ascertainment bias. Okay, so I think I've explained that. I'll just mention a few other things here: we continue to do this, and we've expanded into Europe. For quite some time we've had partnerships with folks, including folks in Italy, who continue this; if you're interested, it's "event COVID-19", you can go to the website and look at the actual risk of an event like this one, and obviously there's a green pass here and it's the same people, but nonetheless, for a random selection of about 50 people, you can look up the risk that one or more would be infected. Also, some folks in Mexico have taken the same idea and done this, Jorge Lasco Hernandez, I believe. The other thing I want to mention is that we've tried to measure impact. Yes, a lot of people have come to this site; it has taken a lot of work. We
now have something like a whole product development team: people who do our marketing and make ads, people who collect the data, people who do the back end. This is very much unlike, you know... you're here at the spring college on the physics of complex systems, and yet I'm telling you about what happens when you take an idea and push it out into the real world: you end up facing lots of other challenges. The thing we were worried about from the outset is: okay, we've had many people come to the site, but what impact does it have? We have an intended impact, but as you know, you can have 300 million people watch a Kanye video, right? Kanye, you know, has a cereal; Kim Kardashian drinks a coffee; it doesn't matter, you can have hundreds of millions of people watch that. You can also have hundreds of millions of people listen to misinformation. So just because you're reaching a lot of people doesn't mean you're reaching them for good or bad reasons; you're just reaching a lot of people. So we tried to see what the impact was, and working with a team of Duke cognitive neuroscientists, we started to ask questions of people who visited the site, which itself took a long time; we had to get approval from our universities to ask a little question on the website. And what we learned is that after viewing the website, people tended not to change their minds about their willingness to have small events. There are things that people want to do, and if you tell them there's some risk, they're not going to change. But maybe there are some things that are not as essential, and what we found is that for large events, people, after viewing the site, tended to be less willing to attend, which was our intention. Is there a question in the back? It's okay if there is. And then we did something else; I'll just complete this idea and then take the question. We asked people to do an imagination exercise, which they had shown in the lab
could have effects: imagine that you're in a coffee shop; imagine a grocery store, a movie theater, a graduation. And we asked them to guess what we estimated the risk to be in their area, and then we told them the answer: you guessed this, and this is what we think it is; now what would you do? And what we found is that people tended to think there was a higher risk at small events than there actually was, and a much lower risk at large events than there actually was. And when we asked them what they would do, we found that the people who tended to underestimate risk actually became less willing to attend events. So here we have this intervention, a nudge, that is potentially reaching millions of people; we've now done this with tens of thousands of people, and we're still trying to figure out its impact, but you can see that awareness of this disease can actually change behavior, and that's something that I'll begin talking about. But yes, Jacopo, your question. "So you measured how much viewing the statistics changed the willingness to participate in an event, but do you have a measure of whether they were willing to participate in the event beforehand? So perhaps I'm not changing my opinion, but I was already in doubt and had probably already decided not to go." Right. So we don't have... let's go back here: "after viewing this, are you more or less willing?" So we don't know; for example, there are some people who are basically already overestimating, let me go back, overestimating risk. So let's say they're in that category; that's why we broke it down into pieces. Maybe this will help: this is the evidence I have for the overestimators. They're not willing to change; they were already thinking that COVID was maybe worse, in terms of risk, than we think it is, and when we tell them, you might have expected they would go out, but no, they're not willing to do that. So we are asking them a relative question, because it's "more or less willing"; we don't know what their baseline is.
it's not an absolute measure, it's a relative measure. We also aren't following them; there's no way to actually see whether they follow through on this. That's obviously beyond the scope. "I understand it is a relative measure. What I was asking is: there is a relative measure of what they want to do, but then there is a measure of what they actually did, right? So you don't have that?" Yeah, that we don't have. For any behavioral interaction like that we'd need a long-term longitudinal study, and these are visitors to a website, so it's beyond scope. The interesting thing here, though, is that as far as we're aware, this is the first time in this space anyone has actually demonstrated an intentional change we could make with an intervention, because most of the time these things are just released. I assume most of you have gone to some kind of COVID dashboard at some point in the last two years, right? But for the most part, the people providing that information, including the state, have little to no idea what it actually does. Okay, so I'm now going to move to something else, which will hopefully change gears a little; I have two more of these vignettes. This first part has been a large-scale effort by a team to explain and communicate risk in an effective way, while also trying to measure the impacts. Now I want to switch to the other thing I spent most of my time on for roughly the first year and a half of the pandemic, or at least its middle section: how can we use testing as a form of mitigation? I assume all of you have taken many, maybe too many, tests (not the college variety), and I assume most of you did nasal swabs. Is that right? Raise your hand if you've done a nasal swab many times. Yes. Has anyone ever done saliva-based testing? Only a few of you. This is saliva-based PCR testing. Okay, so the difference: one is in the nose; for the other, you spit into a
little tube, and you use a dropper to transfer a droplet, or you shake it. In the spring and summer of 2020 (the situation may have been different here), certain universities in America basically went fully online. Georgia Tech is a public university in a southern state, and it was clear that even though we shut down in March and went online for the rest of the spring term, the push was going to be for us to be open by August. And we were open by August, in the sense that students were back living in dorms on campus. Classes were mostly online, but I was still in my office, and many folks were coming in. We wanted to do as much as we could, given that people were there, to reduce spread. My colleague Greg Gibson decided that if we had people there, we had to try to reduce spread, or at least get a sense of conditions, and that it would be much better to use a saliva-based system for repeat testing: spit into a cup, take a droplet, put it into a small reagent-filled buffer tube, shake it, give it to someone, and get your result the next day, for free, rather than an administered nasal swab. He and I chatted; he was initially thinking of this as a way to get a sense of conditions on campus, but after some discussions and a little bit of modeling I convinced him, and then we convinced the university as a whole, to make a much bigger investment: to actually use testing as a form of mitigation, to use many tests not as an indicator of how badly we're doing but to improve outcomes. So I want to explain the very simple idea behind this proposal. Here I have another compartmental model, an SEIR model. At this point in the course, given just these arrows, I think you could probably write down the rest of the equations, but maybe I'll do it anyway, just to make sure the concepts are clear. So let me just explain where I'm embedding myself, and then
why I'm focusing on this. So you can imagine we have our standard compartments: I stands for infectious, E for exposed but not yet infectious. When people become infected they move into the exposed category; after some period they transition from exposed to infectious, and then they recover. (The first transition? It's mu times E, yes, of course, sorry.) So we have infection, exposed to infectious, infectious to recovered. But there's another way to move people out of the infectious class, and now think of it as removed: not removed by fatality, but removed from circulation so they don't infect others. We could test at a certain rate, using a test with a certain sensitivity, that is, a probability of returning a positive result when there is in fact a positive case, and that would also remove individuals from the I category. I'm not even going to worry about where they go, into isolation and then maybe into the R class; I could put that in if I wanted to, but I'd note in parentheses that they haven't actually recovered, they're just no longer in the infectious class. If you focus here, what you can see is a competition between two processes pulling people out of the I category: recovery at a rate gamma, and effective identification at a rate omega times s_e. Of course, in reality there are delays, and I'll get into that in a moment. The testing rate times the sensitivity gives me my effective rate of identifying infectious individuals. So I have two rates, and we've already talked about the probability that one event happens before the other: omega s_e / (omega s_e + gamma) is the probability that I find you by testing before you recover, and in doing so I reduce your infectiousness by that probability. If I don't, you just have your normal level of transmission. If this probability were approaching one, then basically I'm always finding you before any effective transmission has happened.
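This rate competition can be put into a few lines of code. This is my own sketch, not code from the lecture; the names (omega for the testing rate, s_e for sensitivity, gamma for the recovery rate) follow the notation above, and the numbers are purely illustrative.

```python
# Sketch of the competition between recovery (rate gamma) and
# detection-by-testing (rate omega * s_e). Illustrative, not lecture code.

def detection_probability(omega, s_e, gamma):
    """P(identified by testing before recovering) = omega*s_e / (omega*s_e + gamma)."""
    eff = omega * s_e            # effective identification rate
    return eff / (eff + gamma)

def effective_R(R0, omega, s_e, gamma):
    """Transmission reduced by the probability of being found first,
    the simple approximation used in the argument above."""
    return R0 * (1.0 - detection_probability(omega, s_e, gamma))

# Infectious period of about a week => gamma ~ 1/7 per day.
# Testing twice a week (omega = 2/7 per day) with 90% sensitivity:
gamma = 1.0 / 7.0
p = detection_probability(omega=2.0 / 7.0, s_e=0.9, gamma=gamma)
# p = (2/7 * 0.9) / (2/7 * 0.9 + 1/7) = 1.8/2.8, about 0.64: most
# infectious individuals are found before they recover.
```

Running the same calculation with testing once every 15 weeks (omega of roughly 0.01 per day) gives a detection probability of only a few percent, matching the point below that term-scale testing tells you something about prevalence but barely dents transmission.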
So if you take that idea and ask what happens when the testing frequency gets close to gamma: remember, gamma corresponds to the infectious period, which is on the order of about a week. I showed you that before; it can be less, but a week is not a bad overall figure. If the infectious period is about a week and I test on the order of once a week, I start to compete with the infectious period. If I test you once a term, once every 15 weeks, it doesn't matter: I'll learn something about prevalence, but I won't make a difference in reducing transmission. If you take this kind of model and simulate it with different values of R0 and different testing frequencies, in a community of 15,000, with 75% sensitivity on the left and 90% on the right, you can see these gradations, contours running from 10,000 infected down to hundreds. And you can see that if we can get the testing interval below about a week, and also drive things down using other measures like mask wearing, which we fought our own state to make mandatory, it makes a difference. You have to keep in mind that people have been working in very different atmospheres with respect to the relationship between political leaders, public health, and implementation. In our state we were told we had to go back to campus, but masks were not required and you couldn't force it; we said that was impossible. We won that battle one year; the next year we lost it, but this year we won it again. So we had masks, classes were online, and our hope was that if we actually tested at scale, with entry testing to reduce the number of people showing up already infected, we could significantly reduce cases. The program became, basically: people show up and get one of these little biohazard kits with a little spit container, and it takes one to two minutes maximum to go in and out. Spit into the container, take the dropper (even I, a theorist, could do it in a minute or two), put the sample into the little buffer container, write your initials, put it in the bag, wipe
stuff down, and get results back on a website. The idea here was not, as many people framed it, that tests are essentially something bad that just tell you how poor your response was, but to use more and more tests to intervene and identify people who were asymptomatic, especially in this age group, rather than waiting for them to get symptoms. It turns out this worked. Early in the return we had a spike: up to about four percent of our asymptomatic testing program returned positive. But we were actually able to figure out where those cases were; we knew who those people were and where they lived in campus housing, so we were able to intensify our testing and keep things below one percent for the entire term. I'll just point out that a similar university in our state, with similar background rates, had peaks twice as high and stayed at four percent for most of the term. I also want to point out that heterogeneity was expected. All of this modeling work is homogeneous, and this is a limit of models: we built these models to justify the choice of large-scale testing, but we didn't just sit back, assume homogeneity, and stop looking. We knew there was going to be heterogeneity; we just didn't know where it was going to be. I'll be honest here: other universities built very complicated models of heterogeneity, and we decided we would simply look, because we didn't think we were going to know in advance which coffee shop or which fraternity house was going to have cases. We would observe and use our observables for feedback. This continued into spring 2021 and really became a bridge for us to drive a vaccination campaign. So this large-scale testing became a form of mitigation, not a reporting index of the overall state of the disease. Okay, I have time to explain a little more about what we did with feedback, and I have one
other concept. Now I have an SEIR model with an explicit Iso class for isolated individuals, whom we presume to be recovering. You can take these dynamics, run them on networks, and ask what happens when you do certain kinds of tests and use certain kinds of testing strategies. What do you do with the information? Do you just isolate that person and that's it, or do you actually do contact tracing, figure out who that person was connected to, and test them preferentially? In practice at Georgia Tech there was some contact tracing, but we also used something called localized testing. If someone tested positive, unfortunately their roommate often tested positive three days later; dorm-mates tested positive more frequently, and people in the hall a little more frequently. So we already knew that you didn't have to call someone and ask who they interacted with: there was proximity, they were close to other people, and although we didn't know the identities of individuals, we could redirect testing to that area. Conceptually, then, imagine we have a regular network (I know it's very hard to see, and I apologize for the contrast; just envision that the lines are connected here), and here the links are sort of randomly distributed throughout the network (again, I'm sorry, there's some issue with the conversion). Random testing is very easy: you test a fixed number of people, not everyone but a fraction, every 24 hours, and they get isolated if positive. So if you have an infected node, you're not going to preferentially test its neighbors, because you're testing at random; you randomly allocate your resources. You can also use localized testing: on the day someone gets isolated, you then preferentially test the neighbors, and you keep doing that. In other words, if you have this red node, you're going to be more likely to test
those blue nodes, because they're close to the red node. This is sort of like the dorm, the roommate, the hall in the other example: you preferentially test nearby. And finally, contact tracing: you have to go talk to that person about who they interacted with, which is a little slower, and you end up with these longer links; some are local, but you also look farther out on the network. So those are the three different classes of strategy. It turns out that when you run these stochastic models, you have to worry about delay and about specificity, in a different sense of the word: how good you are at covering the contacts, the probability of testing the contacts after an index case tests positive. A random strategy doesn't really know anything about the contacts, so you have very little chance of catching one of them. Localized testing and contact tracing can potentially be very good at this, but contact tracing has a delay built in, because it takes time in practice. And here we're assuming a very fast test turnaround; if you assume a slower turnaround, you're in big trouble. So speed matters, not just of testing but of turnaround. I want to share two results that give you a sense of what actually happens with these models. One, just as I showed before: speed matters. Once the number of tests per person per week gets to be on the order of one, a large outbreak (this is with no intervention, in a very small network of 200, sort of a dorm-sized setting) can be reduced by almost 75 percent, just by testing at scale; if you don't test that often, you approach where you were without intervention. The number one issue we've tried to emphasize, and in some sense what rapid tests, as people are using them, are supposed to deliver, is faster results. They're not necessarily as sensitive as we would like, but they're very specific, so a positive means something, but the
negative: you're never quite sure what a negative means on a rapid test. Nonetheless, speed matters; you get an overall advantage even though the test is less sensitive. That's one answer. The other answer, interesting but subtle, is that if you move from regular networks to random networks, contact tracing rapidly becomes the best strategy, but when you have localized, very strongly ordered networks, two things happen. First, the outbreak size goes down. Does everyone see that, the blue curve? On a regular network, even without testing, it drops quite a lot. You might say we have no idea why that would be; there's not even any testing here. The blue curve is the outbreak size without an intervention. The point is that individuals don't interact with everyone; they interact with a subset. Imagine this person interacts with these two. In a regular network I'm more likely to close triangles, so if this is the first person infected and this the second, then by the time the second person is infected there are few people left for them to infect, because they share neighbors. So the epidemic self-limits in many ways: network structure, or spatial structure, can self-limit spread. On top of that, localized testing becomes just as good; it's really on random networks, once you have larger events going outside family groups and outside localized structure, that contact tracing begins to be important. But the key is speed; the secondary issue is the way you approach it. We've sort of underused the speed-first approach, being so worried about doing things perfectly, when in fact speed is probably the thing driving outcomes the most, and you can get further improvements when there's local information. Okay, I'm almost done. There's been a lot today; I'm trying to give you the grand overview of what theory and theorists can do in practice.
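The random-versus-localized comparison can be mocked up in a toy stochastic simulation. This is a deliberately minimal sketch of my own (a ring lattice standing in for a regular network, made-up rates, no test delays), not the model behind the figures:

```python
import random

def ring_lattice(n, k):
    """Regular network: node i is linked to its k nearest neighbors on each side."""
    return {i: [(i + d) % n for d in range(-k, k + 1) if d != 0] for i in range(n)}

def outbreak_size(n=200, k=2, beta=0.1, gamma=1.0 / 7.0, tests_per_day=20,
                  sensitivity=0.75, localized=False, days=150, seed=0):
    """Stochastic SIR on a network with daily testing; positives are isolated.
    localized=True spends tests on neighbors of known positives first."""
    rng = random.Random(seed)
    g = ring_lattice(n, k)
    state = {i: "S" for i in g}
    state[rng.randrange(n)] = "I"
    isolated, flagged = set(), set()   # flagged = neighbors of past positives
    for _ in range(days):
        # transmission along edges from free (non-isolated) infectious nodes
        newly = [j for i in g if state[i] == "I" and i not in isolated
                 for j in g[i] if state[j] == "S" and rng.random() < beta]
        for j in newly:
            state[j] = "I"
        # recovery
        for i in g:
            if state[i] == "I" and rng.random() < gamma:
                state[i] = "R"
        # testing: localized strategy prioritizes flagged neighbors
        others = [i for i in g if i not in isolated and i not in flagged]
        rng.shuffle(others)
        priority = [i for i in flagged if i not in isolated] if localized else []
        for i in (priority + others)[:tests_per_day]:
            if state[i] == "I" and rng.random() < sensitivity:
                isolated.add(i)
                flagged.update(g[i])
    return sum(1 for s in state.values() if s != "S")  # ever-infected count
```

Comparing `outbreak_size(localized=False)` against `outbreak_size(localized=True)` over many seeds is the kind of comparison behind the random-versus-localized result above; contact tracing would additionally test contacts beyond immediate network neighbors, with a delay.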
So let me wrap up a little with behavior. If you recall this slide: I explained these final-size relationships early on, and the notion of the herd immunity threshold at which susceptibles are depleted; for this reason there's overshoot, and you get this final size. You measure a speed, you infer a strength, and strength implies size: a stronger disease implies a larger outbreak. This is limited. In the example I just showed you, when you have spatial structure you don't reach everyone, and you can't use the well-mixed assumption. In fact there can be a number of differences, particularly due to dispersion: rank the infected cases by the expected proportion of transmission they account for. Here it's homogeneous; in these other cases, a small proportion of infectious cases is responsible for a large proportion of transmission. If you've seen curves like this before, they're like Lorenz curves for wealth inequality, where a small number of individuals hold a very large fraction of the wealth; here, a small number of infected individuals account for a large proportion of the transmission, and different diseases have different characteristic levels of this. We already knew that SARS (this is SARS-1) tended to have this feature: a small number of individuals infecting a large number of others. When you have dispersion, first of all, it says you need to chop that tail somehow, reduce superspreading events, which explains why I was so concerned about risk and events. But it also implies, for reasons I'll explain in more detail on Wednesday, that the herd immunity threshold can be reduced. This table, in this paper, basically said: with homogeneous structure, we have herd immunity levels of 50 percent, 60 percent, two-thirds; it's a simple mathematical relationship, nothing other than 1 - 1/R0. And by the way, this is a Science paper, so now you're understanding all these Science
papers. Now, if you have age structure, people tend to mix assortatively, young with young and old with old, and you already have a change, because we've broken homogeneous mixing. And if activity differs, if how active we are in terms of transmission varies, then you can see these effects lower the values, lowering the level at which the disease might saturate. I'll give the intuition on Wednesday for why heterogeneity tends to slow things down; I'll start it today but build it up completely then. The problem was that some folks recognized this very early, fit the data, compared it to what they would have expected from a classic model, and reached a conclusion in early 2020. I don't know if you can see the words: HIT, HIT, HIT, herd immunity thresholds of 6 percent, 11 percent, and 21 percent. Because if you take the heterogeneous models too seriously, you can reach the conclusion that the disease will simply be done: you sort of run out of these local nodes, and it's over. Just as the simple mathematical model has problems (it predicts too large a number), these heterogeneous herd immunity models predict too small a number, and they misinterpret what is happening. They're mathematically elegant, as were the final-size relations, but they neglect the fact that there can be reintroductions: the disease can come back into a community, starting somewhere outside the local community. And there can also be behavior change. I won't have quite enough time to say everything about this, so let me just start it. All I'm going to say here is: think about a distribution of your susceptibility to infection (I'll talk more about this on Wednesday). We have a mean susceptibility, and we think people who are more susceptible are more likely to be infected, and those with a lower value less so. Here people are self-isolating; here they're going out and about.
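The idea that infection preferentially draws from the susceptible tail can be checked numerically. Here is a small sketch of my own, assuming (purely for illustration) an exponentially distributed susceptibility with mean 1; infection picks individuals in proportion to their susceptibility x, so the mean susceptibility of the next person infected is E[x^2]/E[x]:

```python
import random

def size_biased_mean(xs):
    """Mean susceptibility of the next person to be infected.
    Infection selects individuals with probability proportional to x,
    so the weighted mean is E[x^2] / E[x]."""
    m1 = sum(xs) / len(xs)
    m2 = sum(x * x for x in xs) / len(xs)
    return m2 / m1

rng = random.Random(0)
xs = [rng.expovariate(1.0) for _ in range(100_000)]  # mean susceptibility ~ 1
# For an exponential distribution, E[x^2]/E[x] = 2: the next person infected
# is on average about twice as susceptible as the average susceptible,
# exactly the kind of factor described in the lecture.
```

For a distribution with no spread, the ratio collapses back to the mean itself, which recovers the homogeneous case; the broader the distribution, the more strongly the tail is pulled out first.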
The entire notion of why heterogeneity can actually change the outcomes is explained in this diagram; mathematically I'll have to unpack it more on Wednesday. Here's the mean susceptibility; everyone with me? Now imagine you draw someone from the population with this distribution and ask: what is the average susceptibility of the next person to be infected, keeping in mind that susceptibility scales your likelihood of being infected? I take this distribution, multiply by something that goes up with susceptibility, and I get something that looks like this, where the average susceptibility of the next person to be infected is twice the average of the currently susceptible. What does that mean? It means the infection process is pulling out the tail of the susceptibility distribution: this tail gets pulled first. If you're confused by that, it's okay; I can't explain everything today, and I promise to explain this again in depth on Wednesday. Some of you probably caught it just now, some of you did not; that's okay. The result is that over time, the infection process doesn't just deplete susceptibles, it actually changes the structure of the susceptibility distribution: people who are more susceptible get infected earlier, moving the distribution down (because fewer people remain) but also moving it to the left. And it turns out this has consequences, including reducing the final size. Why? Because all the people who tend to be more susceptible have been infected earlier, and effectively our R0 has gone down, because the people who are left, if they don't change their behavior, are less vulnerable. Okay, I have to skip some stuff; I can't even tell you about all this cool material on behavior. I'll pick it up a different day. All right, I put too many things into this talk; I'll continue tomorrow and Wednesday. So I just want to go over some of the things I talked about today. I'm running out
of time. First, we were worried about, and recognized, that because of asymptomatic spread there was this issue of large spreading events, so we built visualizations that connect current case counts to risk. We recognized that we didn't have to be passive about how COVID might spread, but could instead use testing as a form of mitigation. And this other one I'll just have to pick up, maybe even tomorrow before I launch in; I got too close to the edge here with part three: thinking about the ways these models miss key features. Heterogeneity: we're not all the same, and this is a homogeneous model. And behavior: we don't just keep doing the same thing throughout a pandemic. Even before society and public health leaders imposed lockdowns, we started to change this value of beta on our own. So the last thought I'll leave you with today, which I'll build up tomorrow and Wednesday, is that we need non-conventional theory beyond these strict SEIR models, which, if I remove that piece, only describe the etiology of the disease, what the disease does to a person, rather than incorporating interventions, the way awareness of the disease changes things, and heterogeneity. The other thing I think is key is that we need better public-facing instruments. Some of you might never want to do this, but if you actually want to get into public health or the viral-epidemic field, then at some point, if you've built these theories, you'd like to know whether they're relevant. If you're in physics and you want to know whether what you've done is relevant, you do an experiment, or look at an experiment, or talk to an experimentalist. But in public health we have unfolding epidemics, and we want to see whether we can actually make a difference: not to show the world that our prediction of catastrophe was right, but to use models to avert bad outcomes, to actually make a difference. Okay, one last thought: today's talk, I would say, was trying to
inspire you somewhat to think about ways you could work: not just up there in the upper left like Bohr, doing pure basic research for its own sake (and since you're at the Spring College, you might not want to turn out like Edison either, doing pure applied research without caring about the rationale), but something like Pasteur, doing use-inspired basic research: the idea that there's a purpose right in front of us that inspires our basic research, and this becomes a virtuous cycle of trying to see whether what we're doing can make a difference in the world. And with that, I think I'm at time. You have a coffee break, with your next lecture to come. I'll see you tomorrow morning; I'll pick up in part three, hopefully with my own laptop. So thanks. Okay, thanks.