Good evening and welcome to the impact of information and misinformation on mental health and well-being during the COVID-19 pandemic, a webinar produced by the Australian National University in partnership with the Black Dog Institute. My name is Bruce Christensen. I'm a clinical psychologist and the head of professional programs in the Research School of Psychology and associate dean of culture and well-being in the College of Health and Medicine at the Australian National University. I am pleased to be your host and moderator for tonight's webinar. I want to begin tonight by acknowledging the traditional custodians of the lands we are meeting on, the many and diverse First Nations of Australia. I pay my respects to their elders past, present and emerging and extend that respect to all Aboriginal and Torres Strait Islander peoples present tonight. The impact of COVID-19 and the associated restrictions designed to thwart its spread has been deep and truly global. We have found ourselves isolated from friends and colleagues, worried about the safety of ourselves and others, adapting to the stress of working and living differently, coping with job loss and financial hardship, and grieving the loss of loved ones. The broader consequences have been profound. The United Nations projects that the number of people living below the poverty line could increase by 580 million and that hundreds of thousands of additional children will be at increased risk of experiencing domestic violence and early death. Researchers have estimated that the direct cost of COVID-19 to the USA alone will be over $153 billion. Moreover, these impacts will be disproportionate among our most at-risk groups, including those living in poverty, the elderly, persons with disabilities, children and youth, and Indigenous peoples. Against this backdrop, it is not surprising that the pandemic has also drastically affected the mental health and well-being of our communities.
The Black Dog Institute estimates that one-third of Australians are experiencing high levels of worry and anxiety. Other common responses include panic, depression, anger, confusion, and uncertainty. Such experiences challenge our coping skills and underscore the need for new tools, expert opinion, and trustworthy information. Today, information is bountiful and it travels quickly, but the curatorial role of journalists and mainstream media has decidedly shifted as they struggle against constrained budgets and the lightning speed at which competing information propagates, leaving individuals on their own to sift through the mountain of information and misinformation. Sylvie Briand, Director of Infectious Hazards Management at the World Health Organization, points out that epidemics are accompanied by a tsunami of information, as well as misinformation and rumors. The problem of rapidly spreading misinformation has prompted the World Health Organization to suggest that we're not just fighting an epidemic, we're fighting an infodemic. So how can one understand information, its credibility, and its utility? And how can one use information properly to bolster mental health and well-being during crises such as this? To answer these important questions, we've assembled a group of experts to talk about how to separate fact from fiction, how information travels online, the impact of information and strategies to manage it, and the challenge of science communication in the context of COVID-19. We will first hear from our panelists and then turn it over to you, our listening audience, for questions. Your questions can be sent to us using the question and answer box on Zoom, and I would also like to make our audience aware that tonight's webinar is being recorded and the questions that you ask will be saved. Tonight, our first speaker is Dr. Erin Newman.
She has recently co-edited a book on the psychology of fake news, contributed to policy discourse on misinformation, and worked in legal contexts to address biases in jurors' decision making. Her talk tonight is entitled Separating Fact from Fiction. Erin? Thank you very much for the introduction. It's a real pleasure to be here tonight contributing to the webinar. Today I'm going to be talking about how we tend to separate fact from fiction in the brain. Then I'm going to talk a little bit about how certain misinformation correction strategies may be ineffective and can actually end up increasing the familiarity of misinformation. I'll end with some more positive tips about engagement and how we can increase the chances that the truth sticks and that myths fade. As you'll hear in a moment from Dr. Robert Ackland, there are several ways in which we may end up encountering information online, whether that information is actually true or false. And what has become particularly clear in the context of the coronavirus pandemic is that there has really been a wave of misinformation: supposed cures, preventions, misinformation about how the coronavirus came about in the first place. So, like the tiny green human here on the screen, our job is to work out which information is true and which is false and keep it in nice, tidy piles. And when you have a look, there are actually some external efforts to help us keep track of what information is true and what information is false. For instance, various social media platforms like Facebook have tried to handle misinformation by removing it from their sites; they try to take blatantly false information out of circulation. But a much more common approach that we've seen over the years in the mainstream media is to lift the profile of misinformation, making it the star of the show, in order to debunk it for the general public.
And I'm sure many of you are familiar with this type of article, right? They're usually called myth-versus-fact articles. They start off with some sort of bold claim, like this one here: myth one, some outrageous claim. And then following that, there will be a very detailed and thorough explanation of why the science and logic behind it is wrong. Now, when you look at the research on misinformation and how exposure to these myth-versus-fact articles actually works, what we've learned is that if I have you read one of those articles and then test you immediately afterwards, you're going to be pretty good at sorting fact from fiction. You'll get those things into the right piles. So from an immediate retrieval perspective, those articles and interventions can work quite well. But as soon as you walk away or move on to the next thing, what you see is that the information can become a little bit blurry. That's what cognitive psychologists call a delayed retrieval context. What we know for sure is that over time memory for information fades, certainly in an online environment where you're clicking between different types of information. And when you take a close look at people's memory performance in this delayed retrieval context, what you see is that people are less able to effectively sort information into fact and fiction. Effectively, what you see is that the effects of a correction tend to wear off. There are several reasons for this. One is that the human memory system is not like a live feed of moment-to-moment information being uploaded to the cloud that is our brain. In fact, what we know about human memory is that it's much more constructive: we tend to lose bits and we invite other pieces in. So that's one reason why the delayed retrieval context can hurt our ability to sort fact from fiction.
I think one of the very important pieces of the puzzle here, when you consider an online information environment, is human behavior in that environment and how we tend to engage with information in it. What we know from research in this area is that when people engage with articles like this, where there's a bold headline and then more detailed text that follows, people spend time on the headline but tend to spend less time scrolling through the more detailed, nuanced text. That kind of behavior, one, affects what information people end up learning and taking away from a particular article they engage with, and two, affects the extent to which we hold on to those more detailed explanations that follow, and whether they end up sticking in our memory systems. So when you think about sorting fact and fiction, ideally our brains would work a little bit like this: we'd have one section dedicated to true claims and another to misleading claims. In fact, our stacks of fact and fiction often look a little bit more like this. And there are a number of reasons why it can get a little blurred. I'll give you a couple of examples. One thing we know from research on truth assessment and human memory is that while you might remember a certain coronavirus myth or claim that you encountered, you may not necessarily remember the context in which you encountered it. It's quite possible, then, that you remember a coronavirus claim, let's say a false one, but you actually first encountered it in a debunking context, where you were learning that the claim was wrong. So you learned that it was misinformation, but later on all you can remember is the claim itself.
So sometimes we don't remember the context in which we encounter information. Relatedly, what we know about human memory is that when I give you a piece of information, like a claim, and I tag it as true or false, sometimes that tag and the information don't stay connected in the brain. Sometimes we lose the tag, and sometimes we don't actually recall and apply the tag when we're making a decision. So given this dilemma, how do our brains actually deal with truth assessment? When we're trying to sort fact from fiction, we engage in what you might call a kind of cognitive detective task, and research shows that people do lots of very sensible things. For example, we know that when people are trying to assess truth, we draw on our own general knowledge to establish whether the information we've encountered matches other things we know. We also consider the coherence and consistency of a claim, to assess whether it has logical coherence: does it actually make sense? But with limited cognitive resources, we also know that people tend to draw on cognitive shortcuts when they assess whether things are right or wrong. When you look at the research on truth assessment, one of the most robust findings about the kinds of cognitive shortcuts people use is that they tend to rely on familiarity to assess the truth of a claim. To put it very simply, if a claim feels familiar, people tend to conclude that it's true. And this bias to believe stuff we've encountered before is unfortunately not limited to when we're guessing. You can imagine a scenario where people encounter an obscure claim, and the more they hear it, the more it feels familiar and the more willing they are to believe it.
But it's also the case that familiarity influences people's judgments even when they know better, and this is what is called knowledge neglect. The familiarity bias influences people even when they have general knowledge they could have drawn on to assess the truth of the claim. And the bad news is that it's not just a small subset of us who fall victim to the influence of familiarity in assessing truth: we know of no reliable individual differences that make you more immune to the effects of familiarity on your judgments. So, to put it very simply, the scientists who study truth really worry about the repetition of misinformation, particularly given the powerful role of familiarity and the tendency for people to lose the tags marking whether information is factual or a myth. While reading these myth-versus-fact style articles seems quite appealing, the risk I really want to highlight here is that over time the details can fade from memory, and you're left with a myth that simply feels familiar. So what would be the take-home points from a cognitive psychologist who studies truth? The first, I think, is that trusting memory to track what is true and what is false is probably a little bit risky: increasing exposure to myths and misinformation might increase the extent to which false information actually sticks. And I want to call your attention to this graph. It plots misinformation about coronavirus as a search term, which has really spiked in recent months. What that really tells us is that people are keen consumers and want to know more about misleading claims. This makes sense, and in many respects it's very sensible to want to understand the false information that's floating around. But what I really want to highlight today is the risk there is in continuing to expose yourself to misinformation in these myth-versus-fact sorts of contexts.
So when it comes to engagement, what should we be doing? Being engaged with facts, particularly in the context of a pandemic, is very important. As you've been hearing from a number of other sources, we should be investing our efforts in sourcing reliable information from channels that we really trust, where the approach is to elevate true information so that it's easy to remember. And as an individual, you luckily have your own cognitive tool set that will help you retain true accounts of the information you encounter. What we know from human memory research is that you can connect facts to other things you know, which increases the chances that you remember them, especially when you link them to personal experiences. Connecting facts to an image, or elaborating on facts by telling a story, can also increase the chances that you hold on to information in the long term. And I just want to finish by giving you a couple of websites that provide reliable, evidence-based updates for a consumer of science and health information in the pandemic. Thank you. Thanks very much, Erin. Our next speaker this evening is Dr. Robert Ackland, who is in the School of Sociology at the Australian National University, where he conducts research and teaching in the area of online social networks. He also runs a company that provides data analytics and consulting services in this area. The title of Robert's talk this evening is How Information Travels Online. Robert, over to you. Okay, hello everyone. I'm very happy to be here tonight to talk to you about how information travels online and why it matters, in particular in the context of the COVID-19 pandemic. So, COVID-19 conspiracies are rampant.
Last weekend there was an anti-lockdown protest in Melbourne which featured placards relating to particular conspiracies, and there's also a conspiracy video, the "Plandemic" video, that's been doing the rounds recently. So those are examples. Conspiracies are an example of misinformation, and misinformation can promote anxiety, mistrust, discord, racism and other harms. So there's concern that social media is contributing to conspiracy theories and misinformation. In particular, one of the terms you may have heard is echo chambers. This is where people self-select on social media, for example by following particular people on Twitter or friending particular people on Facebook, such that they only tend to encounter like-minded views. Their views are not being challenged, because the people they connect to on social media all share the same views. Another concern relating to social media is that of filter bubbles. This is where social media algorithms, used by social media companies to, for example, curate the information or news stories that you get to view, tend to reinforce a user's preferences for information and news. So if I'm someone who likes cat images, Twitter figures that out, or Reddit figures that out, and next thing you know I'm seeing lots of posts about cats. That's a reinforcement of my preferences. Now, concern about the web and its social and political impacts dates right back to the early days of the web in the early 1990s. In the 1990s there was concern about the potential for the web to lead to a fragmentation of online populations. This was because the web was seen as having a narrowcasting ability: with broadcast TV or radio, everyone hears the same thing, while with the web you can select particular news sources. You could follow particular bloggers.
You could go only to the particular websites that you're interested in. So this led to a concern about fragmentation. And there's a well-known image here, the "Divided They Blog" image. This is an image of political bloggers in the lead-up to the 2004 US presidential election. Each of these dots is a blogger, and the lines between them are the hyperlinks they use to connect to one another. The concern was that, as you can see here, there was very strong clustering on the basis of political identity: conservatives are the red nodes and liberals are the blue nodes. In 2016, concerns about filter bubbles and echo chambers became even more prevalent. Of course, the 2016 election was interesting and infamous in lots of different ways, but one of the things that came out of it was an elevation of the concern regarding echo chambers and filter bubbles. I have an image here of a Twitter map of people who have been identified as Clinton supporters and Trump supporters, and this network shows how they connect to one another on Twitter. Once again the color coding indicates the political camp they come from, and what we can see here is a "divided they tweet" phenomenon: there's very distinct clustering, in particular amongst the conservatives. And it was claimed that journalists misread the US presidential election because they were in their own filter bubble. They were in parts of social media that were left or liberal leaning, and so they didn't see the Trump election coming. Some researchers, however, think that echo chambers do not exist. It's been found in research that social media use diversifies users' news diets rather than narrowing them. Other research has shown that most people visit politically centrist news sites, that is, news sites in the political mainstream. Yes, there is a long tail of extremist political websites on the web, but these get very few visitors.
And it's also been found that the people who do visit such sites will also visit mainstream political sites, so in this sense they seem to be getting a balanced diet of information. However, echo chambers might increase the likelihood that a person believes and transmits misinformation. First, in an echo chamber it's more likely that you're going to be connected to other people who appear to believe the misinformation, and this can lend it social credibility and reinforcement. Secondly, in an echo chamber you're more likely to be exposed to the misinformation, because the people you know are all forwarding it and sharing it, and this additional exposure can lead to an increased likelihood of belief. Thirdly, you're less likely to be exposed to countering information or countering views. Increased normative pressure to spread the misinformation can also occur in an echo chamber: your friends might want to know why you haven't shared that video that everyone's been talking about. So what can we do? Well, I guess it's simple for me to say break out of your echo chamber, but that's easier said than done, because these are our friends, people that we know and like and connect with. I'm not suggesting that you just start randomly making connections to people on social media in order to get out of your echo chamber. However, there are other things we can do, and I guess I'm echoing some of the suggestions that Erin made. Firstly, it's important to seek out independent, peer-reviewed information in order to provide a countering view to information you might be encountering on social media. I also find it interesting, as an expert, to see that the role of experts has in some ways been elevated during the pandemic: we're interested to see what the talking heads have to say on the news about the pandemic.
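[Editor's note: the echo-chamber dynamics described above, where a claim spreads once enough of your contacts are sharing it and never crosses the cluster boundary, can be illustrated with a toy "threshold contagion" simulation. This is a hypothetical sketch for illustration only, not a model from the talk; the network, the threshold of two exposures, and all node numbers are invented.]

```python
# Toy "threshold contagion" on a clustered network: a node starts sharing
# a claim once at least `threshold` of its neighbours are sharing it,
# echoing the idea that repeated exposure within a cluster drives belief.
# All structure and numbers here are invented for illustration.

def spread(adj, seeds, threshold=2, rounds=20):
    sharing = set(seeds)
    for _ in range(rounds):
        new = {node for node, nbrs in adj.items()
               if node not in sharing
               and len(nbrs & sharing) >= threshold}
        if not new:
            break
        sharing |= new
    return sharing

# Two tight communities (an "echo chamber" structure) joined by a
# single bridge edge between nodes 4 and 5.
clique_a = {0, 1, 2, 3, 4}
clique_b = {5, 6, 7, 8, 9}
adj = {n: (clique_a - {n}) for n in clique_a}
adj.update({n: (clique_b - {n}) for n in clique_b})
adj[4] = adj[4] | {5}
adj[5] = adj[5] | {4}

result = spread(adj, seeds={0, 1})
print(sorted(result))       # [0, 1, 2, 3, 4] -- the claim saturates community A
print(clique_b & result)    # set() -- it never reaches community B
```

Two things the toy run shows: inside the seeded cluster everyone ends up sharing the claim, because each member sees it from several contacts at once; and the other community never encounters it at all, which is the flip side Robert mentions, the absence of countering views crossing the boundary.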
As much as I may find Norm Swan on the ABC slightly annoying, he's an expert, and I'm more likely to believe what he says about the pandemic than some conspiracy video. And finally, I want to mention the issue of the decline of quality journalism in Australia and elsewhere. This is to do with the fact that the news media business model has been severely compromised by the Internet, and this is leading to fewer resources for quality journalism. The 2019 ACCC Digital Platforms Inquiry looked into this, and I think it is also relevant to understanding how we can find quality information relating to the pandemic. So thank you for your time. Thank you very much, Robert. Our next speaker is Jan Orman, a general practitioner and GP services consultant at the Black Dog Institute. She has worked for the last 10 years teaching GPs and allied health professionals about various aspects of the management of mood disorders. Her presentation this evening is entitled Swimming Pools and Doorknobs. Dr Orman, thanks for being with us. It's really nice to be here amongst this company. In a moment I'll explain why my presentation is entitled Swimming Pools and Doorknobs, but before I start I'd first like to beg your forgiveness if anything goes wrong technically: I'm in a rural location with variable internet speed, and if you can't see me, I hope that you'll at least be able to hear me through all of my presentation. What I want to do here is talk to you about something that happened in the past. I guess it's prehistory, because it's before the internet, and I think you'll find that, as I talk about it, you will feel some resonance with what's going on at the moment. If you're old enough, you'll know the answer to this riddle: what do swimming pools and doorknobs have in common?
What swimming pools and doorknobs have in common is that once upon a time many of us thought that we could catch AIDS from swimming pools and doorknobs and toilet seats and a whole range of other places that were suddenly dangerous as a result of this life-threatening disorder. There were some changes in the way we behaved, particularly as clinicians but also as people in the general population. First and foremost, we wore gloves when we were doing anything in relation to collecting blood or examining patients. But there was a whole lot of negative impact from the misinformation as well, including vilification of homosexual people, and even simple things like not going to homosexual hairdressers or to restaurants with openly gay waiters. So there was a lot of crazy stuff that went on in the period before we knew exactly what was going on with AIDS. That period started in the late 70s. The first person to die of AIDS in America was a guy called Ken Horne, who died in 1981 from Kaposi's sarcoma, a complication of AIDS. And it wasn't until 1983 that we started to know something more about how AIDS was happening and just how it was moving from person to person. In that void of information, what did we do? We made things up. Here are some examples of the things that people firmly believed caused AIDS during that period. Some people thought it was homosexual sex per se, or amyl nitrite, which was very popular at that time. Hemophilia, or even just blood products on their own, got the blame for causing AIDS. Sexual promiscuity, always something good to blame. Malnutrition, contaminated food, poor sanitation and hygiene. And later, when we knew a little more about it, even the antiretroviral drugs got the blame for causing AIDS. In late 1983, the light started to shine on what was going on, when the human immunodeficiency virus was identified.
The blood test came in 1984, and by the end of 1984 enough evidence had accumulated for the scientific community to be firmly of the belief that the human immunodeficiency virus was the cause of AIDS. However, some people responded to all of this with conspiracy theories. There's something attractive about conspiracy theories: they simplify issues, make them easier for people to consume, and feed people's paranoia. Was it the CIA that started AIDS? Was it the government that was causing the virus to be spread? Or foreign governments trying to bring down our government? The other reaction was denial: denying that it could possibly be true, that it could possibly be a virus that caused such a nasty disease. One of the biggest denialists was a man called Thabo Mbeki. You might have heard of him; he was the president of South Africa between 1999 and 2008, and a very famous denialist of the HIV-AIDS connection. In fact, the policies of Mbeki's government in South Africa during that period are thought to have led to 330,000 excess deaths, because there was no HIV testing and no antiretroviral drugs available for people to treat their HIV, as government policy held that HIV did not cause AIDS. So you can see that there are strong reverberations and resonances with some of the things that are going on at the moment. We are seeing false claims, conspiracy theories and denialism in social media and in the media generally. So remember, I'm a clinician, not an academic, so I can only talk to you about the things that I and my colleagues are seeing in clinical practice. We're seeing community impacts: resistance to prevention measures, for example, by the denialists. And, as you're seeing, we're seeing competing agendas driving public policy. What's more important, people's lives or economic survival? That makes it very difficult.
In clinical practice, my colleagues and I are also seeing some positive impacts amongst individuals. Surprisingly, we're seeing people with chronic mental health problems actually cope quite well, and we're speculating to some extent about why that could be. Is it because those people with chronic problems are now joined by everybody else in their distress? Or could it be that those people have had lots of help in the past and so have good coping skills to call upon when things go wrong in their lives and in the world? We're seeing some improvement in communication within families. We're seeing people building positive relationships in families, and more thought and talk about self-care, which is only a good thing. We're seeing excellent collegiate communication between health professionals, and we're seeing a development, made necessary by working from home, of workplace flexibility. But we're also seeing negative impacts of misinformation. We're seeing people come to see us repeatedly for symptoms that they're afraid might be coronavirus. We're also seeing people not seeking medical attention for other illnesses, for fear that they might catch coronavirus if they go to the doctor or the emergency department. We're seeing people who are too anxious to leave the house, who won't let their kids go to school or childcare, or even outside, because they're so afraid of the virus as a result of some of the information they've received. We're seeing relationships falling apart, often as a result of anxious demands made on family members, and the difficulty, too, of isolation. We're seeing people who are feeling very low and experiencing worsening depression as a result of their concerns about the new world order and how perhaps things are not going to get back to the way they once were. We're also seeing people who've been personally attacked, perhaps as a result of their Asian appearance, and are very distressed about the racism that's going on in the community.
And we're seeing people unwilling to change their behaviour at all. These are the denialists who, in fact, are abusing general practitioners for the precautions they're taking; who knows what's happening when those people are out on the streets. So there is a good deal of negative impact arising directly from the misinformation and the false beliefs that people have about the coronavirus. When I have a patient who's suffering from, dare I call it, misinformation syndrome in whatever form, the first thing we need to do is talk at great length about the true facts of the situation. Once we've done that, we develop a plan together to help sort the facts from the fiction. That includes limiting exposure to information about the pandemic: not having the news up 24 hours a day on your screen, and even limiting it to just watching the news at night, for example, or just reading the newspaper in the morning. We agree, generally, that they must not trust social media as a source of reliable information. That's not to say they can't read social media, but perhaps they should check anything they see on social media against more reliable sources of information. Those sources include some of the mainstream media; as Robert pointed out, the quality of journalism is not consistent across the media. Erin mentioned the federal and state-based government websites. I'd like to recommend the WHO website as a terrific source of reliable information. And, just for fun, there's the Johns Hopkins University COVID map; I've put the URL there. Johns Hopkins University is collating all the statistics about COVID-19 and presenting them in map form, which is easily accessible and understandable, and consulting it from time to time is a useful way to get your facts straight. Basically, what we don't know makes us anxious. We do know that.
To fill a knowledge void and manage anxiety, people are inclined to turn to unreliable sources of information, particularly if they don't know how to get to reliable sources. There's no doubt about it that misinformation causes harm, and we need to work as clinicians, and probably as parents too, to prevent that harm. Thank you. Thanks very much, Jan. Finally this evening, we will hear from Associate Professor Katie Glass, a mathematical modeler who has been advising government on pandemic preparedness for over 15 years and is working with national teams on modeling COVID-19. Katie's talk this evening is entitled Mathematical Modeling of COVID-19: The Challenges of Science Communication. Katie, over to you. Thanks, Bruce. So I'm going to take a slightly different tack with this last talk and talk a little bit about what I do as a mathematical modeler and how we can try to communicate this to a general audience. Often when people think about mathematical modelers, they think we have a kind of crystal ball and can predict everything that's going to happen, so I can tell you what you're going to have for breakfast tomorrow. The reality is that when we use models in this context, particularly when we're looking at a new disease that we don't have a lot of information about, there are a lot of uncertainties. So rather than predict exactly what we think is going to happen, we have to look at lots of different scenarios and at the different things that could happen under different circumstances. To give an example, I'm going to talk through a model that we developed in the early stages of the coronavirus outbreak with the group of people I've pictured here. The aim of this work was to look at what would happen in Australia if we had an uncontrolled outbreak of coronavirus.
So we're looking at people presenting to healthcare with either mild or severe disease, then either going to GPs or emergency, and then having severely ill people admitted to wards or to the intensive care unit. And the concern we had in the very early stages, when we started using estimates based on what was happening in China, was that there could be a bottleneck at emergency and we'd struggle to have hospitals admit people where needed, and also a real concern that there wouldn't be enough beds in the intensive care unit. Now, two or three months on from doing that work, it seems pretty blindingly obvious: we've seen this happen all around the world. But we first did this work when there were just outbreaks in China and we hadn't seen it happen elsewhere. And of course, when we started to see similar things happening in Italy and then other countries, we realised that there really was a concern if the disease spread unchecked through our population. And so Australia put a lot of measures in place to reduce transmission in the country and to reduce the numbers of people coming in, and so we helped to avoid an epidemic in the country that would have really overwhelmed our health system. If I think about how we communicate this, one of the challenges is that if we look at that model now, people say, well, but that didn't happen; your model must be wrong because our ICUs aren't overwhelmed. And the reality is that the reason they're not overwhelmed is that we took a lot of action based on what we saw from models and from all the information around the world about what the coronavirus outbreak looked like. And the reality, and this is a truth in a lot of public health, is that our aim is often prevention. We're trying to prevent disease happening in the first place. When we're successful there, when prevention works, then nothing happens. So the main aim of our job is to have nothing happen and not see things.
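The scenario logic Katie describes can be illustrated with a minimal SIR-style sketch. To be clear, this is not the ANU team's actual model: the population size, transmission and recovery rates, and the function `run_sir` below are purely illustrative placeholders, chosen only to show why modelers compare scenarios rather than make single predictions, and how the same model gives very different peak demand under different transmission settings.

```python
# Minimal discrete-time SIR sketch comparing two transmission scenarios.
# All parameters are illustrative placeholders, not fitted estimates.

def run_sir(beta, gamma=0.1, days=365, n=25_000_000, i0=100):
    """Simulate SIR in daily steps; return the peak number infectious."""
    s, i, r = n - i0, i0, 0
    peak = i
    for _ in range(days):
        new_inf = beta * s * i / n   # new infections today
        new_rec = gamma * i          # recoveries today
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        peak = max(peak, i)
    return peak

if __name__ == "__main__":
    # beta is an assumed daily transmission rate, not a measured one
    unchecked = run_sir(beta=0.25)   # roughly R0 = 2.5: spreads widely
    controlled = run_sir(beta=0.09)  # roughly R0 = 0.9: outbreak fades
    print(f"peak infectious, unchecked:  {unchecked:,.0f}")
    print(f"peak infectious, controlled: {controlled:,.0f}")
```

Under the uncontrolled scenario the peak is millions of simultaneous infections; under the controlled one the outbreak never grows, which is exactly the "nothing happened, so the model must be wrong" communication problem described above.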
Now, because I listened to Erin's talk before, I'm going to get rid of that thought bubble so that you don't remember it, and instead you remember the thing at the bottom, which is that the aim of public health is prevention. And when we do that well, we don't see disease. What I want to finish with today is to talk you through some work that we're doing at the moment, which is looking at how disease spreads in our community, why testing is really important for people with symptoms, and what public health does when we find someone with active coronavirus. So the person on my screen at the moment is someone who's just been infected with coronavirus, and we're going to track the disease for them day by day over time. So what we know is that initially, when someone's infected, they likely won't be showing symptoms and they probably won't be infectious. And there'll be a number of days like that. And then, unfortunately, what seems to often happen is that people start being infectious before they show symptoms. This is one of the reasons that coronavirus is quite hard to control: it's hard to stop transmission when you can't see any symptoms. After they start becoming infectious, they will develop symptoms. And then we hope that very shortly after that they'll present to healthcare. When they present to healthcare, they will be isolated. If they're very sick, they'll be encouraged to go to hospital. If they're just mildly ill, they'll be encouraged to isolate at home, and a test will be taken. And if they test positive for coronavirus, they'll then be encouraged to isolate for a period of time until we know they won't transmit to anyone else. But what happens in public health as soon as we have that positive test is that people in the public health units will go back and look at all the people that that person had contact with while they were infectious.
They'll look at the time they had symptoms and also the period of time before they started showing symptoms, maybe a day before. Now, the number of people they'll have had contact with really differs according to who it is. If it was me, I think I'd probably have three or maybe four infectious contacts over the last week, because I've been social distancing. I'm still at home. I'm not having a lot of contact. But if this person happened to be someone working in an emergency service, or in some field where they're having a lot of contact with people, there might be more people in that contact pool. Now, when the health agencies first contact these people, they're not going to know straight away whether they're infected or not. They can't tell just by talking to someone, and they may not yet be showing symptoms. They'll follow them up for a number of days. So in this case, we're assuming that there are two people infected, which is maybe about what would happen on average. And so we'll have a number of days where they don't show symptoms, and then they might start showing symptoms. I've also included in this example someone who doesn't ever show symptoms. We do seem to see that occasionally with coronavirus. It's not very common, but some people do seem to be completely asymptomatic. Now, the other important thing that happens when we have a positive test is that all these people who are contacted are asked to quarantine for a period of time. And if we get to them fast enough, then that quarantine time can include all the period that they're infectious, and so by that stage we stop any further transmission from them. And so this initial person, who could have caused a big outbreak, has only infected two people, and we've managed to stop the transmission chain at that point.
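The transmission tree and contact tracing described here can be sketched as a toy branching process. This is a hypothetical illustration, not the team's actual model: the reproduction number and tracing probability are assumed values, and `outbreak_size` is a helper invented for this example. Each case infects a random number of contacts; traced contacts are quarantined before they become infectious, cutting their branch of the tree.

```python
import random

# Toy branching-process sketch of a transmission tree with contact
# tracing. R (mean secondary cases) and trace_prob are assumed values.

def outbreak_size(r_mean, trace_prob, rng, max_cases=10_000):
    """Total cases from one seed; traced contacts are quarantined in time."""
    cases = 1
    queue = [False]              # one seed case, not quarantined
    while queue and cases < max_cases:
        if queue.pop():          # quarantined before becoming infectious
            continue
        # secondary infections: binomial(10, r_mean/10), mean r_mean
        offspring = sum(1 for _ in range(10) if rng.random() < r_mean / 10)
        for _ in range(offspring):
            cases += 1
            queue.append(rng.random() < trace_prob)
    return cases

if __name__ == "__main__":
    rng = random.Random(1)
    no_tracing = [outbreak_size(2.5, 0.0, rng) for _ in range(200)]
    tracing = [outbreak_size(2.5, 0.8, rng) for _ in range(200)]
    print("mean outbreak size, no tracing:", sum(no_tracing) / 200)
    print("mean outbreak size, 80% traced:", sum(tracing) / 200)
```

With these assumed numbers, tracing 80% of contacts pushes the effective reproduction number below one (2.5 × 0.2 = 0.5), so chains die out after a handful of cases, mirroring the stopped chain in Katie's example, while untraced chains routinely grow into large outbreaks.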
Now, if we hadn't done that, if they hadn't presented to health care when they started feeling symptoms, and we hadn't done the contact tracing, then we could well get more transmission from each of these people, and then the chain would go on and on and on and we could get quite a big outbreak. So, to finish: this is work that I've been doing with a whole group of people pictured here. I want to particularly acknowledge Kamalini Lokuge, who's the leftmost face there. She's been doing these sorts of transmission trees for diseases for many, many years; the first thing she does when she deals with an Ebola outbreak is think about the transmission tree and how to control it. And so the messaging from the work we're doing, and the thing that I want to be your take-home message from today, is that we're really encouraging people with symptoms to present to health care. And we know most people are not going to actually have coronavirus. So we saw Josh Frydenberg coughing a lot the other day, and he was tested very quickly and found not to have coronavirus. That will be what happens with most people. But if we want to detect the final hidden cases in the population, we do need to find the few people out there who may still have coronavirus. So rather than only getting tested if you think you do have coronavirus, we're encouraging people to get tested to confirm that they don't have coronavirus. And once they do present, health authorities will then trace the whole tree to try and find anyone that that person has infected. And then, by stopping transmission there, we're more confident about releasing social distancing measures and giving us more freedom to go out and about in the world. Thanks very much, Katie. And thank you to all of our speakers for these enlightening and stimulating talks.
In the face of COVID-19, information certainly helps us to understand the complexities around us, prepare for action and predict the future. However, misinformation has the power to leave us unready, complacent, anxious, and demoralized. And I think tonight's speakers have provided a deeper understanding of information and how it achieves these paradoxical outcomes. They have uncovered the design properties of human cognition and why we have trouble separating fact from fiction, how information naturally organizes itself on the internet into echo chambers and filter bubbles that can be overly self-reinforcing, how uncertainty can lead to heightened anxiety and false conclusions, and the dynamic nature of mathematical models and the challenges associated with communicating their implications. Once again, thank you for this stimulating conversation. I would like now to take questions from the listening audience. If anyone has questions for our panelists, please send them to us using the Q&A box in Zoom. We've had some questions coming in, and I'd like to start with one from Rachel. Erin, I would like to direct this question to you. Rachel asks: do you think that the media gives too much attention to coverage of false information in an effort to seem unbiased, like what we see in instances of climate change? So that's a really interesting question. You know, there does have to be a discussion about facts in an open space. The thing that I think is the issue here is that it's about pitch. As I mentioned before, people tend to get engaged by the bold headline but then move on, so they miss the more nuanced, detailed explanation about why something is not quite accurate. So I think it's about pitch. It's about engagement. And then there's also an issue with people clicking on things and sharing them without actually reading them first.
So there's a bit of behavioural change on the part of the consumer that's required as well, if we're going to think about this in a big-picture way about how we might address it. I've seen some really neat ways of dealing with coronavirus myths where people have created really fact-based videos and explained something very clearly. And in those circumstances you've got people who are highly engaged, and visual cues to help you retrieve the information. So I think that is quite a promising approach. Thanks for that, Erin. Our next question comes from John, and I'm going to direct this question to Robert. Robert, John asks: how do we know how large the filter is in a filter bubble? And I'm just curious if there's a way for users to actually diversify what appears in their Google searches, Facebook news, et cetera. Okay, thanks John for the question. So one of the challenges with this is that the information required to answer questions like you're posing is owned by the social media companies, and it's difficult to access that information unless you're within the company or you have a contract with them that probably limits the sort of research you can do. But there was a paper that was published which was interesting, because the researchers, Facebook researchers, were able to compare the impact of the filter, the algorithmic sorting or the filter bubble within Facebook, and compare that to the echo chamber effect. So this is the tendency for people to self-select who they're going to be friends with, and that influences the information they see. And what these researchers found, measuring impact in terms of exposure to cross-cutting or cross-ideological information, was that algorithms had a smaller effect in reducing people's exposure to cross-cutting information compared to this self-selecting effect.
So that was, I guess, a convenient finding for Facebook, because it meant that they could then say, well, it's not our algorithms doing it, it's actually our users and who they're choosing to be friends with. But as you can see, it's a challenging thing to get data to answer. I also just wanted to mention how you can actually diversify the information that you get exposed to. I think with Google searches it's relatively easy, although Google are very smart and very able to know exactly who's searching via IP addresses and things. If you are logged in to your Gmail account, then they know what you're doing and they're tracking it, and you're going to be getting, I guess, a filter bubble effect in that regard. If you're not logged in with your Gmail account, or if you clear your browser preferences, there are ways for you not to be tracked, and that would therefore reduce the filter bubble effect. Facebook is a harder thing, because it's completely internal within Facebook, and so I'm afraid I don't have a good suggestion for you on that score. Thanks for your question. Thanks very much, Robert. Our next question is for you, Jan. Ben is asking: people need to have coping mechanisms, even if that means holding on to denial and/or anxiety. Would you like to comment on this? Ben and I have been secretly having this conversation in the chat box, as many people might have already seen. And I do agree that anxiety and denialism are often part of people's coping mechanisms. But as with all coping mechanisms, they become problematic when they're harmful either to themselves or others. And the dissemination of misinformation, particularly conspiracy theories and denialism, is certainly harmful to others if not to yourself. And that's the point at which people need to be encouraged to develop new coping mechanisms.
I'm thinking about people who drink too much in order to cope with their anxiety or low mood. We're not going to encourage them to continue along that path just because it's their coping mechanism of choice. And I think the same thing applies to these pathological coping mechanisms, if you like. Thanks for that, Jan. I have a question coming in from Sally, and Katie, I'd like to direct this question to you. Why do you think Australia has been so much more effective in containing the virus than some other countries? What did we get right? Yeah, it's a good question, isn't it? I think probably the border decisions have been quite effective. We've also had pretty good testing levels per capita. Our testing levels have been quite high, so we've been able to detect transmission and follow up on it quite quickly. There has been some discussion about how much seasonality may play a role there as well, that maybe we were lucky it happened in our summer initially, and so perhaps it wasn't spreading quite as effectively in Australia as elsewhere. I'm not suggesting for a moment that it's going to suddenly disappear in the Northern Hemisphere summer. I don't think it'll go that low, but I think perhaps that was another factor that helped us a bit. But we were also pretty prepared for it. So I think for the most part we responded quite effectively. Thanks very much, Katie. Jan, I'd like to come back to you. There is a question here from Rachel. Do you think there'll be any long-term and ongoing differences in how people approach their personal health care? Will the negative or positive impacts seen at the moment be more prominent? I actually think there will be long-term differences in the way people approach their health care. Hopefully they're going to seek reliable information more readily. And one thing that's going to happen is they're going to be better with their hygiene in the future, which can only be said to be a positive.
I do think people, encouraged by all the talk in the media, are probably going to seek help for their mental health difficulties, and know where to find that help, more in the future. So these are, I think, particularly positive self-care outcomes. And maybe, too, the flow-on effect of improving their relationships within their families is going to do good things for their health generally and the way they take care of themselves. So I think that's true. The only negative thing that I'm seeing at the moment that I think is sure to happen is that for some time people are going to be a bit more anxious about interpersonal contacts. And unless that gets out of hand, that's not necessarily going to be a negative thing. Thanks very much, Jan. Erin, another question coming in from Nicholas that I'd like you to tackle. How can I tell, he asks, whether information is true or reliable? Thank you for that one. That one's tricky for a number of reasons. Simply by reading the information yourself, there are a few cues that the brain uses to assess whether information is accurate. We tend to ask ourselves: does that seem coherent? Does it come from a credible source? Is there general social consensus for that idea? And does it fit with other things that I know? And those can be very useful strategies in assessing whether information is correct. But we also know from research in cognitive psychology that people can really be tricked by a number of different biases. I talked about familiarity, but also, simply putting a decorative photograph with a claim, or highlighting something in a particular colour, can move around people's impressions of information. So I guess the point I'm making there is that people are very sensitive to the way information is presented to them. So the take-home there is: can you just use your brain to read something and establish whether you think it's true or false?
Yes, but you need to supplement that by also drawing on other sources of information. And as we've all discussed throughout the webinar today, I think the best approach there is to find some information channels that you can trust: read information yourself, but then also make sure that you're drawing on credible sites. Thanks for that, Erin. Katie, we have a question here from Mahin that I think would be best directed towards you. Mahin asks: I'm wondering if networks are used for modelling. I understand that the network, that is, the people who've been in contact with a sick person, for example, is used to prevent the spread of the virus, but are networks being considered in models, or can data from this pandemic be used to improve the models for the future? Yes, absolutely. There are lots of approaches. Well prior to the pandemic, there have been a lot of studies where they've gone out and asked people to track how many contacts they have over a period of time and how close the contact is. Do they have physical contact or not? How long were they within a certain range of each other? What age was the person they had contact with? And a lot of that data is used in the models that we use today. We see that mostly people have contact with people about the same age as them, but there's a fair bit of contact between people and their children and some contact between people and their grandchildren. So there's a real mix of different-age contacts. But yes, we use all that data all the time. In terms of whether we can use data from the coronavirus outbreak, I think most of us don't have as much time to actually be gathering data at this stage; we're trying to use what's already there to inform policy. I don't think, for example, the COVID app that the government's produced will help there; I think the privacy provisions mean that the data from that can't be used for modeling. But certainly we would welcome that kind of data collection.
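The way contact-survey data of this kind feeds into models can be sketched with a toy next-generation-matrix calculation. The contact matrix, transmission probability, infectious period, and the helper `r0_from_contacts` below are all invented for illustration, not taken from any survey or fitted model: the point is only that age-mixing data of the sort Katie describes translates directly into an estimate of the reproduction number.

```python
# Toy next-generation-matrix calculation from an age-structured contact
# matrix. All numbers are invented for illustration, not survey data.

# CONTACTS[i][j] = assumed mean daily contacts a person in group i has
# with people in group j (groups: children, adults, elderly)
CONTACTS = [
    [8.0, 4.0, 1.0],
    [3.0, 6.0, 2.0],
    [1.0, 3.0, 3.0],
]

P_TRANSMIT = 0.05      # assumed transmission probability per contact
INFECTIOUS_DAYS = 7    # assumed mean infectious period (days)

def r0_from_contacts(contacts, p, days, iters=200):
    """Dominant eigenvalue of the next-generation matrix (power iteration)."""
    n = len(contacts)
    # ngm[i][j]: expected infections in group i caused by one case in group j
    ngm = [[p * days * contacts[j][i] for j in range(n)] for i in range(n)]
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = [sum(ngm[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(w)
        v = [x / lam for x in w]
    return lam

if __name__ == "__main__":
    r0 = r0_from_contacts(CONTACTS, P_TRANSMIT, INFECTIOUS_DAYS)
    print(f"R0 implied by this toy matrix: {r0:.2f}")
```

Changing any row of the matrix (say, halving children's school contacts) changes the implied reproduction number, which is one way models assess interventions that target particular age groups.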
Thanks very much, Katie. I think we have time for one more question. I'm going to direct this to Jan. Jan, just asking you to be mindful that we are close to finishing, but there's an important question here from Risha. They ask how misinformation can play a role in developing stigma and stereotypes against a particular group or community, and what a lay person might do to protect themselves in this regard. I think this is a really good question and a source of great distress for many people in the community. I think misinformation around particular groups is actually being used for political gain at the moment. The coronavirus is referred to as the Chinese virus in various quarters, and that has led to particularly nasty stuff going on for people of Asian appearance, as I mentioned earlier. I'm not sure what the answer is to your question about how to protect oneself from that sort of bias or racism in the community, because I think it's so painful and so tragic that it's happening. I'm wondering if the clinical psychologists on the panel have anything to say about how one might protect oneself in that kind of situation. That is a very interesting question, and I'm happy to bring panelists in, but I'm also mindful of time, Jan, so perhaps we'll leave it there and bring our webinar to a close. Thank you, everyone, for these excellent questions. Thank you for your interest and engagement in tonight's webinar. This does bring the evening to a close. May I also note that, as attendees, you will receive an evaluation survey so that you can give feedback on the webinar. If I can encourage you to complete this survey, it would be very much appreciated. Also, I'd like to remind you that we will be presenting a second webinar next Thursday at 8 p.m., focusing on emotional and social tools that promote mental well-being during crises such as this one.
In closing, I would like to thank the College of Health and Medicine at the Australian National University, the Black Dog Institute, and the many staff at these organizations who have helped organize tonight's webinar. And finally, I'd like to thank the many communities and the members of these communities in Australia who are finding ways to help and support those of us most impacted by COVID-19. We extend our wishes for a healthy and happy week. Good night.