Hi everyone. Thank you for joining the Judy Slee psychology seminar series. Just a few housekeeping rules. A reminder that this seminar is being recorded and will be available on the psychology events page and YouTube, and will also be sent out via email. I'll also mention that we've been having some issues with our YouTube account, but as soon as it's all available it will be sent out. Upon entry to the webinar you've all been muted, and we ask that you please stay muted for the duration of this seminar. If you have any questions you can write them in the question box below at any time, and we'll have question time at the end. Thank you, and I'll hand over to Erin. Thanks Mikaela. Hi everybody, thanks for coming again to our Judy Slee seminar. I'll start with an acknowledgement of country. We acknowledge the Ngunnawal people, who are the traditional custodians of the land where the ANU sits, and pay our respects to elders past, present and emerging, and extend that respect to any Aboriginal and Torres Strait Islander people joining us today. I also wanted to flag that I'm looking for someone to take over the convener role for these seminars, because I've switched to part-time with my PhD now. For any students who would be interested in doing that, it basically just involves organizing the speakers for each week, and you do that at the start of the semester. Supervisors, if you want to have a chat with your students about whether that's something they might be interested in, it can be a student from any year, though second year or above might be best just because they'll know more people around the department. Anyone who's keen to take it on can send me an email and let me know. We will have a short seminar again today; our second presenter had to withdraw, unfortunately, so we will just have Nicole Tan presenting. Nicole is a first-year PhD student who completed her honours last year.
Her PhD research looks at the belief updating system in anxious individuals using Bayesian computational approaches. Her talk today is about using a Bayesian graphical model to understand the associations between the jumping-to-conclusions cognitive bias and anxiety. So I'll let Nicole take it away, and we'll have some time for questions at the end. Thanks Erin. Let me just share my screen, just one second. There we go. Hello everyone, my name is Nicole and I'm a first-year PhD student, as Erin just introduced, and I'm supervised by Claudie, Bruce and Junwen. Today I will be presenting my work on Bayesian graphical models of belief updating and anxiety. A belief like this may be present in mentally healthy individuals, but such a belief is much more common in highly anxious individuals. These beliefs are known as irrational beliefs because they do not correspond to the true probability and state of events, but the events themselves are not impossible. When presented with evidence that contradicts their irrational beliefs, anxious individuals do not tend to revise their beliefs to match the actual state of the world. This tendency has been widely discussed in several cognitive models of anxiety, and the common theme across these models is that irrational beliefs stem from a heightened perception of threat-related information. Despite the rich literature, there still seems to be a lack of systematic research on why anxious individuals are less likely to update their beliefs, and what exactly contributes to that bias. So that leads to my PhD research question. The first intuition is that anxious individuals may have a different belief updating system compared to non-clinically anxious individuals. Before I jump to my research methods, I think it is important to briefly talk about some rational models of belief updating.
Throughout my literature search so far, I have found two rational models, namely the AGM model and the Bayesian model. The AGM model is named after its creators, Alchourrón, Gärdenfors and Makinson. This model is mainly applied in artificial intelligence. It assumes that a belief is a set of sentences that follow some logical consequences. The emphasis is that belief revision follows minimal change and therefore results in minimal loss of information. So this model outlines the processes in belief revision, that is, how agents make use of new information to restructure their belief sets in order to maintain consistency in the belief system. Using the AGM model, we could explore which process in belief revision is being overused or underused by clinically anxious individuals, and thereby explain the biases in their beliefs. However, the AGM model has not been applied in psychology, even though there has been some theoretical push for it. Another drawback is that this model assumes that all beliefs can be expressed semantically and that beliefs are all or nothing. While this assumption may hold for artificially intelligent agents, we humans do not have perfect competency in logic, nor do we have infinite brain resources. So this rational model may not be an ideal option for studying beliefs in humans. Then we have the Bayesian model. The Bayesian model assumes that humans infer which hypothesis best describes the world. Belief revision, or updating, is a process of incorporating new evidence with the individual's prior knowledge about an event, thereby producing an updated belief known as the posterior belief. In Bayesian models, beliefs are often expressed in terms of probabilities. This model has been adopted in a wide variety of psychological research, including learning, reasoning, causal inference and much more. But just like the AGM model, the Bayesian model faces a few criticisms.
For example, some researchers have argued that Bayesian models are unfalsifiable, because researchers can play around with the likelihoods and priors to make the data look Bayesian. A way to tackle this is to do model comparison, that is, comparing the goodness of fit between, for example, Bayesian belief updating and frequentist belief updating, to see which model is better at explaining the data. Another classic criticism is that humans are not naturally good statisticians, and therefore when we make decisions we often deviate from the Bayesian model. However, some studies have demonstrated that certain clinical populations, including patients with delusions, were actually more Bayesian compared to the normal population. In fact, the healthy controls were considered overcautious in their reasoning and belief updating. While this model has been heavily adopted in the research of delusion, there has been relatively little application to the anxiety population. So this is why I have chosen to use the Bayesian model as a rational belief updating framework in my research. In terms of measuring belief updating, I will be using a classic paradigm known as the beads task. In this task, participants are shown two jars. For example, jar A contains 85 red beads and 15 blue beads, and jar B would have the reverse proportion of beads. There have been a few variants of the beads ratio in the literature, as shown in this table. During the task, participants are told that the experimenter will choose one of the jars behind a screen, so that participants can't see which jar is being chosen, and either jar is equally likely to be chosen. Then the experimenter draws a bead out of the chosen jar, and the beads are drawn with replacement. This seemingly random sequence of beads is actually predetermined. So the task of the participants is to determine the source of the beads being drawn, either jar A or jar B.
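As a concrete sketch of the inference problem the beads task poses, here is a minimal Python implementation of standard (unweighted) Bayesian updating for the 85/15 example above. The function and variable names are mine, purely for illustration; they are not part of the task itself.

```python
# Beads-task setup as described: jar A is 85% red / 15% blue,
# jar B the reverse, and either jar is equally likely a priori.
P_RED_A, P_RED_B = 0.85, 0.15

def posterior_jar_a(draws, prior_a=0.5):
    """Posterior probability that the beads come from jar A,
    given a sequence of draws ('R' or 'B') made with replacement."""
    like_a = like_b = 1.0
    for bead in draws:
        like_a *= P_RED_A if bead == "R" else 1 - P_RED_A
        like_b *= P_RED_B if bead == "R" else 1 - P_RED_B
    return prior_a * like_a / (prior_a * like_a + (1 - prior_a) * like_b)

print(posterior_jar_a("R"))    # a single red bead already gives 0.85
print(posterior_jar_a("RRB"))  # two red, one blue: the blue bead cancels one red
```

Because the draws are with replacement, only the net count of red minus blue beads matters here, which is why "RRB" lands on the same posterior as a single red bead.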
After each single bead is drawn, participants decide whether they want to see more beads or whether they have already decided on the source of the beads. Usually the maximum number of beads that participants can observe is 20. The common dependent variable measured in this task is the number of beads observed before making a decision, which relates to the jumping-to-conclusions bias. The jumping-to-conclusions bias is defined as the tendency to decide after seeing two or fewer beads. This reasoning bias has been consistently observed in clinical delusion. Interestingly, some researchers have also been able to observe this bias in clinical anxiety, but the findings are inconsistent. This contention stems from a lack of a clear mechanism illustrating how anxiety interacts with the belief updating system. Both a relatively low decision threshold and an overweighting of evidence can contribute to this bias. However, as these variables are often difficult to measure explicitly, they remain relatively unexplored in the anxiety population. Therefore, one of my research goals is to propose a model of belief updating which focuses on the influence of anxiety on both evidence weights and decision threshold, and today I'll be presenting the evidence weights model. Here I propose two versions of the Bayesian graphical model based on the classic beads task. The first version is the simpler probability judgment model, whereby participants report their perceived probability that a given bead sequence comes from one jar rather than the other. The beads whose color appears most frequently in a given sequence will be referred to as the dominant beads, and the beads whose color appears least frequently are the secondary beads. In this graphical model and the following one, you will see several types of nodes. The shaded nodes here are the observed variables, that is, variables that we can measure in the beads task, and the unshaded nodes represent unobserved variables.
Square nodes like this represent discrete variables, and round nodes like this are continuous variables. Single-bordered nodes like this are known as stochastic variables, which are variables assigned a distribution like this, and double-bordered nodes are deterministic variables, so they are fully determined by other parameters in the model. In our models, we also make some assumptions about the underlying distributions, and we acknowledge that there are other distributions that may better describe the parameters; refining the models with more appropriate distributions is currently one of my goals. So suppose evidence weights are computed following the formula here, just like a regression equation. The major component is the influence of anxiety, which is assumed to follow a normal distribution. The average evidence weights for dominant and secondary beads, denoted AD and AS, are assumed to follow a log-normal distribution with a mean of one. This implies that at the population level, on average, individuals weight all observed beads equally. Any deviation of evidence weights from the population level comes from the influence of anxiety. The node A here represents trait anxiety level, which differs from participant to participant. Evidence weights are assumed to influence participants' probability judgments through their prior and posterior beliefs. In our model, posterior beliefs follow a weighted version of Bayes' theorem. For example, the posterior belief that the beads come from jar A depends on the participant's prior belief, the number of red and blue beads observed for that particular trial, the objective likelihoods that beads come from jar A and jar B (that is, the known beads ratios for both jars), and the evidence weights assigned to the observed beads. The final observed node in this model is the participant's reported probability judgment, denoted PR.
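The exact weighted form of Bayes' theorem isn't written out in the transcript, so the following is an illustrative assumption, not Nicole's actual equation: one common way to let evidence weights enter is as exponents on each bead's likelihood contribution. All names and the parameterization are mine.

```python
def weighted_posterior_a(n_red, n_blue, w_red, w_blue,
                         prior_a=0.5, p_red_a=0.85, p_red_b=0.15):
    """Posterior that the beads come from jar A when each bead's
    likelihood contribution is scaled by an evidence weight.
    Weights of 1 recover standard Bayesian updating; weights above 1
    over-weight the evidence, and weights below 1 under-weight it."""
    like_a = p_red_a ** (w_red * n_red) * (1 - p_red_a) ** (w_blue * n_blue)
    like_b = p_red_b ** (w_red * n_red) * (1 - p_red_b) ** (w_blue * n_blue)
    return prior_a * like_a / (prior_a * like_a + (1 - prior_a) * like_b)
```

Under this sketch, an anxiety-driven shift of the weights away from 1 pushes the posterior toward over- or under-confidence relative to the rational benchmark, which is the kind of deviation the model is designed to detect.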
PR, for now, is assumed to follow a normal distribution, with a mean given by the normalized posterior belief about the source of the beads, and also a precision parameter. The precision parameter is the inverse of the variance, so the bigger the precision value, the smaller the variance. And here is the second version, which is an extended model. Everything remains the same, but instead of reporting a perceived probability, participants report a three-choice decision about the source of the beads. Their decision could be that the bead sequence comes from jar A, or that it comes from jar B, or that they would like to see more beads before deciding. Participants' posterior beliefs contribute to the accumulation of evidence, which is assumed to be the log odds that the beads come from one jar rather than the other. This accumulated evidence is compared against the participant's information threshold, which we fix at the log of a Bayes factor of three. This means that when participants have observed substantial evidence supporting a hypothesis, they are able to make a decision. For example, when the accumulated evidence for jar A exceeds the log of three, participants would decide that the observed bead sequence comes from jar A. The same logic applies to jar B. If the accumulated evidence does not exceed the participant's threshold, then the participant requests to see more beads, until the accumulated evidence exceeds the threshold. So that was my proposal of Bayesian graphical models based on the classic beads task, which we can use to understand how anxiety influences belief updating through evidence weights. Now I'd like to talk a bit about some of the challenges that I'm facing, and also future extensions of these models. I guess the biggest challenge right now is to assign an appropriate distribution to the perceived probability about the source of the beads.
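The decision rule just described can be sketched as a sequential log-odds accumulator with a log(3) threshold. This is a minimal, noise-free version (matching the current state of the decision model), with names of my own choosing.

```python
import math

P_RED_A, P_RED_B = 0.85, 0.15
THRESHOLD = math.log(3)  # information threshold: log of a Bayes factor of 3

def decide(draws, max_beads=20):
    """Accumulate log odds for jar A versus jar B one bead at a time,
    stopping as soon as the accumulated evidence exceeds the threshold.
    Returns the decision ('A', 'B', or 'undecided') and beads seen."""
    log_odds = 0.0  # flat prior over the two jars
    for i, bead in enumerate(draws[:max_beads], start=1):
        if bead == "R":
            log_odds += math.log(P_RED_A / P_RED_B)
        else:
            log_odds += math.log((1 - P_RED_A) / (1 - P_RED_B))
        if log_odds > THRESHOLD:
            return "A", i
        if log_odds < -THRESHOLD:
            return "B", i
    return "undecided", min(len(draws), max_beads)
```

One thing this makes visible: with an 85/15 ratio, a single bead carries a likelihood ratio of 85/15, which already exceeds a Bayes factor of 3, so this rational accumulator decides after one bead. The bead ratio, the threshold, and any per-participant evidence weights therefore jointly determine how many beads a decision "should" take.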
Currently, as presented, I'm using a very basic normal distribution, but we know that it is less appropriate, if not inappropriate at all, because probability judgments for later trials in the beads task tend to be skewed in one direction, which implies that participants' confidence about the source of the beads will be very high as they observe more and more beads. I have also tried using a beta distribution, which intuitively seems more appropriate due to the lower boundary of zero and upper boundary of one in the data. But when I introduce random noise into the data, the beta distribution does not seem to make sense anymore, because again, the data tend to be so skewed that I can't introduce higher random noise for the later trials. Another challenge is to introduce noise into the decision model. Currently, my decision model is noise-free, which would not be ideal for modeling real-life data. On the brighter side, once the distributions are fixed, I can start running simulation studies to assess my models' properties. Some of the conditions to test include increasing between-individual differences, varying sample sizes and numbers of trial repetitions, as well as finding out whether the model is better at estimating a high bias in belief updating or a low bias. Once these models are ready, I can collect real experimental data from actual participants to see how my models behave. If these models are successful, then they will allow for an understanding of how individuals with high and low anxiety assign evidence weights, and therefore how that influences their decision making. The models will also allow us to compare the deviations from rationality, in terms of belief updating, for individuals with high and low anxiety. So that is all from me today, and thank you so much for listening. Thanks so much, Nicole. That was a great presentation, really clear.
And yeah, for people like me who don't understand stats that well, I managed to follow along very easily, so that was good. If you have questions for Nicole, you can pop them in the Q&A or the chat, and then Nicole can read them straight from there. I see that someone's got their hand up. Yep, I think your microphone's been turned on now. Okay. Yeah, I'm Mike. Sorry about that, I've only got my U number in there. Nicole, I'm a little surprised that, if you're having trouble with the beta distribution, you haven't come to consult what frankly is one of the world's experts on beta regression. Jay Verkuilen and I introduced this to psychology, and I could actually help you a lot with the model that involves the beta distribution and introducing random noise into it. The beta distribution has a mean and a standard deviation just like the normal distribution does, but it has to be re-parameterized in such a way as to be able to introduce random noise into it. So I don't think that's going to be a huge problem for you; we can discuss that. Going back to your normal model: a probability cannot possibly follow a normal distribution. This is not just a matter of intuition. The normal distribution's support is the entire real line; normal random variables can take negative values, or positive values of whatever magnitude you want. Probabilities are stuck between 0 and 1, and you can't get away from that. What you could do, if you want to use a normal model to compare with a beta model, is take the log of the odds. You take your probability divided by 1 minus the probability, and take the log of that, and that transforms your probability variable onto something that occupies the entire real line, and you can apply a normal distribution to that. That could be really interesting to do. It won't require a big modification of your model at all; it just requires that transformation.
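The two suggestions here, re-parameterizing the beta distribution by its mean and precision, and logit-transforming probabilities so that a normal model becomes legitimate, can be sketched as follows. The mean-precision mapping shown (alpha = mu * phi, beta = (1 - mu) * phi) is the standard one used in beta regression; the variable names and the example values are mine.

```python
import math
import random

def beta_mean_precision(mu, phi):
    """Map a (mean, precision) pair onto the standard beta shape
    parameters: alpha = mu * phi, beta = (1 - mu) * phi."""
    return mu * phi, (1 - mu) * phi

def logit(p):
    """Log odds: maps a probability in (0, 1) onto the whole real
    line, where a normal distribution can legitimately be applied."""
    return math.log(p / (1 - p))

# Noisy probability judgments centred on a model posterior of 0.85:
# larger phi means tighter (less noisy) judgments around the mean.
alpha, beta = beta_mean_precision(mu=0.85, phi=50.0)
noisy_judgment = random.betavariate(alpha, beta)
```

With this parameterization, noise can be controlled directly through phi while the mean stays pinned to the model's posterior, which is exactly what the plain (alpha, beta) form makes awkward.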
And then thirdly, I would suggest that you compare what your subject is doing with the standard Bayesian model for updating on the basis of receiving the evidence one at a time, looking at the beads and trying to decide whether they come from one jar versus the other. The standard Bayesian model for that is a Bayesian updating procedure. Again, really, really straightforward. That way you could compare what your subject is doing directly with what a truly rational Bayesian updater would be doing. So if you combine those things together, I think you'll be well on your way. Yeah. Thanks, Mike. You've given me hope in terms of modelling. I was feeling a bit hopeless, because I know that the normal distribution is not right, since it can go negative, but I was having so much trouble with the beta distribution as well. I think I've made the right choice to invite you to this. Thanks for that, Mike. So, other questions for Nicole? You can raise your hand so we can turn your microphone on, or you can just put them in the chat for her to read out. Well, I've got one more. When you listed the limitations of the Bayesian updating model, there are a couple of others that would be worth considering as well, because if you've got savvy examiners of your PhD thesis, they might raise these. So you're better off raising them first and then kind of getting rid of them, to anticipate that. Bayesian updating models have been widely criticized on two grounds in addition to the ones you've listed. One is that a single probability, as a degree of belief, cannot represent ignorance. Right? So suppose I'm starting off with the jars, and let's suppose I'm a rational agent, but as a rational agent I want to express that I have no idea what the probability is that this stuff is going to come from jar A versus jar B. My justifiable position is that my personal probability is anywhere between zero and one. I have no reason to settle on a particular probability.
And in fact, as a rational agent, I would view with alarm anybody who would insist I should give a precise probability. Let me give you an example. Right now, each of us has a precise weight. Do any of us know what that is? No, we do not. Should I therefore assume that inside our heads somewhere is a precise degree of belief that we have access to? That just doesn't make sense. If I think about, you know, how likely I think it is that it's going to rain tomorrow, maybe it's between 0.6 and 0.8; I'm not sitting here going 0.7325914, right? So that's one quite dominant criticism of Bayesian models: they insist on precise beliefs. And the second problem: let's suppose you're a highly anxious person who believes that if they go out to eat alone, people are going to laugh at them and regard them as a loser. Suppose they believe that is a certainty, probability one. Now, as a Bayesian, you can't update from that. You can pile up all the evidence you want, but as a Bayesian, you will never shift away from one. Conversely, as a Bayesian, if your prior degree of belief is flat zero, same problem: updating won't do any good, you won't shift away from zero. So Bayesian approaches to representing belief cannot deal with certainty. It is not irrational, according to Bayesian models, if someone starts off by saying, "I'm absolutely certain that when I go out to eat alone, people are going to laugh at me." If that's where they are, then they're perfectly rational in staying there, no matter how much evidence you present them with, according to a Bayesian view of belief. Now, we all know that that's actually not a sensible thing to do. Even if a person declares they're absolutely certain about something, they should still be movable if you present them with enough evidence. But the Bayesian updating procedure does not allow for that, and that's a really crucial flaw. Yeah, thanks for reminding me of that, Mike. Thanks, Mike, for giving us all of that information.
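The certainty problem described here can be seen in one line of arithmetic: with a prior of exactly 1 or 0, Bayes' rule returns the prior unchanged, whatever the likelihoods. A minimal sketch (names mine):

```python
def bayes_update(prior, like_if_h, like_if_not_h):
    """One step of Bayesian updating for a hypothesis H, given the
    likelihood of the observed evidence under H and under not-H."""
    return (prior * like_if_h
            / (prior * like_if_h + (1 - prior) * like_if_not_h))

# Extreme priors are immovable, no matter how strongly the
# evidence favours the other hypothesis:
print(bayes_update(1.0, 0.01, 0.99))  # 1.0
print(bayes_update(0.0, 0.99, 0.01))  # 0.0
```

The zero (or one) term multiplies through every subsequent update, so no finite amount of evidence ever moves the belief, which is exactly the flaw being raised.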
I think it actually really highlights one of the key things that I hope students get out of these presentations, which is that they can present the challenges they're having and get more information from the collective mind of RSP, and maybe get to talk to people they may not otherwise get time to chat to, or just get different opinions from people. And I think that can be one of the advantages of presenting earlier in your PhD, rather than waiting until you've done three studies: you can get some of this guidance really early on. So I hope that Nicole's presentation today has given some of the earlier PhD students confidence that they can present in this setting, that it will go well, that no one's going to be mean to you, and that you can get some really useful information from doing it. Are there any other questions that anyone has for Nicole today? I have a question. Nicole, I want to say that was excellent. I really enjoyed your presentation, and I echo Erin's comments: it's quite a complicated topic for many of us, and you explained it really well. So my question is not actually about the research itself, but whether you have any advice for PhD students, or any other students, who are facing some statistical complexities: how you've approached that, and what you've found helpful in learning quite complicated models for your research. Thanks for that, Kristin. I think I'll just share my experience. I had absolutely no experience coding in R or with computational models; I started picking it up at the start of my PhD, so that was about 11 months ago. And when I was preparing for a conference on the same topic, I was freaking out because I thought my model is not perfect, no one's going to love my model, it's just a bunch of crap.
But I guess, for those PhD students or colleagues who are having that kind of low confidence and doubting themselves, you should always talk to your supervisors, because they are absolutely amazing and lovely. Because of their support, I decided to present at the conference, and I loved it; I enjoyed it so much. So you can doubt yourself, but be nice to yourself, and talk to your supervisors when you're in doubt. And also, always Google before asking. Google is your best friend, even in your PhD. Yeah. Thank you very much, Nicole. Okay, it looks like we don't have any other questions for Nicole that I can see. So yes, thank you again, Nicole, for sharing your research with us today, and for doing so early in your PhD. I think that's great advice: talk to your supervisors, put yourself out there, and try these things and see how they go, even if you are having some doubts yourself. Everybody who's presented at these seminars has told me afterwards how helpful it's been for their research, their confidence and their presentation skills. So I really hope that other students feel inspired by you and keen to give it a go. As I said, if anyone else is interested in taking on the convener role, let me know. It'd be great if we could do a slow handover, so I can show you what's involved before the end of the year. So feel free to send me an email, or you can contact Kristin and she can pass you on to me. Thanks again, everybody, for coming along today, and I'll let everybody go early to have their lunch. I'm not sure what we have on for next week. Kristin, you might know. Yes, I do. So, as a spoiler alert for next week, we have four of our brand-new staff in the social psychology area here in the Research School of Psychology, who are going to dazzle us with their excellent research on social issues in a range of different areas.
So it's a social theme for next week. And we're really looking forward to hearing from our new colleagues. So have a wonderful day. Thank you for joining us and great talk, Nicole. So have a great rest of the week, everyone.