So today we're going to be talking about how we can use behavioural insights to understand device decisions. I'm going to start with a quick quiz: what do the following have in common? Choosing between hearing aids, and toilet paper stockpiling. At first glance these two things might not seem to have a lot in common, but in fact both of them, at least in part, can be understood through behavioural insights, and we're going to talk a little bit about that today. Behavioural insights (BI) draws on fields including behavioural economics, and it seeks to understand what drives behaviour, why we make the decisions we do, and the factors that influence those decisions. We like to think of ourselves as fairly rational creatures: we look at all of the options available to us, we carefully weigh up the pros and cons of those options, and we use this information to calculate the best decision for us. But humans aren't robots. While we like to think of ourselves as rational, we often make decisions, and see others making decisions, that could be considered irrational. Yet while they look irrational, these decisions result from ways of thinking that are actually very predictable, and BI seeks to understand how those different factors influence our thinking. There is a huge number of principles and biases that explain our behaviours. For example, humans are really motivated to avoid loss. We fear loss so much that we will actually expend more energy to avoid a loss than to secure a gain of equivalent value. We are also strongly influenced by the people around us, especially if those people seem similar to us. And while we like to think of ourselves as fairly rational information processors, the information we take in is coloured by our beliefs and biases, and by the way that information is presented, and who presents it to us.
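The loss-aversion idea above can be made concrete with the prospect-theory value function of Kahneman and Tversky. The sketch below is illustrative only; the parameter values are the commonly cited estimates from Tversky and Kahneman's 1992 paper, and the function name is my own.

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Subjective (felt) value of an objective gain (x > 0) or loss (x < 0).

    alpha, beta: diminishing sensitivity to gains/losses.
    lam: loss-aversion coefficient; losses are weighted ~2.25x gains.
    Parameter values are the Tversky & Kahneman (1992) estimates.
    """
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)

# A $100 loss "hurts" more than a $100 gain "helps":
gain = prospect_value(100)    # felt value of gaining $100
loss = prospect_value(-100)   # felt value of losing $100
print(abs(loss) > gain)       # True: the loss looms larger
```

This is why, as the talk notes, people will expend more effort to avoid a loss than to capture a gain of the same size: the same objective amount is weighted more heavily on the loss side.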
So behavioural insights considers our thinking as resulting from two different systems. On the one hand there is a system that is slow, considered and rational; it's the system we tend to think of when we think about ourselves making a decision, and it's where we like to think most of our decisions take place. This system, known as System Two, is where we like to expend our energy when we have a really difficult decision to make. But it's fairly resource intensive, and most of the time it prefers to hand over to System One, which is much faster, more automatic and intuitive. Most of the time this balance of duties shared between the two systems works pretty well. But System One is a little prone to going for answers that are intuitive and coloured by biases, and so the results from these two systems don't always line up the way we expect. BI therefore seeks to understand not only those influences on decision making, but how we can use them to influence behaviour. So when is BI useful? As for many things at the moment, it's all about a very familiar curve. Let's consider a target behaviour that varies in how much engagement people may expend on it. That's along our x-axis, from low engagement on the left to high engagement on the right. Participation, we can see, varies with the amount of engagement people have with the target behaviour. On the left, there will be people who, no matter how much support or motivation we try to provide, are just not interested in engaging with that behaviour; no matter what we do, they just say nay. On the right, we've also got a group who are really motivated regardless of whether they have been given much support or motivation. That's really good, because it's a group we often don't need to worry too much about: they're intrinsically motivated enough to carry out the behaviour.
The group that BI teams are usually interested in is the group in the centre. This group aren't necessarily averse to participating in the behaviour; maybe they just haven't had sufficient support or opportunity yet to engage. So BI actually wants to flatten this curve: to help those in this middle group who aren't participating, but want to, to move to fuller engagement with the target behaviour. Flattening the curve means understanding that for some people, a gap exists between what they intend to do and what they actually do. For example, rational me, when leaving the house for the shops, has no intention of buying any toilet paper. I know that I have stocks in the house. I'm in no danger of running out. I'm not really concerned that there's going to be a worldwide shortage in the coming weeks. So I have the best of intentions not to engage in any of the panic buying I see reported on the news. But when I reach the store, my feeling of anxiety might be a little more heightened, and when I'm surrounded by other people picking up toilet paper from a newly stocked shelf, my fear of missing out, or simply seeing what those people around me are doing, may make me prone to swaying from that intention and carrying out a behaviour I didn't intend to do. So BI is all about looking at what causes this gap between intention and action: the context in which the behaviour occurs, and the biases and difficulties at play. BI is not about forcing people into action, or introducing onerous fines to stop them doing behaviours we don't want them to do. Rather, it's about building a bridge: we want to help people who intend to act to act in the way that they wish to. So where can BI be used for hearing health decisions? We know people say that hearing is important to them, but we also know that this doesn't always translate into action.
We know that more people need help than make the decision to seek it. We know that more people would benefit from hearing aids than choose to get fitted. And we also know that many people are choosing hearing aids that may not be the best ones for their hearing needs. It's this decision we want to talk to you about today. For a little context: we're working with clinicians in Australia who are having conversations with first-time clients with aidable hearing loss who are seeking help under the Hearing Services Program. Within this program, clients can choose between an entry-level basic device, which is fully subsidised, or they can contribute their own money towards a higher-level device, which is partially subsidised. This is the decision that clinicians and first-time clients were working through. We said before that we're looking for gaps between intention and action, so where are the choice gaps? When we asked clinicians about higher-level devices for clients, they told us that they really believe those higher-level devices provide a superior experience in many ways compared to the entry-level devices, and in fact they believed that 65% of their clients would benefit from them. So the clinicians' intention was to have a supportive discussion with all of the clients they believed would benefit. But they also told us that those conversations were often difficult, and they believed that maybe only about 41% of clients were actually interested. Further, the clinicians also noticed a gap between those clients who were interested and intended to go ahead with higher-level devices and those who actually did proceed. There was a concern that clients were missing out on the benefits that higher-level devices could provide, not because of a lack of interest, but because of behavioural barriers.
So the focus of this work is really to close those gaps: to improve device discussions and to help clients make an informed choice, choosing the level of technology that is right for their circumstances. We did this work using a BI research process. We started by learning about the context in which the decisions were taking place, and a big part of this was understanding the beliefs and experiences that make up the context in which decision-making occurs. One of the behavioural insights we looked at was confirmation bias. So why are beliefs important? The world is full of objective facts and personal beliefs, and when we bring them together they don't always line up. We tend to focus on things that confirm our beliefs, and we tend to ignore the objective facts out there in the world that don't. We view the world through the lens of our personal beliefs, we interpret information through it, and we respond to the world in kind. So what were the beliefs at play? One of the questions we asked both clinicians and clients was about their beliefs around the benefits of higher-level devices compared to the more basic entry-level devices. Once again, the clinicians told us that they believed these higher-level devices were significantly better for their clients than the entry-level devices. But they also told us: look, we think our clients don't believe this quite the same way; they're not as convinced. Interestingly, though, when we asked the clients, they said that they actually did see benefits of higher-level devices compared to entry-level devices. The really interesting finding came when we asked clients to reflect on what their clinicians believed. Clients said they thought the clinicians weren't quite as convinced as they were about the benefits of higher-level devices.
So this is an interesting mismatch. Why does it occur? What's happening in these conversations? Well, we said before that clinicians believed the higher-level devices would be something clients would benefit from, but they also believed that clients weren't interested in having discussions about these devices. So when confronted by client questions, they were at risk of interpreting them through this lens, and confirmation bias starts to play a part. As a result, when confronted with questions from clients about the different devices, clinicians tended to interpret these as resistance on the client's part, and they responded by reassuring clients that entry-level devices would be fine for their needs. Clients, in turn, responded as you might expect: by taking their clinician's advice. And we see the loop closing, where the clinician has their belief confirmed: yes, the clients weren't interested in those higher-level devices. This loop we refer to as a self-fulfilling prophecy: the things we expect to be true, the beliefs we hold, actually drive the behaviours that fulfil what we expect to see. Now, I should note that there were a number of specific beliefs the clinicians held about why clients wouldn't be interested in, or would be resistant to, these discussions. We found that when the questions from clients matched up with these triggering beliefs, the confirmation bias and the self-fulfilling prophecy were even more likely to occur. So what about the choice about devices itself? What influences decision-making? Let me give a little bit of a sticky example. The scene is your local supermarket, and on offer is a big display of jam. On day one, the marketers set up their stall providing a limited choice of six flavours, and they get a bit of interest.
On day two, they see that they have more stock out the back, and they put out a lot more flavours of jam. As you might expect, since people love choice, even bigger crowds flock to the stall to sample the jam. So the question to you is: given these two scenarios, on which day will customers be most likely to buy jam? We know that people love choice; it interests and attracts them. But what the researchers found was that on the day with the more limited range of flavours, jam sales were actually much, much higher than on the day with lots and lots of choice. They explained this in relation to something called choice overload. Having too many choices overloads the system. The fear of losing, in this case choosing the wrong jam, is really overwhelming, and decision-making gets hard. This example will probably ring true for anybody who has taken a small child to an ice cream parlour with more than about three flavours: the decision-making is incredibly difficult, as you can see. So what causes choice overload? Remember our careful but resource-intensive System Two. Choice overload makes thinking pretty hard, and realistically System Two is a little bit lazy, so there are various things that can put System Two off the task. If we need to make multiple decisions, the system becomes increasingly fatigued and overloaded. If the perceived size of the choice makes the decision a difficult one, if it's perceived to have big consequences, overload is more likely to occur. And then there's the complexity of the choice: if there are multiple options or attributes that are difficult to compare, or there isn't an easy, salient, obvious choice, the choice becomes more difficult. In this situation, System Two finds thinking really hard and looks to System One to help out. The problem is that System One has three main drives: avoid loss, avoid loss, avoid loss.
System One isn't really designed to cope with complex decisions, but it has a number of shortcuts and other ways of dealing with difficult situations. In a situation where choice is overloading the systems, System One reverts to a no-choice response. This might look like delegating the choice to somebody else: if we can delegate the choice, the risk of making the wrong decision is lowered, because we can blame it on somebody else. It might look like going for some sort of safe or no-risk option, maybe a default, or something we've chosen before. Or it may even mean walking away from the decision altogether; we can't regret a decision we don't make. So choice overload can lead to a situation where people are not making informed choices, but are in fact just trying to minimise their feelings of regret about making a bad choice. It was principles like these that we identified as part of our research process. Having identified some of the general principles, we looked at the specific situations in which they occurred and sought specific interventions to address them. We tested these with a group of pilot clinicians and rolled them out more broadly following that pilot. The big question, I guess, that everybody wants answered is: can these interventions actually make a difference? And what we found was, yes, they could. We found that addressing some of the biases we saw in appointments really increased clinicians' confidence with the things they had previously reported as difficult. They found themselves much more confident minimising bias, discussing device choices and discussing price. They also said that they felt those interactions to be more meaningful and more comfortable, and they found these more positive conversations helped their clients to make an informed choice. We also found during the pilot that having those better conversations led to clients choosing a greater number of higher-level devices.
So comparing the pre-intervention period to the pilot intervention period, we actually saw an increase of greater than 50% in the number and proportion of clients going ahead with higher-level devices. And the most important people in the room, of course, are often the clients. The good news is that we also saw happy clients, so let's leave them with the last word. The clients felt a lot more comfortable within the conversations; they felt that they were being informed but not pressured to make a decision, and this was really important. I just want to give thanks to all of our BI researchers who've worked on this and other projects in the BI research group and who have helped with the research.