Welcome, everyone, to the Design for Cognitive Bias: Using Mental Shortcuts for Good Instead of Evil session by David Thomas, founder of David Dylan Thomas LLC. A quick intro about David: David Dylan Thomas is the author of Design for Cognitive Bias, creator and host of the Cognitive Bias Podcast, and a 20-year practitioner of content strategy and UX who has consulted for major clients in entertainment, publishing, finance, and retail. We're glad David can join us today. Over to you, David. Thank you so much, and thank you everyone for joining today. So today we're going to talk about design for cognitive bias: using mental shortcuts for good instead of evil. I am David Dylan Thomas, and I run a company where I basically go around getting people excited about, and giving them tools for, more inclusive design. And as you just heard, I wrote this book, Design for Cognitive Bias, which came out last year, and the road to this book began with a podcast I used to do called the Cognitive Bias Podcast. Now, the road to the podcast begins with this woman. Her name is Iris Bohnet, and she gave a talk once called Gender Equality by Design. Fantastic talk, you can find it on YouTube, highly recommend you check it out. And in the talk she makes the point that a lot of implicit racial or gender bias can often come back to something as simple as pattern recognition. The idea is, let's say you're hiring a web developer, and the image that might pop into your head when I say "web developer" might be a skinny white dude. Not because you think that men are better at programming than women, far from it, but the pattern you may have grown up with, looking at movies, television, maybe even offices you've worked in, might start to make that equation. Now, if you see a name at the top of a resume that doesn't quite fit that pattern, all of a sudden you're giving that resume the side eye.
So when I saw that something as, you know, terrible as racial or gender bias could sometimes come back to something as simple, and dare I say human, as pattern recognition, I decided I needed to learn everything I possibly could about cognitive bias. And so I did. This is the RationalWiki page for cognitive biases. There are several hundred different biases here, and I realized I'm not going to figure this out in a day. So I basically took one a day, and then the next day I'd move on to the next one. And this inevitably turned me into the guy who wouldn't shut up about cognitive bias. And so my friends were like, Dave, please, just go get a podcast. Now, it's worth establishing from the jump: what is cognitive bias? At the end of the day, it's a series of shortcuts that your mind is taking just to get through the day. We have to make something like a trillion decisions every single day. Even right now I'm making decisions about where to look, what to do with my hands, how fast to talk. And if I thought carefully about every single one of those decisions, I'd never get anything done. So it's actually a good thing that a lot of our lives are spent on autopilot. The problem is, sometimes the autopilot gets it wrong. And we call those errors cognitive biases. So here's a fun one. It's called the illusion of control, and it happens when you're playing a game where you have to roll a die. If you need a really high number, you tend to roll the die really hard. If you need a lower number, you tend to roll it pretty gently. It makes no difference how hard you roll the die, but in situations where we have no control, we like to feel like we have control, and we embody that by how we roll the die. Now, this one is not so fun. It's called confirmation bias. It's when you get an idea stuck in your head and you really only look for evidence to support that idea. And what do you do if you ever see evidence that doesn't support the idea?
You ignore it and you move on. A really powerful example of this came during the lead-up to the Iraq war. The coalition's story was that Saddam Hussein had weapons of mass destruction and we needed to get in there before he got us, and it seemed like a very compelling argument. As it turns out, not so much with the weapons of mass destruction. Within a year of the war starting, the president of the United States, who had insisted there were weapons of mass destruction there, admitted we didn't find anything. Regardless, the number of people who believed that there were weapons of mass destruction there stayed very high. So much so that even 14 years later, here in the States, over 50% of Republicans thought that there were weapons of mass destruction there, and over 30% of Democrats. So this is an extremely powerful bias, and we will be coming back to it. Now, these biases are pretty tough to fight. Part of the problem is you may not even realize you have bias. There's literally a bias blind spot, where you think you don't have any biases, but you're sure that everybody else does. Part of the problem is that about 95% of cognition is happening below the threshold of conscious thought, right? You're making these decisions so quickly, you don't even realize you've made them. So the next time somebody asks you why you did something, the most honest answer you can give is, how the hell should I know? Even if you know about a bias, you'll probably still do it anyway. There's a bias called anchoring. The way it works is, I could ask everyone watching this to write down the last two numbers of your phone number. And then I could say, okay, we're going to bid on this bottle of wine. Those of you who wrote down a lower number are probably going to bid lower. Those of you who wrote down a higher number are probably going to bid higher. It's called anchoring, it's a thing. But here's the thing: I could tell you all of that before we begin the experiment, and you'd probably still do it.
In fact, I could say I will give you cash money not to do it, and you'd probably still do it. Now, the reason we need to care about cognitive bias is a thing called choice architecture. You can see it play out in grocery stores. Here in the States, if you go to a grocery store and you want to buy some produce, the common wisdom is you don't pick from the top of the barrel, because that's where the grocer is going to put the product they're most trying to move, right, the oldest product. They've architected that experience to benefit themselves. Now, they could just as easily have architected it to benefit the customer by putting the freshest produce where it's easiest to reach. But either way they architect that experience, it's going to affect the decisions you make within that experience. So think about what decisions your user needs to make. And then think about how people actually make decisions: not how a rational user would do things, but how someone who's making 95% of their decisions below the threshold of conscious thought is going to make decisions. And as it turns out, there are content and design choices we can make to keep some of these biases at bay, or maybe even sometimes use them for good, and that's what I'd like to talk to you about this evening. Let's go back to that "skinny white dude" pattern. As it turns out, in experiment after experiment, if you have two identical resumes and the only difference is the name at the top of the resume, if it is a male-sounding name it tends to keep going through the process, and if it is a female-sounding name it tends to stay in the pile. But here's the thing. Why do you need that information? What about the name is helping you, the hiring manager, figure out who to hire? It's like a signal-to-noise problem.
The signal, the thing that's actually helping you make the decision, is things like the qualifications or the experience. What might actually be getting in the way, based on patterns you may have grown up with, is things like the gender or the race, or really what you're reading into the name in terms of the gender and the race. Now, the city of Philadelphia did in fact do a round of anonymized hiring for a web developer position, and they discovered a couple of things. One, the best way to anonymize a resume, even in the high-tech world of web development, is to physically print it out, get a marker, and black out the identifying information. The next thing they discovered is that as soon as they found a resume they liked, they would go to GitHub, which is a code repository, to see the portfolio of that web developer. And the second they got to GitHub, they would see that web developer's profile, all the personal information would be there, and it would ruin the experiment. So, clever people that they are, they came up with a Chrome plugin that would anonymize that personal data as the page loaded. And just to complete the circle, they took that code and put it back on GitHub, and it's there now if you want to try this yourself. This isn't just about helping people make hiring decisions. Amazon had a hiring bot that was supposed to help them sift through thousands of resumes. It turned out to be very sexist. So sexist that if it even saw the name of a women's college on a resume, it would downgrade that resume. When they tried to figure out how this bot became so sexist, they looked at how they had trained the AI: they trained it on the previous 10 years of resumes sent to Amazon, which seems like a sensible thing to do until you realize what most of those resumes had in common. They were mostly from guys. So the AI took one look at that and said, gee, we sure must like dudes, and then just kept recommending dudes.
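To make the redaction idea concrete, here is a minimal sketch in Python of the kind of anonymization the Philadelphia team did with a printout and a marker. This is not their actual plugin (that ran in the browser, on GitHub profile pages); the function name, the sample resume, and the simple patterns below are assumptions for illustration only:

```python
import re

# The stand-in for the black marker.
REDACTED = "[REDACTED]"

def anonymize_resume(text: str, names: list[str]) -> str:
    """Blank out a candidate's name and contact details from resume text.

    `names` would come from whoever splits identities out of the pile;
    emails and phone numbers are caught with deliberately simple patterns.
    """
    # Redact each known name, case-insensitively.
    for name in names:
        text = re.sub(re.escape(name), REDACTED, text, flags=re.IGNORECASE)
    # Redact email addresses.
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", REDACTED, text)
    # Redact phone numbers like 215-555-0142 or (215) 555-0142.
    text = re.sub(r"\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}", REDACTED, text)
    return text

resume = "Jane Doe\njane.doe@example.com\n(215) 555-0142\n10 years of Python"
print(anonymize_resume(resume, ["Jane Doe"]))
```

The point isn't the regexes. The point is deciding, before a reviewer ever sees the document, which fields are signal (skills, experience) and which are noise (name, and everything a name implies).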
We have this illusion that AI is some sort of, you know, brilliant, objective, cold, rational thing, but in fact it's basically a glorified prediction engine. And if you point it at a racist, sexist world, it will make racist, sexist predictions. But this can go the other way as well. Amazon Go is a store where you basically walk in, take what you want, and leave, and Amazon will automatically deduct whatever you've taken from your account. This had a really curious effect for Ashlee Clark Thompson, who wrote an article for CNET called "At Amazon Go, no one thinks I'm stealing." What she's referring to is a phenomenon known as "shopping while black," which manifests as hyper-aggressive customer service. Can I help you? Can I help you? No, really, can I help you? In Amazon Go there was no one there to do that. So by removing one key design element, namely the staff, Amazon Go had created a less biased experience for this customer. We're even seeing this play out in the judicial system. In America the district attorney has incredible leeway when it comes to deciding whether or not to bring charges when a crime has been committed. And perhaps not surprisingly, when the offender is black they tend to bring charges more often than if the offender is white, even for the same crime. So, similar to the resumes, they realized, hey, why do you need that information when you're deciding whether or not to bring charges? They started creating crime reports that omitted the race and gender of the accused, the race and gender of the victim, and even the location of the crime, in an attempt to get a less biased decision out of the DA. Another really good term to think about here is cognitive fluency. It's the idea that if something looks like it's going to be easy to read, I'll assume that whatever it's talking about is probably easy to do. By the same token, if something looks like it's going to be hard to read,
I'll assume whatever it's talking about is also hard to do. Now, I have been making a lot of pancakes lately. This is a recipe for pancakes, and the text is kind of small, kind of clumped together. I take a glance at that and I think, you know what, I bet pancakes are hard to make, I don't know if I'm going to make pancakes. I take that same content, give it full-width imagery and smaller, more scannable chunks of text, and I might conclude, you know what, I bet pancakes aren't that hard to make. Maybe I'll make pancakes. A two-minute video? Forget about it, we are making pancakes. Now, you can think about this logic when it comes to deciding, do I want to drive or do I want to take public transportation. I take one look at that printed schedule and I immediately conclude, you know what, public transportation is impossible, I'm not going to do it. I take a look at the app screen and I think, you know what, maybe public transportation isn't that bad. Maybe I'll give it a shot. Now, I can't see your faces, so I'm going to ask you to vote in your hearts. How many of you think that Marie Curie was born in 1866? Okay, how many of you think that Marie Curie was born in 1868? Okay, you're both wrong, she was born in 1867. But the point is, people usually tend to pick 1868, because if something is easier to read, we actually think it's more true. And it gets worse. If something rhymes, we actually think it's more true. And this has consequences. Now, what's happening here is that we like things that are easier to process, and things that are easier to process feel more certain. I'll give you an example. If I asked you, what did you get for your fifth birthday, you got a toy truck, right? You might not feel too certain about that answer. It's hard to remember, it's hard to process, it does not feel very certain. If I said, hey, what did you have for breakfast this morning?
You might feel pretty certain about that answer, right? It's easy to remember, it's easy to process, it feels very certain. Things that rhyme are easier to remember and easier to process; they feel more certain. Things that use big bold fonts and clear imagery and clear language are easier to process; they feel more certain. Now, this is important when it comes to things that people need to believe. Here in the States we have a crisis where African Americans, generally speaking, do not believe health information that comes from the government. In a 2002 survey, when given the statement "the government usually tells the truth about major health issues like HIV/AIDS," only about 37% of African American respondents agreed with that statement. By the time you get to 2016, that number has dropped to 18%. Now, I could give you a whole other talk about why there are legit reasons African Americans have concerns about health information that comes from the government, but the fact remains: this is information that could save lives. So if it needs to rhyme, if it needs to use plain language and pictograms, so be it. Now, when I first put this in the book, my editor very wisely challenged me on this and said, okay, that's great in theory, but can you point to actual instances where plain language and pictograms have saved lives? And I'm glad she did, because it forced me to do the work and find these examples. You have a situation here where you have women who were smoking while pregnant, and when they were given materials written at the third-grade reading level, right, easier to process, they were more likely to stop smoking during pregnancy and stay abstinent even six weeks postpartum. Similarly, you have caregivers, people who are helping other people take their medication. When they had a plain-language, pictogram-based intervention, you saw decreased medication dosing errors and improved adherence to actually taking the medication.
Now you might think, okay, that's great for pictograms and plain language, but rhyming, really? Let's talk about Click It or Ticket. Here in the States we had laws passed that said if you do not buckle your seat belt, you can get a ticket. And the legislation on its own worked pretty well, especially with older drivers, but not as much with younger drivers. So they rolled out the Click It or Ticket campaign, and the results were that national belt use among young men and women aged 16 to 24 moved from 65% to 72% and from 73% to 80%, respectively. And just to put that in more human terms: for every percentage point you go up in people actually buckling their seat belts, about 270 lives are saved. So, doing the rough math, that's about 4,000 lives saved, in part through rhyming. It is silly, but it works. The whole easy-to-pronounce thing even extends to names. There's literally a name-pronunciation effect, which says that if your name is easier to pronounce, things can go well for you; it can affect things like voting preferences and occupational status. Now, with the caveat that easy to pronounce is a culturally specific thing, right: a name that's easy to pronounce in one culture can be hard to pronounce in another. But if your name is easier to pronounce in the culture you're in, you've kind of got it made. There's a study of law firms where the higher up you went in the ranks of the firm, the easier the names got to pronounce. It even affects stocks. If you have an IPO and your stock is about to launch, and the name of your company is easier to pronounce, it tends to perform better. In fact, if even the little stock ticker abbreviation is easier to pronounce, it tends to perform better. So we will put real money down on things that are easier to process. Now, the biggest bias in the world, for my money, is the framing effect. And it starts out innocently enough. Let's say someone goes into a store and they see a sign that says "beef: 95% lean," and next to it is a sign that says "beef: 5% fat."
Which one, you know, are people going to line up for? It's the same thing, but I framed one way to make it seem a little more appealing. And here we're talking about beef, but what if I were to say, should we go to war in April or should we go to war in May? You see what I did there? We're no longer talking about whether it's a good idea to go to war in the first place, and wars have been started over less. Now, if you are bilingual or multilingual, you have a secret weapon against the framing effect. If you think about the decision in your non-primary language, you are less likely to fall for the scam. I speak just a little bit of French. So if I try to think about the beef decision in French, it would go something like: let's see, beef, that's bœuf. 95%, that's quatre-vingt-quinze... no, no, wait. And by the time I've gone through all that processing, I can see right through the scam. Now, as it turns out, you can use the framing effect for good. There's an experiment where you show a photo like this to an audience and you ask them, should this person drive this car? And what you get is basically a policy discussion. Some people will say, people like that are bad at everything, don't let them drive, and other people will say, how dare you, just let people do what they want. And all you learn by the end of that conversation is basically who's on what side. Now, you can show the exact same photo to another audience and ask, how might this person drive this car? And what you get is basically a design discussion. Some people will say, what if we change the shape of the steering wheel, or what if we move the dashboard? And what you learn by the end of that conversation is several different ways that person might be able to drive that car. By changing two words, by changing the frame of the conversation, I've changed the entire conversation. In fact, what if I zoom out a little more and ask, how might we do a better job of moving people around?
Because that's why the person was in the car in the first place. She was here, but she wanted to be over there. And if you frame it this way, things like public transportation are on the table. Now, the framing effect can even counteract bias in student evaluations of teachers. In student evaluations, very typically, female professors will be rated lower than male professors for the exact same coursework. So Iowa State University did an experiment where they gave the students two paragraphs before they filled out the survey, which basically said just that, right: this is an area where women are discriminated against; when you are making your evaluation, please make sure that you're using real criteria and not things like appearance. The students who got those paragraphs rated their female professors higher than the students who did not. Now, I want to close by talking about our biases, because these are the ones that can really get our users in trouble. Another one to really consider here is notational bias, and it's the bias that shows up when you think about things like sheet music. I grew up playing the saxophone, and I came to believe that any music in the world could be written down like this and then I could play it. But the truth is, there is a lot of music, a lot of Asian music, a lot of African music, for which this system just doesn't work. And if you make this the default, it is very easy to erase all sorts of culture. Now, we're more used to seeing this when we ask for personal information. If I have it in my head that there are only two genders and I create my forms that way, it's easy for me to erase any number of identities. By the way, while we're on the topic of asking for personal information, you should know about the self-serving bias. This is a bias where if something goes well, that's to my credit, and if something goes poorly, that's your fault.
And it plays out with computers as well. If I'm doing something on a computer and something goes wrong, I will blame the computer; if something goes right, I'll credit myself. Unless I've given that computer a lot of personal information. The more personal information I give the computer, the more likely I am to blame myself if something goes wrong, and credit the computer if something goes right. So we need to be very thoughtful about how often we ask for personal information, because the more we do it, the more likely we are to create an unhealthy dynamic between people and their technology. Notational bias plays out all the time in structured content. Until 1986, the New York Times prohibited the use of the word "Ms." as an honorific for women. The way this would play out is, at the first mention of a woman's name in an article, you would say her full name, and then at every mention after that you would say either "Miss last name" or "Mrs. last name." And the pattern this would set up, and remember how important patterns are, the pattern this would set up is that the most important thing to know about a woman is whether or not she's married, then maybe her last name. So think about that, in article after article, for over 100 years. We need to be very careful how we structure our content and the kinds of editorial guidelines we create, because those are an easy way to scale bias. Language doesn't just describe reality; it shapes it. And this is legally true. Back in the day, in the United States, we had Vice President Dick Cheney, who was basically trying to figure out what he could get away with. And he asks his lawyers, and they tell him, look, you are the Vice President, which means you are a member of the executive branch. But you also cast the tie-breaking vote in the Senate, which means you are part of the legislative branch. But wait a minute, you can't be part of the executive branch and part of the legislative branch. So maybe you're not a part of either.
And if you're not a part of either, well then maybe you don't have to follow the rules of either. And he didn't. The same thing plays out today with companies like Facebook. When it suits them, Facebook will position itself as a publisher: hey, New York Times, come publish with us, we get you. When someone points out that there are rules for publishers, all of a sudden it's, I don't know what you heard, but we're a platform. Now, there are tools out there to help us think about how to write more inclusively. The Radical Copyeditor is excellent for trying to understand how to write about folks who usually don't get a lot of say in how they're written about. Another one is Textio, which is really good for writing more inclusive job descriptions, because not everybody wants to be a rock star or a ninja. Another term to think about here is evidentiality, and it's something that some languages have but English, not so much. English tends to use verb tense basically to let you know when something happened, right? Bob went to the store. Bob is going to the store. Bob will go to the store. Other languages, like Turkish, make their verb tenses do a little more heavy lifting. They're there to let you know how I know that Bob went to the store. So there's one verb form for "I personally saw Bob go to the store." There's a whole other verb form for "somebody told me that Bob went to the store." And then a whole other verb form for "I don't know, I read on the internet that Bob went to the store." Point is, I can't tell you that Bob went to the store without also telling you how I know that Bob went to the store. Now think about how that would affect what people were willing to post, what people were willing to say in a speech, if they literally couldn't say it without also saying how they knew it to be true. Now, English lacks this as a mandatory feature, but I think as designers we can be more creative about how to introduce it.
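To make that design idea concrete, here is a small sketch of what a mandatory evidentiality field might look like in a content model. Everything here, the `Claim` type, the three evidence levels, the rendering, is hypothetical, loosely modeled on the Turkish verb forms just described:

```python
from dataclasses import dataclass
from enum import Enum

class Evidence(Enum):
    """How the author knows the claim, modeled on Turkish evidential verb forms."""
    WITNESSED = "I saw it myself"
    REPORTED = "someone told me"
    READ_ONLINE = "I read it on the internet"

@dataclass
class Claim:
    text: str
    evidence: Evidence  # mandatory: a Claim cannot be constructed without its source

    def render(self) -> str:
        # A design system could style these levels differently,
        # e.g. visually downgrading rumors relative to firsthand facts.
        return f"{self.text} ({self.evidence.value})"

post = Claim("Bob went to the store", Evidence.REPORTED)
print(post.render())  # Bob went to the store (someone told me)
```

The interesting design choice is that `evidence` has no default: like the Turkish verb, the claim literally cannot be expressed without saying how you know it.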
So remember how I said that if something is easier to read, it tends to be more believable? By the same token, if something is harder to read, it tends to be a little less believable. This is an article that came out when the first teaser trailer dropped for the new 007 film. The first paragraph is basically confirmable information: this is the name of the movie, this is who's in it. The second paragraph is a rumor about why the first director got fired. First paragraph: fairly easy to read, fairly easy to believe. Second paragraph: a little bit harder to read, a little bit harder to believe. And it should be, it's a rumor. So I think we can start to get creative about how to use design to communicate to our users just how believable their content is. Now, I told you we'd come back to confirmation bias. For a long time I had a misunderstanding about how the scientific method worked. I thought it was: I have an idea about how the world works. I'm going to test that idea, and if I get a good result, a whole bunch of other people are going to try the same thing, and if they get good results, great, write that up as a law, let's move on. After talking to some actual scientists, I found out it's a little more complicated than that. I have an idea about how the world works. I'm going to test that idea and write down what I did. If I get a good result, a whole bunch of other people are going to try the same thing. If they get the same result, great. Then I get to spend the rest of forever trying to prove myself wrong. I have to ask myself, if I'm wrong, what else might be true? Okay, let me go and try to prove that. That is a much more rigorous approach, and it was designed specifically to fight confirmation bias. Now, as designers, it can be very easy for us to leave good design on the table because we fall in love with our first idea. Let me show you just how easy.
I'm going to play a game with the computer. The computer is going to show us a few numbers and a question mark and say, put whatever number you want where the question mark is, and I'll tell you if that number fits the pattern. Put in as many numbers as you like, and when you're ready, tell me what you think the pattern is. If you're like me, the first number you try is eight. And the computer says, congratulations, that fits the pattern, would you like to try another number? And if you're like me, you say, nah, I got this, the pattern is even numbers. And the computer says no. And the reason it says no is because I never tried to prove myself wrong; I never tried an odd number. The pattern is not even numbers. The pattern is simply that every number is higher than the number that came before it, which is a more elegant solution, probably easier to code, probably cheaper to build, but I never got there because I was so in love with my even-numbers idea. Now, there are tools to help us avoid this outcome. One of them is called red team/blue team. The idea is that you have a blue team who does the initial research, maybe gets as far as a wireframe, but before they go any further, the red team comes in for one day, and their job is to go to war with the blue team. They're there to figure out every hidden assumption, every more elegant solution, every potential cause of harm the blue team missed because they were so in love with that first idea. Now, I like this approach because it's fairly cost-effective. I don't have to go to my boss and say, hey, from now on we're going to hire two teams for every job and they've got to check each other's work every single day. I need one team for one day to make it a little less likely that we're going to put something harmful out into the world.
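The number game above is a version of a classic confirmation-bias experiment, often run with the starting sequence 2, 4, 6. Here is a minimal sketch of the computer's side of the game; the starting numbers and function names are assumptions for illustration, but the hidden rule is the one from the talk, each number higher than the one before:

```python
def fits_pattern(history: list[int], guess: int) -> bool:
    """The hidden rule: each number must be higher than the one before it."""
    return not history or guess > history[-1]

def play(guesses: list[int], start: list[int]) -> list[bool]:
    """Run a sequence of guesses against the hidden rule, growing the history."""
    history = list(start)
    results = []
    for g in guesses:
        ok = fits_pattern(history, g)
        results.append(ok)
        if ok:
            history.append(g)
    return results

# Shown 2, 4, 6, a player who only tests even numbers gets confirmation
# every time and walks away believing the wrong theory...
print(play([8, 10, 12], [2, 4, 6]))
# ...while a single disconfirming probe (an odd number) also fits,
# revealing the rule has nothing to do with evenness.
print(play([7], [2, 4, 6]))
```

Notice that the only way to discover the real rule is to try a number your theory predicts should fail, which is exactly the "try to prove myself wrong" step the scientific method builds in.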
A great tool for this is called speculative design, and it's kind of like the show Black Mirror, which, if you've never seen it, is basically a Twilight Zone for tech: you take some near-future technology and you tell a story about what would happen if actual human beings got their hands on it. And the outcome is always horrible. I think anybody working on a new technology should, by law, have to write a Black Mirror episode about it first. But this is a real job. Superflux went down to the United Arab Emirates to help them figure out the future of energy. And the question on the table was, do we stay on the road of fossil fuels, or do we start investing in renewables? So they said, okay, look, let's figure out what your air quality is going to be like if you stay on the road of fossil fuels: five years out, ten years out, twenty years out. But they didn't just figure it out. They bottled that air, and then they made them breathe it. And by the time you get to even ten years out, it is unbreathable. So by the end of that engagement, the UAE announced they were going to invest over $150 billion in renewables. Another tool you can use here is called an assumption audit, and it's a good way to get all your biases out on the table before you begin a project. What you do is you get your team in a room, and before you begin, you ask these five questions. One: what identities does your team represent? And you only identify as you feel comfortable, but you're going to think about things like gender or age or income. Then you ask: how might those identities influence the outcome of this thing we're working on? Then you ask: okay, who's not in the room? And then you ask: how might that lack of perspective compromise the design of this thing we're working on?
And finally you ask the most important question: what might we do to include, honor, and give power to those voices during the design process? And I choose those words carefully. Include: yes, talk to people. Honor: maybe pay them for their time. Give power: are there people who are going to be impacted by this work who have no say in how this work turns out? And how do we give them more say? Because they have to live with what we build. The final bias I want to talk about is called déformation professionnelle. Told you I speak French. It's the bias where you see the whole world through the lens of your job, which, in the workaholic world a lot of us live in, might seem like a good idea. Right up until it's not. The paparazzi who ran Princess Diana off the road that night probably thought they were doing a good job. And technically speaking, they were: they were getting really difficult-to-get shots that were going to fetch them a really high price. What they weren't doing such a good job of, though, was being human beings. Now, the former police commissioner of Philadelphia, a guy named Charles Ramsey, when he took the job, asked all of his police officers the same question: what do you think your job is? And many of them answered, to enforce the law. Okay, that sounds like a reasonable answer. But what if I told you your job is to protect civil rights? Now, that encompasses enforcing the law, but it's a much bigger job, because it forces you to treat people with dignity. Now, I've had this slide in here for a while, and every year it gets harder to get through, but I keep it in here because it reminds us that now, more than ever, we need to define our jobs as a matter of life and death. He was telling his officers that day that their jobs were harder than they thought. And I'm here to tell us our jobs are harder than we think. It's not just "design cool stuff." We have to find a way to define our jobs that allows us to be more human to each other.
There are folks already working on this. Mike Monteiro of Mule Design has a little red book that is basically a Hippocratic oath for designers. The Design Justice Network does amazing work in this area, and they've got these ten core principles. I'm just going to read you the first two. One: we use design to sustain, heal, and empower our communities, as well as to seek liberation from exploitative and oppressive systems. Two: we center the voices of those who are directly impacted by the outcomes of the design process. Now, if you just try to stick to these first two, you've got your work cut out for you. It's the difference between what Erika Hall calls user-centered design versus shareholder-centered design. We often think we're doing the former when in fact we're kind of doing the latter. Now, what encourages me about this work is that the problems we're facing are not new problems. The study of moral philosophy and ethics has been with us for literally thousands of years. And the Markkula Center for Applied Ethics does a really good job of taking all that wisdom and distilling it into key questions you can ask about your work. Which option will produce the most good and do the least harm? That's the utilitarian approach. Which option leads me to act as the sort of person I want to be? That's the virtue approach. All this wisdom is here. We just need to include it in our process. Another great tool for this is the Tarot Cards of Tech. It's this website you go to, and there are these cards, and you click on them and they flip over and give you these really powerful questions about what you're working on. How might cultural habits change how your product is used, and how might your product change cultural habits? If Twitter had asked itself this before it launched, we might be living in a very different world today.
Even software engineers are getting in on this. There's the Never Again pledge, which a bunch of software engineers and data scientists got together and signed when they saw that their work with data was being used to hurt immigrants. And they pointed to a very long and sordid history of data being used to hurt excluded populations and said, we don't want to be a part of that history. And here's a list of things that we're willing to do to make sure that never happens, up to and including destroying unethical data sets. Another collective action is even playing out at places like Google. We had Project Maven, which was a battlefield AI. When the people working on it saw what it was, they said, look, we didn't get into this business to build weapons. If you make us work on this, we're going to walk. And Google backed down; they walked away from a $250 million contract with the military. And then they turned right around to Dragonfly, which is a censored search-in-China product. So this battle is still being fought. "We must rapidly begin the shift from a thing-oriented society to a person-oriented society. When machines and computers, profit motives and property rights are considered more important than people, the giant triplets of racism, materialism, and militarism are incapable of being conquered." Now, this isn't some software guru giving a TED talk. That's Dr. Martin Luther King Jr. Over 50 years ago he saw this to be true, and it is only more true today. So the question I will leave all of us with is this: how can we define our jobs in a way that allows us to be more human to each other? Thank you. Thanks, David, for this exciting session. Are they angry, are they happy, are they sad? Right, it works extremely quickly. System 2 is more for, like, what's 1,072 times 556? Right, you can't just snap-judge that; you've got to really think it through. Most biases are operating at the System 1 level, which works like a snap decision: I've decided this thing.
So System 2 is better for thinking things through, and thinking about a decision in your non-primary language is a way to force yourself into System 2, because you have to think slower. When you're thinking in the language you're fluent in (literally the word "fluent," right), it's quick: I don't have to think about the words I'm saying right now, I can just start talking. If I had to give this talk in French, I'd have to slow way down, because I can't think that quickly in French. So any way that you can add some molasses, any way that you can slow down your thinking, is a good way to fight bias. Great, that perfectly makes sense. Thanks, David. So we have one more question, from Mr. Vikram: how do you work in an environment where there is confirmation bias and you are not a stakeholder? So yeah, to be honest, I have a whole other talk about this. The third chapter of my book is about stakeholder bias, and the basic premise is that, just like us, just like our users, our stakeholders are making 95% of their decisions below the threshold of conscious thought, so they've got their handful of biases too. And so there are any number of biases you can be aware of to navigate that relationship. They're kind of wrapped up in confirmation bias. One of the ways you can get around that is to think about something called loss aversion; this is just one example. Loss aversion is a bias where it hurts twice as much to lose something as it feels good to gain something: it hurts twice as much for me to lose $10 as it feels good for me to find $10 in my couch cushions. Large organizations especially tend to major in this, because the bigger they are, the more they have to lose, and the more risk-averse they are.
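The "losing hurts about twice as much as gaining feels good" asymmetry David describes is usually modeled with the prospect-theory value function from Kahneman and Tversky. Here's a minimal sketch; the parameter values (loss-aversion coefficient λ ≈ 2.25, diminishing-sensitivity exponent α ≈ 0.88) are the commonly cited estimates from their 1992 paper, not figures from the talk:

```python
# Prospect-theory value function (Kahneman & Tversky).
# Losses are weighted roughly twice as heavily as equivalent gains.
LAMBDA = 2.25  # loss-aversion coefficient (commonly cited estimate)
ALPHA = 0.88   # diminishing sensitivity to larger amounts

def subjective_value(x: float) -> float:
    """Perceived value of an outcome: a gain if x > 0, a loss if x < 0."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** ALPHA)

gain = subjective_value(10)    # finding $10 in the couch cushions
loss = subjective_value(-10)   # losing $10
print(f"gain feels like {gain:+.2f}, loss feels like {loss:+.2f}")
# The loss hurts LAMBDA (2.25x) as much as the equivalent gain feels good.
```

This is also why the "here's what we lose by sticking with the legacy software" framing in the next answer works: the same $10 (or the same staff attrition) carries more psychological weight when it is framed as a loss than as a forgone gain.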
So when someone is feeling like that, it's actually better to point out the downside of not taking the risk than the upside of taking the risk. So if they're locked in confirmation bias around, let's say, some legacy software that they've been using, that everybody hates, but that no one does anything about because they've got confirmation bias that there's no other way to go: rather than come in and say, hey, I have this new software, and if you get it people are going to be so happy and we're going to make all this money, it's actually more effective in that case to say, hey, if we keep using this legacy software, here's how many people are going to quit. Here's how long it's going to take to replace them. Here's how long it's going to take to train the replacements. Here's how much productivity is going to go down while all that's happening. So when you finish painting that picture, getting the new software seems like a great idea, right? And so that's just one of the many biases that stakeholders tend to have, but it gives you an idea that it's about understanding what their priorities are and what they're thinking about, and how you navigate that. Very well said, actually; that's very true. We have another question from an attendee, who says your talk is very impressive and asks: is there any bias which could be called captivation bias? I don't know, but I suspect there is. I mean, there's definitely just the fact that I am one of the biggest heads on the screen right now, right? That's already biasing, just visually; that's drawing you in. I'm a very practiced speaker. I don't have a lot of "ums," and so there's a rhythm to what I'm doing. So if you think about the impact of music, or if you've ever seen a stand-up comedian, I'm using a lot of those same techniques. I don't know the formal names of any of these things, but yeah, I am entertaining you. So if you think of it that way, like if you go see a show, and you're captivated.
Yeah, there are certain techniques that are coming into play that keep your attention. I don't know them specifically in terms of biases, but there are definitely these things that we respond to as humans. And you can see the same thing in web design: a really well-designed web page kind of draws you in, moves you through the page, focuses you. All of those are very basic cognitive things that are happening. So I think collectively you might call that a captivation bias, because it's a lot of little biases that are working together to keep your attention. But yeah, I'd say there's probably very much something like that. Thanks, David, thanks for this. So I think that's pretty much all the questions we have from the participants. Thank you all for attending the session.