Cognitive biases play a large role in software engineering, for example in estimation tasks, in group effects, and generally in perception, for instance when you debug your code. One example, taken from a textbook and concerning time estimates (how long will the project, or a feature, take), which you can look up in the slides: imagine your boss coming to you because he or she wants an estimate of how long a certain functionality will take. Just by phrasing things in certain ways, your boss may, consciously or not, push you toward answers that are shorter than they should be. For example, if he or she asks, "How long do you think this little task will take?", the word "little" signals that it is a small task that shouldn't take very long, so you will adjust your estimate downward and make it shorter than it should be. If your boss asks, "Do you think it will take more than six months?", those six months get stuck in your head, and even if the task might actually take two years, or two months, you will have a very hard time getting away from them; your estimate will end up much closer to six months. And finally, if your boss tries to motivate you, saying, "You know, this functionality is really important for our company, so it would be really good if it were done earlier", this will again, in many cases unconsciously, push you toward much shorter estimates. So it is very important to be aware of these effects, and maybe the most important thing is to first accept that they happen. They happen to everyone; we are all susceptible to these biases, and no one is really immune to them. So, on to the first one, which is why we start with it.
It's the so-called confirmation bias. This is a bias we can also observe in politics and everyday life, and in the filter bubbles we nowadays talk a lot about in the current COVID situation. Confirmation bias is the tendency of an individual to ignore information that goes against their beliefs and to pay overly much attention to sources of information that support those beliefs. Take Republicans and Democrats in the US, for example: in these very heated debates, if you are on one side, it is really hard to understand why the other person is not following your facts and the sources you give them. Both sides may be suffering from this bias, because they basically only look at the sources that confirm whatever they already think and ignore the others. Similarly, in political arguments in general, someone with a very strong opinion on something will have a hard time accepting other information, even if there is evidence, even if it is scientific work. It doesn't matter. So this is an extremely strong bias that gets us to follow whatever we already believe.

What we have in software engineering, when it comes to confirmation bias, is for example something called positive testing. That is the tendency to write tests that confirm your own way of thinking about a certain functionality. For example, you are testing a function, and you end up writing tests that mainly cover the positive cases, the ways you think someone should use the function. The classical example is that the customer or user runs it for the first time and it instantly crashes, because they don't use the application in the way you envisioned, simply because you had the idea, "Okay, this is the way it should work."
"This is the way the person, the programmer, will use my functionality." So we tend to do positive testing.

We also tend to search known documentation. Imagine you are trying to find information on, for example, how to use a library. We always favor things we already know, so we rather go back into chapter one of the documentation, which we have already read, because we think there might be something there, even though other parts of the documentation might be much more useful; maybe chapter three is exactly what we are looking for. But we tend, unconsciously of course, to look in the areas we already know to some extent.

We see similar effects when it comes to changes. We tend to resist large change requests. If, for example, the customer, or an outsider on GitHub, comes and says "we need to change this", and it is a very large change, a lot of work, then you will unconsciously resist it. You will try to find arguments for why this change request is not necessary, or why maybe something smaller could be done instead. That is of course very important in a supplier-customer relationship: the customer thinks "we really need this change", and the supplier will try to come up with arguments for why it is not needed.

And finally, we like to do trade-off studies, for example trying out two architectures to see which one is better. We usually already have an opinion, so we tend to ignore the results of trade-off studies if they go against that opinion. Again, we will try to find arguments for why the architecture we favored in the beginning was better to start with: "Why did we even do this study? It was clear anyway."

Importantly, I am not just writing these effects down; for all of them there exist studies that have looked into this in detail, so it is not like I am making this up.
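To make the positive-testing effect above concrete, here is a minimal sketch (the function `parse_age` and both tests are hypothetical, invented just for illustration): the first test only exercises the inputs the author had in mind, while the second also probes the inputs a real user will eventually send.

```python
# Hypothetical example: a small input parser and two ways of testing it.

def parse_age(text):
    """Parse a user-supplied age string into an int."""
    value = int(text)            # crashes on "abc", "", "3.5", None, ...
    if value < 0 or value > 150:
        raise ValueError("age out of range")
    return value

def test_positive_cases():
    # Positive testing: only the inputs the author imagined.
    assert parse_age("30") == 30
    assert parse_age("0") == 0

def test_negative_cases():
    # Negative testing: the inputs we did not envision.
    for bad in ["", "abc", "3.5", "-1", "200"]:
        try:
            parse_age(bad)
        except (ValueError, TypeError):
            pass                 # rejected cleanly: good
        else:
            raise AssertionError(f"{bad!r} was accepted")

test_positive_cases()
test_negative_cases()
```

If only the first test existed, the code would look well tested while still crashing on the very first malformed input from a real user.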
There is sufficient evidence indicating that this really happens a lot. So the question is: what can we do? The answer is sadly not that easy, since, as I said, this is an extremely strong bias and you can observe it throughout everyday life. But there are some things that might help. Against positive testing, one thing we might try is test-driven development, which was mentioned earlier in the agile lectures: the idea that you write the tests before you have written the functionality. Then at least you are thinking a bit more about the use case and the requirements, and not only about how you implemented it. If you have already implemented a functionality, it is harder to get away from exactly how you designed it. To write a test first, you have to think about the interface of the function: the parameters, the return values, how it should behave. So you do put some thinking into that, although there is still a chance you will do a bit of positive testing anyway.

The other thing that is often mentioned is tracing, or traceability: the idea that you have traces, for example your code is linked to requirements, or the requirements are linked to tests. Then searching the documentation or discussing change requests might be a bit easier, because you cannot just ignore the traces. If tests are linked to one requirement, you cannot simply start searching in a completely different part of the requirements document. These are small things, but they are a start.

The other bias I'll mention in this video, and then the board is full, so I'll continue in another one, is the anchoring bias. This is the one from my example in the beginning, when I asked: do you think it will take more than six months?
We have the tendency to anchor ourselves, to get stuck on certain numbers or facts we have heard early on. It is really hard once you have those six months in your head. "Don't think about the pink elephant": it is hard to get away from it. That is what the anchoring bias is. In software engineering, as in many other kinds of projects, it is basically the tendency to stick to our initial estimates. For example, we estimate that the project will take six months based on very little data, and later on, when we have a much better understanding of the project, we could make better estimates, but we have a really hard time getting away from the initial one. We have our initial six months, and our future estimates will always stay close to that.

There is a lot of estimation going on in software engineering, and it is not only about planning time; it is about planning generally, perhaps in group discussions, and it is very easy to get anchored to certain numbers. One technique that addresses this a little is parallel estimation. In agile development there is, for example, a practice called planning poker, where the Scrum team (or the XP team) all reveal at the same time what they think a task or user story will take. Everyone says simultaneously, "I think it will take two weeks", "I think it takes one week". The idea is that if one person started by saying "I think it takes six months", all the others would get anchored to those six months. If everyone estimates at the same time, there is much less risk of that; you might get a much broader spectrum of estimates, and you can actually start discussing. So you avoid, a little bit, that single initial estimate at the beginning.

The other thing is that estimates are often based on expert opinion. We can try to get away from that and instead use some kind of model-based estimates, for example
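Before turning to models: a simple way to picture the planning-poker idea above is that all estimates are collected blind, and only then is the spread revealed and discussed (the names and numbers here are invented):

```python
# Hypothetical planning-poker round: estimates in days, collected blind.
estimates = {"Ada": 10, "Ben": 3, "Cho": 8, "Dee": 21}

# Only after everyone has committed do we reveal and compare.
low, high = min(estimates.values()), max(estimates.values())
print(f"Estimates range from {low} to {high} days.")

# A wide spread is the signal to talk: the outliers explain their
# reasoning, instead of everyone anchoring on the first number spoken.
if high >= 2 * low:
    print("Wide spread -> discuss assumptions, then re-estimate.")
```

Had Dee said "21 days" out loud first, the others would likely have drifted toward it; revealing simultaneously is what preserves the spread worth discussing.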
some kind of formulas that we can apply. You just put some numbers in, and then you are either not relying on human estimates at all, or at least they are not as critical. The overall duration of the project is then not just a person saying "I think it's six months", but the result of a number of fixed calculations that you do not have to estimate yourself.

So these are the first two biases. In particular, you should very much be aware of confirmation bias. It is extremely strong, and I'll keep mentioning that, because it really applies in most of everyday life, and software engineering is no exception.
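One classic instance of such a model-based estimate is Basic COCOMO, which computes effort from estimated code size alone; the sketch below uses the published coefficients for "organic" (small, in-house) projects, with the 32-KLOC input chosen arbitrarily for illustration:

```python
# Basic COCOMO, "organic" mode: effort from size, not from gut feeling.
def cocomo_organic(kloc):
    effort = 2.4 * kloc ** 1.05        # effort in person-months
    duration = 2.5 * effort ** 0.38    # schedule in calendar months
    return effort, duration

effort, duration = cocomo_organic(32)  # a hypothetical 32-KLOC project
print(f"{effort:.1f} person-months over {duration:.1f} months")
```

The size estimate (KLOC) still has to come from somewhere, so human judgment is not eliminated, but the final number is produced by a fixed calculation rather than by a single anchored guess.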