Hi everyone, my name is Deepak Kaul, and today I'm going to be talking about four testing lessons from US Airways Flight 1549, the flight that landed in the Hudson. It's very famous; it even had a movie made about it, called Sully. I actually read the National Transportation Safety Board report on the incident, and while reading it I thought there were a lot of lessons in there, not just testing lessons but leadership and life lessons overall. So I tried to draw parallels with how we as testers can learn from that incident and maybe build some habits into our daily work. I really appreciate you all taking the time to attend this session here at DevConf, and I hope I can make it worth your while. So here we go.

A little bit about me. My name is Deepak Kaul. I'm a QE manager at Red Hat, based out of Red Hat's Pune office in India. I've been at Red Hat for more than nine years, and before that I spent around five years at a company called PTC, which is based in Boston's Seaport district.

All right, moving on. Just take a few seconds to read this. It's quoted from that National Transportation Safety Board report I mentioned earlier. The shuddering sound it describes actually gives me shudders just thinking about it. So what does it tell us? Modern jet engines are certified to handle bird strikes, up to a point. When this incident happened, commercial jet engines were certified against multiple bird hits with birds of up to about 2.5 pounds, or a single larger bird of around four pounds. What they do in the engine test facilities is fire dead birds at running engines to check that the engine does not lose power, or fall below around 70% of its thrust, when it is hit by those birds. But the flock that hit Flight 1549 was migratory Canada geese, and they can be big. They're not just ordinary birds. I mean, nobody's ordinary, right?
But still, they're not small birds; they're big. A male Canada goose can weigh 8.5 to 9 pounds. Plus, it was a big flock that hit Flight 1549. The damage they did was that they not only hit the turbine blades of both engines, they also hit the central core. And then the flight had to land in the Hudson, as you all know. There were no casualties; some people got minor injuries, according to the report. The good thing was that both pilots, Captain Sully and First Officer Skiles, did a tremendous job, as we all know. The crew, the flight attendants, also did a great job of evacuating passengers in time, because the air and the water were very cold; it was January in New York. And the safety crews on the ground also did a good job of getting to the incident site and ferrying passengers from the aircraft to warmer safe zones. But again, birds can be terrible, especially Canadian ones, as you can see here.

So how does this draw a parallel to testing? Not very long ago, we were testing the file attachment system for Red Hat's customer support site. While doing that, we also had to do some benchmarking and performance testing: what is the biggest file a customer can attach on the support site? 100 GB? Maybe 200? The person building the system doesn't own the business, so they don't really know. And one day, I remember, I was sitting with one of the managers in support delivery, and they told me that a telco customer had very recently put an attachment on a case that was 800 GB. That changed my whole worldview of attachment sizes. The lesson we learned as testers from this is that in the pursuit of releasing software on time, we should not cut corners on talking to actual users.
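That attachment-size story can be sketched as a simple boundary test. This is a minimal, hypothetical illustration: the function name, the assumed 200 GB limit, and the size list are invented for the sketch; only the 800 GB figure comes from the story.

```python
# Hypothetical sketch of boundary testing attachment sizes.
# All names and the assumed limit are invented for illustration.

GB = 1024 ** 3

def validate_attachment_size(size_bytes: int, limit_bytes: int) -> bool:
    """Return True if an attachment of this size would be accepted."""
    return 0 < size_bytes <= limit_bytes

# The limit we *assumed* before talking to support delivery...
assumed_limit = 200 * GB

# ...versus sizes real customers actually send (800 GB: the telco case).
real_world_sizes = [100 * GB, 200 * GB, 800 * GB]

results = {size // GB: validate_attachment_size(size, assumed_limit)
           for size in real_world_sizes}
# The 800 GB case is rejected by the assumed limit -- the test data,
# not the user, was wrong. Talking to users fixes the test data.
```

The point of the sketch is that the interesting input, 800 GB, would never have appeared in the parametrized sizes at all without that conversation.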
And if talking to users is not possible, as was the case with me, at least talk to the people doing customer-facing work. They are probably your best chance of getting first-hand knowledge about how that piece of software is going to be used. In this case, the Canada goose equivalent was that 800 GB file I was talking about. Had I not casually talked to that support delivery person that day, I might still not have known that an 800 GB attachment was a reality. Sometimes your worldview is limited by the stuff you do in your day-to-day work. But the people who actually use your software every day are the ones who know the actual shape and size of the data they deal with. So that is lesson number one: work with actual users if possible, and if not, work with the people who deal with those users day to day, to learn what kind of test data your application is going to consume or generate.

Okay, moving on: task saturation. Even though we remember this incident as something that was successful in the end, if you read the report, it clearly says there was a lot of structural damage to the aircraft, as well as some minor injuries and a couple of serious injuries, including to one of the flight attendants. And that could have been prevented. The thing is, as the captain and first officer were descending toward the river, they were getting two kinds of warnings. One was the terrain warning, because the aircraft was very close to the ground when it should not have been. The second was about airspeed. But that day, at that time, the terrain warning overrode the airspeed warning. So during that crucial minute or two, neither pilot got any airspeed warnings.
And that meant they landed at a speed faster than they should have, which made for a very hard landing; there was significant damage to the aircraft, and the impact of the landing caused injuries to passengers and crew.

So how do we see this pattern in a testing job? We have automated test results and monitors for multiple applications, and we get failures. Especially in test automation, flaky results and false positives are very common; you cannot get away from them as long as you are testing the front end or end-to-end APIs. False positives are a common reality. So there are times when you have to make a go/no-go decision for a product release, and you have a ton of false positives, or something else taking your attention away from the real problem. What we normally think is that multitasking is something we all do. As managers, we put people on multiple projects at the same time, attending multiple standup meetings and so on, probably to make sure they feel challenged. But at a psychological level, multitasking is a myth. Every time you multitask, you context-switch between two or more tasks, and by some estimates you can lose as much as 80% of your productivity. The thing is, in critical moments, say a release, or a flight landing, you need 100% of your attention on the problem. And because go/no-go is a big decision, you need the right data in your head to make it an informed one. That is where false positives from a ton of the test suites you have written can push you toward a bad decision. So take care with your tests. Make sure the tests you write are not flaky.
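One way to keep the critical signal visible in a noisy run is to triage failures by priority before a human looks at them. A minimal sketch, with all names and priority values invented for illustration:

```python
# Hypothetical sketch: surface the highest-priority failure first,
# so one critical failure is not buried under dozens of low-priority flakes.
from dataclasses import dataclass

@dataclass
class TestResult:
    name: str
    priority: int  # 1 = highest priority
    passed: bool

def triage(results):
    """Return only the failing tests, highest priority first."""
    failures = [r for r in results if not r.passed]
    return sorted(failures, key=lambda r: r.priority)

# One critical failure hidden among 40 low-priority flaky failures.
results = [TestResult("checkout_total", priority=1, passed=False)]
results += [TestResult(f"tooltip_{i}", priority=5, passed=False)
            for i in range(40)]

top = triage(results)[0]  # the checkout failure, not a tooltip flake
```

Without the sort, the one failure that should drive the go/no-go decision is just item 1 of 41 in an unordered list.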
Even if some flakiness is a reality, make sure you have prioritized your tests so that you are not distracted by low-priority failures, with that noise concealing the one high-priority test that is failing, the one you don't know about because 40 other low-priority tests are failing alongside it. So that is lesson number two: prioritize things in the right order, so that when you have to make a decision as a tester, your automated tests put the right data in your hands.

Okay, moving on to lesson number three: training. Everyone I've met has a completely different view on training. Some people totally despise it as a boring, mundane activity; some people like it. But talking about this flight, Captain Sully and First Officer Skiles had around 40,000 hours of flying experience between them. I was watching an interview with Captain Sully, and he said that since Skiles had recently done Airbus training, he was able to go through the checklists and procedures faster than almost anyone else could have at that point. And that gave Captain Sully time to decide and focus on where to land the plane. There was this complementary action going on between them, because they didn't have time to talk to each other and discuss what to do; it was that kind of situation. So the point I'm trying to make is this: if you are trained well enough in your craft, you are well equipped to make informed decisions in a moment of unforeseen circumstance. You cannot always train for unforeseen circumstances, because they are by definition unforeseen. Even these two pilots were not trained to do a water landing.
Nobody is trained to do a water landing in flight school or the simulators, but because they were trained in their craft of flying planes and knowing that particular kind of aircraft (let's call that context, in testing terms) they did it well. That's the difference. I also read the book The Hard Thing About Hard Things by Ben Horowitz, and one thing he wrote has stuck with me: being too busy to train is like being too hungry to eat. With that remark, I just want to emphasize the point of training. If you train well in your craft, then when a moment comes where you need to make a critical decision, you will be better equipped, because of what we might call expert intuition, or heuristics in testing terms, or muscle memory in sports terms. All of these are developed by training well in your craft. So lesson number three is training. If you are a tester and you are not training yourself on different situations and different ways of doing testing, then you are probably not equipped to handle a situation where you have to make a critical decision about a release. So lesson number three for testers: train as much as you can. Try to test different kinds of applications, whether manually or by writing automated checks, however you want, but do it in a comprehensive way. Train yourself comprehensively.

All right. By now you know that the major cause of the incident was that the plane's engines were tested and certified for birds of a smaller size, but the birds that hit the plane that day were bigger. But did those birds appear out of nowhere, or were they always crossing the skies at that height, at that time of day, or during a particular period of the year? What were the migratory patterns of those birds?
Are those heavier birds found around airports largely because of the ecosystem of food present there? All these questions, right? And there are different bodies involved: there is air traffic control, there are pilots, there are airport authorities, and then there are the regulatory and certification bodies that develop the standards. I think the point is, when you look at software development, we also have all these discrete units of wisdom. There are developers who code, then there is project management, there might be product management as well, there is business analysis, and then there is engineering management. But the one entity that is supposed to look for the implicit, the implicit requirements, the corner cases, is testers, actually. We are the ones who have to look for those corner cases. There is a ton of implicit requirements, and only a bucketful of explicit ones. So how can we do that? First: if you are a tester who is also an engineer, your engineering mindset, your engineering identity, normally takes over your tester identity. That has to change first. As humans we carry a whole set of identities. For example, if you talk about me, I have an Indian identity and a tester identity, but somewhere in between I also have a problem-solver, or engineer, identity. But if you are a tester, your tester identity should at all times override your problem-solving or engineering identity. And how can you tell which one is in charge? When you look at a ticket, a Jira, if you are trying to solve the problem straight away, if you look at the Jira as a set of instructions to do something, then you know you are looking at it with a problem-solving or engineering mindset.
But if you look at a Jira or a ticket as a placeholder for carrying out further discussion and investigation, then you are probably thinking as a tester. Obviously that slows you down; you are not going straight into action, there is more deliberation than action. But even if it slows you down, that's okay. In the end, you don't want to embarrass yourself and your whole team. The point is: look at your Jira tickets and tasks as placeholders for carrying out further discussion on the topic. And discussion does not only mean talking to other people. You do have to talk to people, that's for sure, but it also means self-talk and reflection. Reflection about the problem: what kind of problem is it? Do I know enough? And investigation, not interrogating the people, but investigating the software itself, interacting with the software. There are so many things you can do; you are all testers and you know better than me. But the point is: never, ever look at a Jira ticket or a story as a set of instructions. That is lesson number four, and that is how you find the implicit. In this whole incident, finding the migratory patterns of those big birds and accounting for them was the implicit requirement. If you look back at it, there should have been someone, some entity, a government body or whatever, that knew there were bigger birds in the sky, in the flight paths, while the certification of jet engines at the time only went up to 2.5 pounds. That does not make any sense. But the thing is, in this siloed world where each team or entity thinks only about its own work, there never is such a body to connect the dots. And that is where, in a software context, testers come into the picture. That is why the tester role is so important to talk about.
All right, I think we are done with our four lessons. A quick recap. The first lesson: whenever you are testing, think about what kind of data your application is going to consume and what kind of data it is going to generate. Pay attention to it. The second: prioritize your tests. Kill the noise and surface the top information you need to make decisions. The third: training. If you are a tester, test complex applications, test in different contexts, and eventually you will become a great tester just by keeping at it. And the fourth: shun your problem-solving mindset and let your tester mindset take precedence when you are looking at a problem. Okay, that's it. Thank you.