All right. Hello, everyone. So next up, we have an Ask the Experts panel on "Is Quality a Fault Line in Software Development?" And we have our experts, Anisha Narang, Deepak Kaul and Shreya Bansal. I guess I can try reaching out to her in the meantime. But yeah, you guys can get started. So yeah, take it away, folks. And audience, if you have any questions, please put them in the chat.

I will go ahead. Can you all see the screen? Yeah. Okay. So hi everyone. Welcome to the Ask the Experts panel session on "Is Quality a Fault Line in Software Development?" There is no perfect definition of quality. Quality is a perceptual, conditional and somewhat subjective attribute, and it may be understood differently by different people. When we talk about software quality, the perception of quality is different for everyone involved in the software development process. What the project management thinks, what the quality engineers think, what the customers think and what the developers think: quality has a different meaning to each one involved. And that's where it becomes a fault line in the software development process. A fault line is nothing but a divisive issue or a difference of opinion that is likely to have consequences. Let's say, in many countries, religion is a fault line. And in the context of software development, the consequences could be a delay in the release cycle or an extra cost to the company.

Hi everyone, I am Anisha Narang. And before we go ahead, let's take some time to introduce ourselves. Over to you, Deepak.

Hello everyone, my name is Deepak Kaul. I am a quality engineering manager at Red Hat and I have been with Red Hat for all these years now. I am also a bit of a quality engineering enthusiast and learner. Over to you, Shreya.

Hi everyone, my name is Shreya Bansal. I work as a QE lead at Red Hat.
My expertise revolves around test automation, test management and process improvements, and I am always striving to know more about things and to set higher quality standards. So yeah, that's me.

Hi, so I am Anisha Narang and I have been with Red Hat for over seven years now. I have worked with different test automation tools including Selenium with Python, Cucumber and Protractor, and I still continue to explore more in the world of test automation. I have also spoken at different tech conferences on topics related to quality engineering, and I had a chance to be at DevConf.US back in 2018. Other than that, I recently started with some self-guided yoga, and I've also done quite a bit of solo travel. So that's pretty much about me.

So Anisha, do you want to lead us in?

So if anyone has any questions, you can straight away drop them in the event chat. You know the topic, right? Is quality a fault line? Do you think quality is a fault line? Or is everything really going well between project management, QEs and developers? And while you think about questions, I think we should get started with the questions we had for each other, right? So maybe I can ask the first question. Deepak, this one is for you. Since we say that quality is a fault line in software development, and we have also been talking quite a bit about testing being context-dependent, what do you think about the context-driven school of testing?

So when we talk about the context-driven school of testing, that is slightly different from the one among the seven principles of software testing which says that testing is context-dependent. The context-driven school of testing is somewhat a change in ideology, which was developed by Cem Kaner, Bret Pettichord and James Bach in their book called Lessons Learned in Software Testing, if I remember the name correctly.
The context-driven school of testing thinks of all testing as exploratory. So there are two ways of looking at a process, right? One is the more mechanical, factory way. If you remember the industrial revolution and the automobile revolution, there was this thing called Taylorism, in the early 1900s I think, where the way a process was improved was based on certain measurements; it was also called scientific management. The other way is not believing in any of the ideologies, not believing in any of the best practices, and just trying to learn from whatever your context is and then improving on it, which is a more empirical way of doing things, right? Scrum is based on that, and the context-driven school of testing is also based on that. It tells you that software is context-dependent, and that's why a project can evolve in any direction possible. There is no best practice; there is a good practice, and that too only in a certain context. So you should think about that context and test in that context, by constantly learning about the software through interviewing people and by evaluating the software through experimentation. It's a more scientific, empirical approach of doing things. So the emphasis is more on the skilled tester and the product, whereas in an analytical school of testing the emphasis would be on processes, like building the pipelines and tracking all those metrics which we all know are present in our lives, right? The defect density and defect coverage, how many tests per kilo lines of code, stuff like that.

All right, so in the same vein, we are still talking about context-driven testing, right? What context are we talking about here, Anisha?

So I think when we talk about context, we can split this into three parts: the context can be functional, domain, or business. The domain should not be confused with responsibility.
Let's take the example of an airline application, right? If we talk about functional knowledge, say there are two different applications, one by Kayak and another by Expedia, and they have two different workflows, right? So the knowledge that a QE needs to have will be functional, like after what step...

I think we lost you, Anisha. Can you hear me? I'm sorry. Can you hear me now? We can hear you now.

So that's what we would call functional knowledge, which is very, very specific to a particular application: how a particular application works, what the workflow of that application is. Then we come down to domain. When we say domain, let's say a QE has already worked on a similar application which is also from the airline industry; that would be very domain specific. Two separate applications from the banking sector would be one domain, and airlines would be a separate domain. So that's where domain knowledge comes into the picture: we need to know how a particular kind of application needs to be tested. A banking application would be tested much differently from an airline application or maybe an e-commerce website. And then when we talk about business, business knowledge is about whether we understand the strategy of the company correctly or not. If we take the example of airlines, a few companies can have a strategy where they're offering luxury flight options, whereas another can have a strategy where they're offering cheap flights. So based on the strategy, the way you test the application will change. It's very important for QEs to have functional, domain and business knowledge, the business context, whenever they're testing an application, so that they do not lose sight of the whole intention of the application. That's what I think about it.
So one more, I'm sorry, one more connected question. So you said that there are three contexts, right? There is the functional context, which is the very granular, application-level knowledge, and then there is the actual domain context, where you belong to a certain market: maybe a CMS tester, or a CRM tester, or an operating system tester, right? And then there is the business context: what is your company strategy? When you say that a customer would really feel bad about this bug, do you even know your customer correctly? Who are your customers? Are they high-flying rich people, or are they middle-class people who are trying to grab cheap tickets on sale, right?

So when you were talking about this, I remembered one incident where, I think, being strong in only one particular context out of the three actually does a disservice to your overall testing. I'll give an example. Back in the day, I think 2009 or 2010, I was working in this company called PTC, which was a PLM product development company. There was this market-leading PLM tool called Windchill, and we used to hire QEs mostly from the competitors, because the thinking from the QE leadership was that we should hire QEs who have at least some domain knowledge, because PLM is a complex domain where some of the workflows remain the same. So let's say there is a cell phone. Every part of the cell phone, down to the least divisible part, the smallest nut, has all the associated CAD diagrams and data, the suppliers and the specifications; everything is in that system. So once we hired QEs from competitors, what happened was they used to come with their own domain knowledge, their own domain plus their functional context, right?
And it became very difficult to make them unlearn some of those things so that they could again start contributing effectively to our systems and our domain. Even though it was the same domain, they were so stubborn about some things that it sometimes used to cause a lot of arguments in the team.

All right, so what's next? The next question is also similar to what we have talked about so far, which is context. So what steps do you take to continuously rebuild that context?

So I think, as I said, software is not an exact science like physics. There is no Hooke's law, there is no tensile strength of steel, right? Any software project is mostly a social exercise: you talk to people, you get requirements, you must know your customer, and so on. It's more about people than about technology. So you cannot rely on best practices. Once you build your application context, the functional context, and you are part of a scrum team, you will automatically be in a position to rebuild that context as you make changes to your application iteratively, right? And then comes the domain context. Let's say, Shreya, especially for you, let's say you are working in the CMS domain, right? The customer portal. Drupal is one of the technologies which we use, right? So if I had to advise a Drupal QE on how to rebuild their domain context, I'd tell them to learn about other CMS tools as well, like WordPress or Joomla. What are the domain-level changes happening in that world? Maybe something very generic is happening in CMS which would apply even to Drupal. What are the real-world problems which Drupal has, and stuff like that. And then for the business context, I cannot recommend it enough:
Every person in a company, especially QEs, should know what their company strategy is. The strategy, cascading right from the CEO level down to the R&D level, should flow seamlessly to the last QE on the team. If you don't know which direction your company is going, you are lacking business context, which means you will file bugs which would be obstructive, which would cause friction with development and probably arguments with product management, and a loss of time and effort for everyone, because every bug has to be triaged, right? Yeah.

All right. So again, everyone who is hearing this, please, if you have any questions based on the discussion so far, you can post them in the event chat box. So Shreya, I have a question for you. Can I say something? No, no, no, I was just asking you to go ahead. So what do you think is software quality after all? Is it internal versus perceived? Is there a difference?

Internal versus external quality? I would say let's first understand these two things. Internal quality has to do with the way the system is constructed. It is a much more granular measurement than perceived quality. It could be things like whether the code is clean, how the code is structured, how many components you are using. So basically, internal quality can be measured through different predefined standards, and there are several tools for linting as well as unit testing; everything is there to help you with internal quality if we focus on it. Internal quality affects your ability to manage and reason about the program. Is your program able to cope with new requirements easily? Is your program efficient enough to deal with an inevitable increase in data volume? We start with a very small application, then we go forward and it becomes really huge; what is our plan for that infrastructure?
Or, is our domain logic decoupled from the framework, so that it can be updated without actually breaking the system? Or you can ask: do we have tests to guard the existing functionality? All these things constitute internal quality. Whereas perceived quality is defined by the user's perspective: the user's perspective on the application, on the program. For example, for a user it doesn't matter that you load 10 MB of data in two seconds, because they are going to see the two-second delay. From their perspective your site is slow. If the response appears as soon as the user hits the query, that is good perceived performance. Or we can take another example: we use Facebook and Quora in our day-to-day life, and they give you the perception that whenever you are scrolling, you have everything already loaded on the device in your hand. But is that actually so? It is loading in the background, and you just feel like you have it already. So as users we have this perception that the app is fast. Or take simple examples of loading buttons and progress bars. If I click on something and it doesn't show me anything, no progress bar or anything, and it takes some seconds to retrieve the data and show it on the screen, I will feel that there is something wrong with what I have done, or I'll be confused for a second. So these things combine into perceived quality: how the user feels about it. And if we talk about software quality, it's a combination of both: the actual metrics that we techies use, and the user's perspective. Both of these should be combined when we talk about software quality. That's what my perception is.

So I think this is that great debate of clearing technical debt versus shipping new features.
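The internal-quality practices mentioned here, tests that guard existing functionality and domain logic decoupled from the framework, can be sketched in a few lines. This is a minimal, hypothetical illustration (the `FareCalculator` class and its rules are made up for the example, not anything discussed on the panel):

```python
# Hypothetical example: domain logic kept free of any web framework,
# so it can be unit-tested (and replaced) in isolation.

class FareCalculator:
    """Pure domain logic: no framework imports, no I/O."""

    def __init__(self, base_fare: float, tax_rate: float = 0.05):
        self.base_fare = base_fare
        self.tax_rate = tax_rate

    def total(self, passengers: int) -> float:
        if passengers < 1:
            raise ValueError("at least one passenger required")
        return round(self.base_fare * passengers * (1 + self.tax_rate), 2)


# Tests that guard existing behaviour: if a refactor changes the
# rounding or the tax handling, these fail before a customer notices.
def test_single_passenger():
    assert FareCalculator(100.0).total(1) == 105.0

def test_rejects_zero_passengers():
    try:
        FareCalculator(100.0).total(0)
        assert False, "expected ValueError"
    except ValueError:
        pass

test_single_passenger()
test_rejects_zero_passengers()
print("all guards passed")
```

Because the class imports nothing from a framework, swapping the web layer (or the framework version) later does not touch these tests, which is the decoupling point being made above.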
If you have ever done some kind of stakeholder management with internal or external customers, you know that customers always want high performance and new features. They don't care about what your technical debt is, what platform improvements you want to make, or that you want to write more tests for two months, right? What they want is what is valuable to them, which is the perceived quality. And that reminds me of that funny example, and I don't know whether it is true or not, where it was alleged that Beats by Dr. Dre, those fancy headphones, used extra weights in each earpiece, so that whenever a user picks the headphones up, they feel very heavy and very high quality. Just to manipulate the perception of what a high-quality headphone is. That would fall under perceived quality. It's a good example.

So do you think that the price paid for the additional effort to increase internal quality is worth it?

Yes, I would highly recommend it. Obviously, providing the external quality is the reason we are building the programs in the first place. We always cater to external quality first, but we have to be conscious at the same time of the state of internal quality. That is what actually facilitates future growth. If we don't do it today, we will have to spend time on it tomorrow, and it will come back as a bigger problem later. If your system is robust, if you have small, flexible components that can be composed in different ways, if you have all these measures in place, you will find it easier to add or improve external quality. That's the thing we are delivering, and to deliver it we need internal quality to be up to date. So I think it's worth it, because it's a hand-in-hand process. You can't leave one and go for the other.
I think it's absolutely worth it to invest in internal quality, and it's already well established that the more technical debt you accumulate over time, the more it will eventually slow you down. So I totally agree with the fact that internal quality is absolutely worth the effort, and everyone should be paying some attention to it.

I do see we have a question in the chat which says: do you have any pro tips for developers to keep in mind to ensure that the software written is of high quality?

Yeah, there are certain things, like setting a clear expectation of what internal quality should look like. You can design the infrastructure, go back and review it, and keep adhering to that. If we are not working on internal quality, if we didn't put the proper infrastructure in place, then we have to take it into our next sprint, or whatever the cycle is, to improve it. And actually it's not just developers; it is the full team who should be contributing towards it. That's my take. I think you can also pour in your suggestions.

For developers, I'll again say three things. The first thing, again, is the context we talked about: the functional context, the domain context and the business context. I think that applies equally to developers as well. Even though we are talking as quality engineers right now, a developer with great domain knowledge is valuable. If you look at the investment banking companies and the financial sector companies, they always pick their new talent, even the developers, from the same domain. So there must be some value in having a developer with domain context, right? If you are a developer new to a particular domain, you should try to learn more about that domain; that builds the context. And then developers should at least, I won't say fight with product management, but probably recommend...
I recommend having a regular, designated technical-debt-clearing time every sprint, to clear the technical debt and work on platform improvements. As Shreya said, initially, clearing technical debt looks like a cost we are paying. But the thing is that over time it actually is a huge benefit. Let's say you think of your platform as the thing on which you will create new features and deliver them to customers. If we think of new features as the value, and of the platform as not something of value, and of everything we do to the platform as just regular maintenance and technical debt, that framing is what causes the problem here. I read a book called Peopleware, and it clearly said that every effort made into clearing technical debt is actually a means to high productivity. Essentially, quality is free for everyone who is willing to pay a high price for it. It's not something you get very easily, but when you pay a high price for quality initially, let's say for eight to ten sprints, over a period of time it becomes easier for you to be more productive: to deliver more features to the customer and actually increase the perceived quality as well, because for a customer a new feature is quality; that is the value. And the definition of quality is that quality is value to someone who matters, and in this case the person who matters is the customer. So these are my two cents for developers: build your context in all three areas, and ask for a designated time every sprint to clear the technical debt, even if it means writing more unit tests.

So when we talk... I'm not, I'm not a big fan of... I'm sorry.
So I'm saying that I'm not a big fan of metrics. As I said, software should not be scientific management and Taylorism; software, especially Scrum and agile, is more about self-improvement and learning from mistakes. So I don't think metrics play much of a role in that. Suppose someone gives me a piece of software and I can tell them only one character trait of that software: what would that be? Let's say there are two people. One person asks me, "Tell me the defect density of that software," and I say the defect density is 0.2 bugs per kilo lines of code, which is very low; that would count as near-perfect software. The second person asks me, "Tell me the stakeholder satisfaction score of the software," and I say it is around 9.8. Out of these two metrics, what does the defect density metric tell you about the software? Nothing, absolutely nothing, right? Whatever these metrics are, the burn-down charts or the scrum velocity and stuff like that, they are trash in my understanding. I'm not disputing anyone else's understanding, but I think metrics don't play much of a role, especially in software, because we are not into that kind of science yet. I don't know; maybe when we build spaceships they will be helpful. Or maybe when machines start writing code.

Yeah. So when we talk about developers here: what happens when a tester's vision of quality is different from the product management's?

That is very tricky. It again depends. See, one of you, either the project manager, the product manager or the tester, has a better understanding of all three contexts, right? And if you get into an argument on who has the better understanding of that context, then it will delay your delivery, right?
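The defect density figure quoted here is simple arithmetic: defects divided by thousands of lines of code, which is partly the speaker's point, since it is trivial to compute and says little on its own. A quick sketch, with made-up counts chosen to reproduce the 0.2 bugs per KLOC from the discussion:

```python
def defect_density(defects: int, lines_of_code: int) -> float:
    """Defects per thousand lines of code (KLOC)."""
    return defects / (lines_of_code / 1000)

# The figure quoted in the discussion: 0.2 bugs per KLOC,
# e.g. 10 known defects in a 50,000-line codebase.
print(defect_density(10, 50_000))  # 0.2
```

The same 0.2 could describe a delightful product or a hated one, which is why the stakeholder satisfaction score carries more information.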
So we have to work together to deliver; we don't have to argue together and delay the delivery. In that case, I think, by the very definition of the role, the product owners and product managers are the CEOs of the software, right? For that software, for that thing. Ultimately they are the decision makers, so at that point I don't recommend testers arguing and persisting with their "my way or the highway" scheme. They should file a Jira and let it be, and ultimately let the product manager or the product owner make the decision whether they want to fix it or let it be. I think that's the way everything should be done.

Yes, that's right. And I think it's also important for everyone on the team to have the same context and the same knowledge about the product, because if the tester's context is different from the developer's context, then, you know, a QE could say something is critical while the developers think it's not, and it just leads to an unwanted argument, which leads to a loss of time and a delay in the release cycle, which is something we don't want. So it's important for everyone to have the same context. That's what matters the most.

I completely agree. A shared understanding of the goal across the team actually aligns everyone's perception. And see, one more unwritten benefit of building context for testers is in how you write a defect report. Let's say you found something while testing an application. There are two ways of writing a defect report, right? Say you know your business well, you know your company strategy, you know your domain, and you know your application: all three of your contexts are top notch.
Then you will describe that bug in a way which actually conveys the risk to the product and the company, because you know your company's direction. You will probably be more articulate than you were when you didn't have those contexts, right? And when you are articulate, and you describe the bug report in a way which looks very serious, then it becomes very easy for the decision maker, the product manager or the product owner, to triage it and quickly get that bug fixed in the same sprint.

So can we wrap this up by saying that we as a team should be more assertive rather than stubborn?

We can say that, yes, though I think assertiveness is just hype, to be honest. If you are articulate, you don't need to be assertive; you will just say two lines, and that will mean everything to the person who is listening. Because crossing that line between assertiveness and authoritativeness, and then becoming an obstructionist, is very easy, especially in dynamic software development teams like ours, right?

Right. What else do we have? Any other questions? Anyone from the audience who has questions, please type them in the chat and we would be happy to answer. If not, I think we are only left with five minutes for the session.

All right then, I think it was a great discussion. I hope everyone enjoyed it, and if you still have any questions, or if you want to sleep on it and maybe have questions in a day or two, you can write those questions in the comment box when this recording goes to YouTube, or you can contact us on Twitter or any other social media platform. The slides have our Twitter handles. If you have any questions or any disagreements with what we talked about in today's session, feel free to reach out to us and we'd be happy to take the discussion further from there.
Thank you all for joining in and listening to us. I think the main key takeaway is: whatever percentage of attention we give to external quality, if we also give 20 to 30 percent to internal quality, that will be really great. I think that's the thing we can tell everybody. Let us know if you feel it's not right or if you want to talk more about it.

Thanks so much, folks, for the panel; it was very insightful. And audience, if you have any questions that you haven't asked already, I put the link to the breakout room in the chat, and hopefully our experts will be hanging out in that breakout room for a little while more, so you can go ask your questions over there. Thanks so much, guys. Thank you, everyone. Bye bye.