Hi all, good morning, and thanks for coming to this talk. Today we will explain how programmers and testers can be productive together, and we will share our own experience. Nancy and I have been working on multiple projects for the last nine to ten years, and we have faced a number of challenges. What do we do, and what principles do we follow, so that everything stays on schedule and we deliver a good product? Mostly we will share our experience. Let me introduce myself. I'm Anuj Singla, working as a principal software engineer at Red Hat. I mostly work on front-end technologies, and apart from my day job I also share my knowledge and educate people on YouTube.

Good morning, everyone. I'm Nancy Chan, a Quality Engineering Manager at Red Hat, with over 11 years of experience in software quality. As a QE manager, my team and I are responsible for overseeing the development and implementation of quality-control processes, and we make sure our products reach the highest quality standards. That's my quick introduction. Now Anuj will talk about what we're going to discuss today, and why we chose this topic.

Thanks, Nancy. Our agenda today: why we chose this topic; the top five conflicts and how we resolve them; collaboration between developer and tester, and when the two roles collaborate with each other; a couple of case studies from our own projects; a few suggestions from our side; and then Q&A. So, why did we choose this topic? Developer-tester collaboration is very important. We need to work together, and if the two roles do work together, productivity definitely increases, and customer satisfaction increases as well. Back in the 1990s, before agile, the waterfall model was used.
In waterfall, developers and testers had very little collaboration. I never got the chance to work with the waterfall model myself, but a friend of mine, Deepak Cole, shared his experience with me because he worked on it. At that time the tester worked at the end and the developer at the start. Once a requirement came in, an SRS document was prepared; one copy went to the developer, one to the tester, and there was very little communication between them. The tester wrote their test cases purely from the SRS document. Now imagine the tester is testing the whole product at the very end and finds some critical bug that requires an infrastructure change. At that point it was very difficult to make the change, because testing happened only at the end, and it delayed production. That's why the agile process came along. In agile, everything is about collaborating: we work in sprints, every fourth week we deploy our changes to production, and there are a lot of meetings going on, retrospectives, stand-ups, grooming sessions. All of that helps us grow and deliver our product on time.

Before I jump to the top five conflicts, I'd like to ask the audience a quick question. You have probably worked with different development teams, or interacted with QE and programming teams. In your experience, what are the reasons that create conflict between programmers and testers? Anybody? Yes? Right, I totally agree with your answer: miscommunication, and release pressure, because we always have deadlines by which we need to release our changes to production.
So if those are the reasons, miscommunication, time constraints, pressure, then how does this impact a team's efficiency and productivity? Anyone? Yes, please? True, you are right. The question was how this impacts the team's efficiency and productivity, and she said that sometimes we fix a bug, test again, find another, and it becomes a loop. I agree: it creates frustration, it increases errors, and it also impacts overall job satisfaction and team morale. Why? Because these two teams are in conflict.

So here are the top five causes of conflict, based on my experience working with different teams. It isn't accurate to say that testers and programmers can never be productive friends, but there are scenarios and situations where the relationship suffers because of these challenges.

The first one is competing goals. Programmers and testers have different goals and priorities, as per their jobs. Programmers are mostly focused on developing and delivering the software, while testers mostly work on finding bugs and making sure the software is of the highest quality. A quick example of how this plays out: quality versus delivery, or bug-free versus functional software. Testers try to find as many bugs as possible, aiming for bug-free software, while on the other side the programmer really wants to release the changes to production as quickly as possible.
So if the tester says, "We have a bug related to UI/UX, some font or alignment is off," because they are prioritizing customer satisfaction, the programmer may say, "We can take that later, let's focus on releasing quickly." That's where the conflict arises.

The next one is the blame game, which is very common. Let me give you a real-life example about new-feature testing. When new work comes in, we already have an existing system and we are just adding a new feature to it. So what do testers usually do? They pick that chunk and test it. But with new-feature testing we also need to cover the regression part. Regression testing is nothing but checking that the new change has no impact on existing functionality, which is very, very important. And if the tester finds a regression and reaches out to the programmer saying, "That's a regression," the programmer might argue: "No, that's not part of this work. You need to focus on the new feature. Why are you going into regression? I'm sure this is not because of my change." That kind of arguing is very common.

The third one is the communication gap, a major point. Communication gaps happen when a tester finds a bug and logs it in a tracking tool, we have Jira and Bugzilla, but doesn't explain every detail. How does this play out? The programmer picks up the bug and is unable to see the problem; they say, "It's not working on my machine," which again is very common. Conflict arises because the programmer says the tester hasn't shared the exact steps to reproduce, so they can't work on a solution.
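This "steps to reproduce" gap can be reduced mechanically by requiring a minimal bug-report template before a ticket is assigned. Here is a rough sketch of the idea; the field names below are illustrative, not the actual Jira or Bugzilla schema:

```python
# Sketch of a minimal bug-report checklist. The required field list is
# an example; real teams would tailor it to their tracker.

REQUIRED_FIELDS = (
    "summary",
    "steps_to_reproduce",
    "expected",
    "actual",
    "environment",
)

def missing_fields(report):
    """Return the required fields that are empty or absent."""
    return [f for f in REQUIRED_FIELDS if not report.get(f)]

report = {
    "summary": "Checkout button misaligned on Firefox",
    "steps_to_reproduce": "1. Add an item to the cart 2. Open /checkout",
    "expected": "Button aligned with the form",
    "actual": "Button overlaps the total field",
    "environment": "",  # left blank: this invites 'works on my machine'
}

print(missing_fields(report))  # -> ['environment']
```

A check like this, run when the bug is filed, turns "you didn't give me enough detail" from an argument into a routine validation step.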
So again conflict arises, and due to the lack of information the resolution is delayed.

The next one is limited interaction. This depends on the team structure and the project, but there is a high chance that the programmer and tester teams get very few opportunities to interact. And if they are not interacting, how will they understand the end-to-end business strategy behind the work they are doing? If they work in silos, they have minimal opportunity to connect, and that also creates conflict.

The last one is time conflicts. Both roles have totally different priorities and goals, as I already explained: programmers mostly focus on delivering things on time, while on the quality side there is a chance the tester says, "We are not ready, we are not confident enough to sign off on the release, we need extra time." Again conflict arises, because the tester needs more time before release.

So those are the top five causes of conflict, as per my experience. And here is a very basic scenario you can all relate to: the developer saying, "It works on my machine," and the tester saying, "No, no, it's not working, you're wrong somewhere." And the other image: the tester saying, "That's why I'm filing a bug," and the developer replying, "No, it's not a bug, it's a feature, it's an improvement." So that's how it goes. Now, how can we solve these problems? Coming from a developer, I think it will have more impact. So, over to Anuj.

Yeah, so Nancy has shared a couple of problems, and there are a lot of problems between developers and testers.
Now we will see how to solve these types of problems. In my view, developers and testers should be friends. They should sit together; after COVID that's not always possible, maybe we are not going into the office, but when we can, we should go in and collaborate with each other.

The first step is supportive language. Always support your tester or developer. For example, if the tester finds some last-minute bug, don't blame them: "What are you doing, man? Why didn't you find it at the start? Why are you coming in now?" At least thank them: we found that bug before production. If the customer had found it, it would have created a negative impact. So always support them. James, a well-known testing guru, says that testers don't hate developers; it's like a wife telling her husband, before he goes out, that there is a stain on his shirt, so he won't feel embarrassed later. Developers and testers should be productive friends: they should know each other, and know each other's strengths and weaknesses. One more example: if the tester is not able to reproduce an issue, help them reproduce it, give them some system logs, because as developers we know what the inputs are. If we provide that input, maybe it reproduces the issue. Provide that information so they can find the issue and log it properly.

The second one is the defect triage meeting. We follow this: bugs come in every week from many places, from testers, from developers, from multiple teams, from customers as well. So every week we hold one meeting where we discuss the priority, whether something is actually a bug or not, and what the solution is. We put those basics into the ticket along with the story points and the defect's difficulty level.

And the code freeze. It's very important, guys.
A tester is a human; they can't test each and every part of the app on demand. If we keep pushing code until the very end, with production tomorrow and changes still landing today, how will the tester test everything? They need to run regression across every part. These days automation covers a lot of it, but we still need some manual testing too. What we follow: every four weeks we do a production push. For three weeks we push our changes to the testing environment, and in the last week we give the testers a full week to test each and every part. During that week, if they find any critical issue, we just fix that issue and push it to the testing environment; apart from that, the full week belongs to the testers.

The next one is resource alignment. In the IT world a resource crunch is very normal: somebody is joining the company, somebody is leaving. The standard ratio of developers to testers is around 3:1, depending on the product and the team's experience, but mostly that's the standard. So if there is a resource crunch, then as developers, push less code to the testing environment so the testers get time to test things, or help out: we can test another developer's code, or test our own more carefully. We follow these practices too whenever we have a bit of a resource crunch.

On the topic of resource alignment, I would also add that the balance between developer and QE headcount is very important. Why? Because an unbalanced team, for example five QEs to one developer, is not good, right?
And vice versa, five developers to one QE means a lot of pressure on that one person. So as a QE manager I always sit down with the project team and decide what the ratio should be. When we talk about the industry ratio of dev to QE, it should be around 3:1 or 2:1, depending on the project, its complexity, the timelines, and what we have committed to our stakeholders. And don't just go to your manager saying, "We have a crunch, help us"; the manager can't always do anything. If we have a crunch, we need to manage it ourselves.

Next is pair programming and pair testing. I follow this one: I always try to share my knowledge, including architecture knowledge, with the testers, because if we share our knowledge they won't come to us with small, small bugs like a data issue or a field not showing in the UI. They can analyze it themselves, or ping the backend team directly: "I'm not able to see this field, please take a look." That saves a lot of time for both developers and testers.

So now we have seen how to solve these problems. Next: when do developers and testers collaborate most? The first is shift-left testing. In shift-left testing, always try to involve the tester in the initial phase, so that once the requirement comes in and we are analyzing how to deliver the product, the tester is already there. The tester thinks like a customer: how will the customer use this product? If they provide their suggestions at that stage, then while we as developers are designing the architecture, they are sharing their knowledge with us.
And we use their knowledge and their use cases while designing the product or any feature. If we think through all the scenarios at the start, it will definitely help us later, when new requirements come.

Then there is defect reporting. It's a normal thing for the developer to say, "I'm not able to reproduce this issue." So when the tester reports a bug, they should provide everything: attach a video, attach screenshots, provide the proper steps. That way, when the developer starts working on it, they have the information: "I follow these steps, and I can reproduce the issue." At least they won't come back to the tester saying, "Man, I can't reproduce this, what can I do?" It saves a lot of time for both sides.

The next one is recognition and appreciation. At Red Hat, we always recognize anybody who helps us. We give reward points and thank-you notes, so the person feels appreciated: they helped us, and we recognized them.

Now we're going to discuss a few case studies. Before I do, I would just like to call out that these case studies are real scenarios which we are actively using in our teams, with collaboration between the development team and the program team. The first one is implementing a CI/CD pipeline, which resulted in improved software quality. As part of this case study, I will discuss how these two roles help each other, in particular how the tester can help the development team during the development phase. First, you can see how developers usually work.
Developers usually work on the requirements and build. So how can testers contribute during the build or development phase, alongside the programmers? They can add an automation suite. The automation suite is nothing but a smoke suite, which gives us confidence that the basic, critical functionality of the software is working fine. What we do: when a developer is building their code, before merging into the master branch, we integrate our automation suite with the unit tests. So along with the unit tests, our automation suite runs. If the basic functionality covered by the suite is working fine, the change is ready to go as a merge request. Otherwise the pipeline detects the problem and shares a report with the developer, so that in this early phase the developer can take a look: what is the problem, and what regression has this new change introduced? It also saves the testers time: they don't need to worry as much about the basic scenarios, because those have already been tested as part of the build process.

The next one is cross-functional collaboration for comprehensive testing. Comprehensive testing is very important, because in the agile model we make quick changes and move them to production. How do we achieve cross-functional collaboration? We usually conduct a bug bash. A bug bash is an event where we invite representatives from the development team, the program team, and different QE teams. They set aside their normal jobs for a while and start testing the piece of work we have planned to release. Take the example of an e-commerce site planning to release major changes to production.
We conduct a bug bash event where representatives from the development and QE teams come together and do exploratory testing: finding bugs and exercising different scenarios, for example the shopping-cart workflow and the payment workflow, to make sure we cover all the scenarios related to the UI. At the end of the event we call out a few winners on the basis of defect quality, severity, and the detail they provided. Every organization has some mechanism to show appreciation; it could be reward points, it could be gifts. That is how you can encourage cross-functional collaboration.

The next case study is about active participation in the feedback loop: how you can involve testers there. Customer satisfaction is very, very important in software development, and if we act on customer feedback in a timely manner, we increase overall customer satisfaction as well as product quality. As you know, testers and QEs are known as customer advocates, and involving them definitely has a very good impact on how feedback is handled. Every team and every product has a different mechanism for gathering customer feedback: it could be a direct email, a form attached to the system, or an account team who connects with the customer so the customer shares details directly. We gather all the feedback in one place, and we usually have monthly meetings where we discuss the overall feedback we have received. Then developers and testers both contribute and decide together what needs to be prioritized.
We log tickets in our ticketing system and work in parallel: developers work on fixing things, while testers start working on the test plan. Then we deploy, test, and inform the customer: "This is the feedback we received, we have fixed it, and it's ready to use." That is how testers can also contribute as part of the feedback loop. I'll hand over to Anuj to talk about the overall development and feedback flow.

So Nancy has shared the three principles we follow. Guys, if you are not following these, try to. CI/CD is very important: before pushing our code changes we test things so they don't create a regression. The feedback loop is very important too; we follow all of these.

Now let's discuss how we build and deploy features. This is a full diagram I made to explain how we involve different people: testers, developers, UX engineers, and the PM level. The first stage is requirements. Requirements come from many places: the PM level, analytics, or the customer. After that we analyze the requirements. In that phase, all the team members are present: testers, developers, UX engineers, and PM-level managers. Together we decide whether we need to do the work at all, what its priority is, and how the customer will use the feature. The tester gives their suggestions, thinking like a customer; as developers we consider whether it's feasible; and the backend team also weighs in on how we will deliver it, whether the UI needs backend support or data from the backend. In this way we analyze each and every part.
Once we have figured that out, we go and build the feature. UX gives us the design and then we build it. After that comes testing, and in the testing phase we as developers also help the testers: we test things together and share our knowledge. Then, before going to production, we give the customer a demo, and if it's a critical or brand-new feature, we provide a pre-prod environment and a URL so they can do some UAT testing. Then we deploy the feature, and we continuously take feedback from the customer, which is very important. In this way we build the product, improve its quality, and keep improving. That's the process we follow.

Now, a couple of suggestions from our side for everyone. First, try to build a network of testers and developers: go to offline meetups, go to conferences, as I am doing here. I meet a lot of people this way and get to know how others are using different technologies and products. Second, continuous learning. It's difficult; as a front-end developer I see new things every day, new libraries, and there are a lot of frameworks in the UI, so it's hard to keep up. But try to stay current with whatever tool you are using: when a new feature comes out, learn it, think about what it enables. Always keep updating yourself. And be kind under stress: in IT, burnout is a normal thing, right?
We have a lot of work to do, but try to step back and take a break; it's always a good thing. Don't start blaming people; just take a break. And take feedback in a positive way. I always follow this: I take feedback from multiple people, and if somebody gives you negative feedback, take it positively. Thank them, because at least they found something in you to improve; work on it, and in the end you will feel better for it. Build a personal brand: I follow this rule by writing blogs and educating people via YouTube, whether about web development or anything else. Try to do these things; they help a lot.

With that quote we will wrap up our talk. The quote means that testers and programmers should work together rather than compete with each other; by sharing and collaborating, with their combined knowledge and skills, they can create a better, more productive product. Thank you. Any questions?

Okay, so the question is how we maintain quality while pushing changes to QA. If you look at this diagram, what we do is: when we merge our changes, when we create our MR, the code is built, and if everything is fine we run the unit tests and the automation tests. The testers share their automation suite with us; they run their automation tests, and in GitLab we also run the unit tests. So suppose you have made a one-button change: we can be confident this button will not create any regression, at least in the existing critical functionality. For unit tests we also follow the rule that we always write unit tests, otherwise we don't merge any code. That's the rule we follow.
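The merge gate described in this answer, and in the CI/CD case study earlier, can be sketched in a few lines. This is a minimal illustration under stated assumptions, not our actual pipeline: the `run_*` functions and `fake_login` are hypothetical stand-ins for the real GitLab CI jobs and application calls.

```python
# Sketch of a pre-merge gate: run the developers' unit tests and the
# testers' smoke/automation checks, and allow the merge only if every
# check passes. All names are illustrative stand-ins, not real jobs.

def fake_login(user, password):
    """Hypothetical stand-in for the application's login flow."""
    return bool(user and password)

def run_unit_tests():
    """Stand-in for the unit-test job the developers own."""
    return []  # a real job would return a list of failure reports

def run_smoke_suite():
    """Stand-in for the testers' suite covering critical flows."""
    failures = []
    # Example smoke check: a small UI change must not break login.
    if not fake_login("demo-user", "demo-pass"):
        failures.append("smoke: basic login broken")
    return failures

def merge_gate():
    """Collect failures from both suites; empty means merge allowed."""
    return run_unit_tests() + run_smoke_suite()

if __name__ == "__main__":
    problems = merge_gate()
    for p in problems:
        print("problem detected:", p)
    # A non-zero exit status is what blocks the merge request in CI.
    raise SystemExit(1 if problems else 0)
```

The point of returning failure reports rather than just pass/fail is exactly what the diagram shows: when a problem is detected, the developer gets something concrete to read before deciding whether the fault is their change or a flaky test.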
Once everything is working, at least the existing functionality, we go ahead with the merge request and deploy to the testing environment. Then the testers do their own testing, manual or whatever they choose, and after that we go to production. That's the flow we follow. And if you see the red "problem detected" sign here, in the unit tests: if there is some issue while running our changes, the developer takes a look: is it because of my change, or is there an issue in another unit test or in the automation tests? Then we connect with the tester. That's the rule we follow. Also, when we execute an automation run, we get a report which explains what is not going well. Developers can read it, because we have already shared the whole workflow and framework with them, along with the integration with the unit-test report.

Okay, the question here is how we spread this collaboration across teams, because there is a high chance some teams are not following this, right? We usually have a QE team, a development team, and a program team, and we hold monthly, or sometimes twice-a-quarter, meetings with the leads. There we have a bidirectional discussion and exchange of feedback between the QE and development sides. As a QE manager I have one-on-ones with my team members to see whether they have any feedback for the developers, because they are open with me; they can share whatever challenges they are facing. They discuss it with me, because I don't want developers and QEs to end up in direct personal conflicts.
So as a manager I act as a moderator, and I connect with the development manager as well as the program manager to make sure: this is what we are facing, how can we fix it, how can we fill this gap? And vice versa, if the development manager has feedback, say there is a resource crunch, or last release we got a critical bug in production, then we discuss how to improve. As leaders we discuss it internally and make sure it is a bidirectional conversation. I usually follow a monthly connect, because we follow agile, or Kanban, with two- or three-week sprints; that is how we actually work.

A second way is the retrospective meeting. If something is not going well, we discuss it in the retrospective, raise the points, and assign action items, and we follow up on them: okay, we will take care of that next time. A first-time mistake is a normal thing, but repeating it is bad. So the retrospective is a good thing, and we also have weekly grooming meetings and daily stand-ups. We follow these practices, and that way we keep maintaining quality. Anyone else? I think you are raising your hand. Okay, anyone in the back? Any other questions?

Okay, so I'll share my experience, and maybe Anuj will too. The question is how we can encourage pair programming and pair testing. A few years back, Anuj and I were on one team, and just before the release I found a very critical bug. We were supposed to release within a few hours; I was just doing the smoke testing. When I informed Anuj, he said, "Okay, Nancy, come to my desk, I'll show you what's going on." He explained the code to me and then said, "Nancy, you sit, and you fix it."
I said, "How can I fix it? That's your job." He said, "It's just a missing full stop; we missed that, and that's why the blocker is happening." So I sat at his desk, he walked me through the whole code, and I fixed that bug. That is how you can encourage it. Developers should pass on these small, small tips; we also encourage that culture in the team. They explain how a tester can read the console and the network logs, because if something is failing, before directly logging a bug we usually first check the console: are there any errors? What do the network logs say? There is a chance it's an internet connectivity issue, a network connection problem, and that's why it's failing. Learning these things and collaborating with developers helps a lot.

Okay, so I follow the same thing: I always share my knowledge. If a tester comes up and says, "I'm not able to see this field," I ask, "Have you thought about why? Have you checked the network call? Are you getting any console error?" They think it through, and then I explain: "We are doing this and this, and that's why you are seeing the issue." At least in the future they won't come back with the same question. A second thing we follow: every month we have one guild meeting. In that meeting everybody shares their own experience; if somebody did something good, they explain, "I did these things," and everybody is there, developers and testers alike. That's the second way. And the third way is through automation tests.
If we have some bandwidth, a day, or five or ten hours, we tell the testers, "Could you pick one automation test? Maybe I can help you," or we review their code and they review ours. Sometimes they can't get to everything, so we ask, "Is there any scenario still pending from your side? We can write the unit test for that," so it doesn't surface later, because testers think like customers: they have a lot of scenarios we don't think of. And in this way we always help them decide on the framework: which framework is good, what its benefits are, what the best practices are. There are a lot of things like this that we follow. This is just my suggestion, guys. Thank you.