Thank you, everyone, for joining in today. Kristin Jackvony is with us, and she is going to talk about helping your whole team own software quality. Without any further delay, the stage is yours, Kristin.

Thank you, and thank you to Agile India for having me. I am here today to talk about helping your whole team own software quality. Let's start with why the whole team needs to own quality. Of course, everybody would like to release high-quality software, and release it quickly, and the old model of throwing things over the wall for QA to test has not proven to be a very effective way to make sure that releases happen quickly and are of high quality.

Before I share the Paylocity story, I'll talk a little bit about why it is so important to have developers get involved in testing and test automation, with a couple of quotes from books I have read.

The first is from the book Accelerate by Nicole Forsgren, Jez Humble, and Gene Kim. If you have not read this book, I highly recommend it. The authors surveyed a number of software development companies and asked them about their software practices: how they coded, how they tested, how they released. They also asked the companies to self-evaluate the quality of their releases: how frequently they were able to release, whether they had rollbacks, bugs, that kind of thing. Then they correlated the behaviors with the results and came up with a number of factors that could predict IT performance, or the success of releases. One of the practices that predicted IT performance was developers creating and maintaining acceptance tests. Here's the quote:

"Developers primarily create and maintain acceptance tests, and they can easily reproduce them and fix them on their workstations. It's interesting to note that having automated tests primarily created and maintained either by QA or an outsourced party is not correlated with IT performance. The theory behind this is that when developers are involved in creating and maintaining acceptance tests, there are two important effects. First, the code becomes more testable when developers write tests. Second, when developers are responsible for the automated tests, they care more about them and will invest more effort into maintaining and fixing them."

Let's take a look at another quote. This is from Clean Code by Robert C. Martin, talking about the importance of having high-quality test code:

"Having dirty tests is equivalent to, if not worse than, having no tests. The problem is the tests must change as the production code evolves. The dirtier the tests, the harder they are to change. The more tangled the test code, the more likely it is that you will spend more time cramming new tests into the suite than it takes to write the new production code. As you modify the production code, old tests start to fail, and the mess in the test code makes it hard to get those tests to pass again."

You can see here that test code is just as important as production code when you're trying to release your software quickly. These quotes illustrate why having the whole team own quality is so important.

So let's talk a little bit about the Paylocity story. Paylocity is where I work, and it was founded in 1997.
It's a cutting-edge human resources and payroll tech company. We were listed in Glassdoor's Best Places to Work in 2019, 2018, 2017, and 2014, and we are a fast-growing company: in the last six years, we have grown from 10 development teams to 50. As you can imagine, in recent years we've experienced growing pains, because 10 development teams can communicate with each other a lot more easily than 50 teams can. We found it was hard for teams to maintain lines of communication, and hard for them to remember which code was dependent on which code when different teams owned different sections of the application.

To address some of these issues, we decided to develop a quality initiative, about 18 months ago. Here is what we did. We began by selecting quality attributes. One of our directors and I came up with some quality attributes, and then we pressure-tested them with senior managers and directors. From there, we put together a small team involving a director, some managers, and some software testers, and we created what we called the quality maturity model: a list of behaviors we expected teams to follow to help ensure we were releasing software with quality. The quality maturity model has minimum, standard, and excellent behaviors, and I'll be talking more about those levels and some example behaviors in just a minute.

Once we had developed the quality maturity model, we introduced it to the teams: we said, we've got a quality initiative, and this is what we are going to do to advance quality in our product and tech organization. Then we created a group of advocates among the software testers who would meet monthly with each team; I'll talk a little more about that later.

Next, we asked each team to create a quality strategy. A quality strategy is like a contract that each team creates together that says: this is how we're going to do quality together as a team. The quality strategy can vary from team to team, and we'll go into that in more detail a little later. We also encouraged teams to create quarterly goals, so that they could try to meet some of those quality maturity model behaviors each quarter and advance in their maturity. And we encouraged teams to create stories for achieving those quarterly goals, because we discovered that if they wrote the goals down and then forgot to look at them for the whole quarter, the goals didn't get done. So we encouraged them to make sure those stories were on their Jira boards.

Then, every quarter, we had the teams complete a self-assessment to see where they were in the quality maturity model. Those results were shared with all of the teams at the end of each quarter, and teams who reached a milestone of quality maturity model behaviors were celebrated publicly.

So what kind of results did we get after the first year of adopting this strategy? 65% of the product teams achieved 100% minimum status, meaning they were executing all of our minimum quality behaviors, and 18% of product teams achieved 100% standard status.

And here are a couple of anecdotes. We had one team where the testers really involved the developers: they taught them how to test software, how to come up with test plans, and how to write automation. This particular group had two testers.
One of the testers had to miss a sprint because of a planned absence, and the other had an unexpected family event and had to miss the sprint as well. But in spite of missing both of their testers, the team was able to complete all of their deliverables, including all of the coding work, all of the testing, and all of the automation, because they were so well trained and able to work together. In another anecdote, we had a team that cut their time to deployment in half: instead of deploying once a month, they were now deploying every sprint, which really helped them iterate quickly. And on a lot of teams, developers and testers worked together to improve the speed and reliability of their test automation, which has been so helpful.

So what's next for Paylocity? We're continuing to meet with teams monthly. We're continuing to encourage them to set quarterly goals to adopt new quality maturity model behaviors and to add the related stories to their sprints. And we've created a quality dashboard that tracks the results of our new behaviors. The things we're tracking are: monthly active users, which we obviously want to go up; the percentage of releases that result in bugs or rollbacks, which we want to be very small, of course; percentage uptime, which we want as close to 100% as possible; average response times, which we want to be very quick; the number of security issues found, which we'd like to be zero; the number of customer issues found, which we'd like to be as low as possible; and finally, the frequency of deployments, because we'd like to deploy as frequently as possible.
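Here's a minimal sketch of how two of those dashboard numbers might be computed: the percentage of releases resulting in bugs or rollbacks, and the average time between deployments. The Deployment record shape is an assumption invented for illustration; this is not our actual dashboard code.

```typescript
// Hedged sketch of two quality-dashboard metrics; the Deployment record
// shape here is invented for illustration.
interface Deployment {
  deployedAt: Date;
  causedRollback: boolean;
  bugsReported: number; // bugs traced back to this release
}

// Percentage of releases that resulted in bugs or rollbacks (lower is better).
function failedReleasePercentage(deployments: Deployment[]): number {
  if (deployments.length === 0) return 0;
  const failed = deployments.filter(
    (d) => d.causedRollback || d.bugsReported > 0
  ).length;
  return (failed / deployments.length) * 100;
}

// Average days between deployments (lower means more frequent deployment).
function averageDaysBetweenDeployments(deployments: Deployment[]): number {
  if (deployments.length < 2) return NaN;
  const times = deployments
    .map((d) => d.deployedAt.getTime())
    .sort((a, b) => a - b);
  const spanMs = times[times.length - 1] - times[0];
  return spanMs / (deployments.length - 1) / (1000 * 60 * 60 * 24);
}
```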
So now that we've seen what Paylocity did, I'll dig in a little more deeply into each section of our quality initiative. First, the attributes of quality. We discussed what we thought made a quality application at Paylocity, and we came up with these seven attributes. Valuable: it meets the customer's needs. Functional: it does what we say it does. Secure: it protects customer and company information. Reliable: it's available when needed. Performant: it responds within an acceptable timeframe. Usable: it's easy and intuitive to use. And finally, maintainable: it is easy to test, deploy, automate, monitor, update, and scale.

We got the idea for these attributes from a great blog post that Abstracta put out, called the Software Testing Wheel, which talks about these different areas of quality. There were a couple of other attributes on that wheel that we didn't use but that you might want to consider for your own team. Compatible: it works with the chosen operating systems and browsers. And portable: it can be moved from one environment, such as an OS, to another.

Now that we had determined what we wanted our quality applications to look like, we came up with the quality maturity model. First of all, we decided we were going to have levels of behavior. Minimum behaviors are the ones we expect all teams to exhibit; this is the baseline. Standard behaviors are the kinds of behaviors a mature team should be exhibiting. And excellent behaviors are items we expect teams to do to go above and beyond the regular expectations. We were never expecting all teams to demonstrate all of the excellent behaviors; they're just suggestions for how teams can take things to the next level.

So let's take a look at some of the example behaviors in our quality maturity model. We have anywhere from three to twelve behavior items for each of these facets of quality.

For valuable, an example minimum behavior is: the team regularly reviews customer issues and creates stories to resolve them with appropriate priority. For standard: the team makes documented progress each quarter resolving high-priority customer issues. As you might expect, our customer issues are ranked by priority: critical, high, medium, low. So for standard, we expect that each quarter the high-priority customer issues are definitely going to get resolved; of course, the critical ones would already have been resolved, because they were critical. And for excellent, we expect teams to make documented progress each quarter resolving the medium-priority customer issues.

Here are some example behaviors for functional. For minimum: the team has an automated test suite that is triggered manually after each deployment. For standard: that automated test suite is triggered automatically after each deployment. And for excellent: automated test results trigger a post in the team's communication channels. So, for example, if the automated tests are triggered and there's a failure, a post appears in, say, a Slack channel that says: hey, you've had a test failure here.
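To make that excellent behavior concrete, here's a minimal sketch of a post-deployment pipeline step that posts test failures to Slack, assuming a Slack incoming webhook has been set up. The TestSummary shape and the SLACK_WEBHOOK_URL environment variable are hypothetical names for illustration, not a description of our actual pipeline.

```typescript
// Minimal sketch: post automated-test failures to a Slack channel.
// Assumes a Slack incoming webhook; SLACK_WEBHOOK_URL is a hypothetical
// name that would be configured in your CI system's secrets.

interface TestSummary {
  suite: string;
  passed: number;
  failed: number;
  reportUrl: string; // link to the full test report in CI
}

async function postFailureToSlack(summary: TestSummary): Promise<void> {
  if (summary.failed === 0) return; // only notify on failures

  const webhookUrl = process.env.SLACK_WEBHOOK_URL;
  if (!webhookUrl) throw new Error("SLACK_WEBHOOK_URL is not configured");

  // Slack incoming webhooks accept a simple JSON payload with a "text" field.
  const response = await fetch(webhookUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      text:
        `:x: ${summary.suite}: ${summary.failed} of ` +
        `${summary.passed + summary.failed} tests failed. ${summary.reportUrl}`,
    }),
  });

  if (!response.ok) {
    throw new Error(`Slack webhook returned ${response.status}`);
  }
}

// Example usage in a post-deployment pipeline step:
// await postFailureToSlack({ suite: "Smoke tests", passed: 48, failed: 2,
//   reportUrl: "https://ci.example.com/report/123" });
```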
Here are some example behaviors for reliable. For minimum: the team uses monitoring systems to monitor the health of their application. For standard: the team integrates monitoring systems into health checks and alerts. And for excellent: the team has created a dashboard that actively monitors the health of their system, which they can check at any time.

Some example behaviors for performance. For minimum: the team executes load testing on their product. For standard: the team creates breakpoint tests to determine the load limits of their product; in other words, they figure out under exactly what load their product starts to fail, so they can set good expectations for their users and for the product. And for excellent: the team achieves the lower-bound service level objective for their product. So if a team had an SLO saying they wanted to respond to requests in between 200 and 500 milliseconds, the lower bound would be 200 milliseconds, and that is what they would aim to achieve for excellent.

Here are some example behaviors for secure. As with our customer issues, we rank our security vulnerabilities as critical, high, medium, or low. For minimum: the team has zero critical security vulnerabilities. For standard: the team has zero critical or high security vulnerabilities. And for excellent: the team has no security vulnerabilities at all.

Here are some example behaviors for usable. For minimum: the team is aware of the usage of their product in terms of browsers, devices, platforms, and operating systems. For standard: the team tests on all the browsers and devices supported by the product. And for excellent: the team runs automated tests on all of those supported browsers and devices.

And finally, for maintainable. For minimum: the team has a documented build process for their product that is understood and executable by all software engineers on the team. For standard: the build process is executable by all members of the team, including the manager, the product owner, and so on. And for excellent: the build process is executable by engineers on any team; so, for example, team A would be able to deploy team B's software.

Now that we've looked at some example quality maturity model behaviors, let's take a look at the quality strategy. As I mentioned, the quality strategy is like a team contract that the team develops together that says: this is how we as a team are going to deploy software, this is how we're going to test software, this is how we're going to define done. To create a quality strategy, here are some example questions a team might want to ask themselves. Who grooms the stories to get them ready for development? What does done look like for each story? How will a story be handed off for testing? Who will create the test plans? What tools will be used for manual and automated testing? Who is responsible for maintaining the automated tests? How are bugs handled when found in testing? What kind of testing will you do before a release? And how will you monitor the health of your application?

We had all the teams create their own quality strategy, and as you can imagine, there was a lot of variation from team to team, because we have some teams that might have three software testers and three developers, and some teams with six developers and only one software tester. So the ways they answer these questions are going to vary from team to team. I'm going to share with you snippets of the quality strategies of three different teams.

Here's sample team strategy number one. Keep in mind, this is not their entire strategy, because that would be hard to fit on a slide; this is just one section, their definition of done. They have four things here. First, the story is fully defined: acceptance criteria are defined and clear, testing requirements are clear, and estimation of effort has been done. Second, testing: unit, integration, and UI tests; regression testing of dependent areas; acceptance criteria met; and performance and security testing. Third, the fix version is updated before the story is moved into testing. And fourth, the user story needs to be signed off, either by the product owner, who reviews the acceptance criteria for completeness, or by the software engineer or tester for stories that have little UI.

Let's take a look at another one. For this team, one section of their strategy is their testing approach. The product analyst and software test engineers meet after sprint planning to write test cases. Test cases are written while development is in process; this particular team uses TestRail to write their test cases. When a story is ready for testing, the developer deploys the story and conducts initial testing. The software testers execute the test cases they wrote and log bugs for any failed tests. The product analyst signs off on the acceptance testing. Bugs are retested and closed before the story is done. And finally, when the story is completed, tests for it are added to the automation suite.

And here's a third and final sample team strategy.
This is what they called pre-grooming. Stories are pre-groomed with four roles present: the manager, the product owner, the dev lead, and the senior software test engineer. The purpose of pre-grooming is to take an existing story and define two key elements. The first is the functional AC: a complete and unambiguous description of the functionality that the story will deliver. Ideally, much of this has already been defined by the product owner, so the focus of pre-grooming is on the technical details: how are the developers actually going to deliver what the product owner is asking for? The second is the quality AC: the quality and testing work to be performed to assure that the functional AC are met and that the story delivers a high-quality result.

Okay, now that we've taken a look at some quality strategies, let's dive into how developers can get involved. Here are some great ways that developers can help with owning quality.

First of all, they can test their work before handing it off. Most good developers are already doing this, but there are some who still throw things over the wall. It's always best if the developers can do some initial testing and find as many bugs as they can before handing off to the testers.

They can create test harnesses. I'll give you an example of this. I was on a team that was creating a notification service: if an event happened in our application, it would trigger a notification that would send an email to the interested parties. The developer who created this notification service knew it was going to be very time-consuming to test if the testers had to go into the application every single time and do all of the steps necessary to trigger the notification event. So what he did was create a simple API, internal to the team, where the testers could trigger the email as if the event had really happened, and that made testing so much easier. That was extremely helpful for everyone. After this list, I'll show a quick sketch of what a harness like this might look like.

Developers can also add automation IDs to web elements. This is so helpful for people writing UI automation, because it lets them locate those elements reliably.

They can create test automation frameworks. Developers are really good at knowing how to structure code, so we can use their knowledge to create test frameworks that make sense and are well organized; then it becomes easy for everyone to contribute tests.

They can also contribute to existing test automation. A great example of this: if a developer is making a modification to an existing feature, and the modification breaks an automated test case, they can fix that test automation while they are coding the modification. That's extremely helpful.

They can participate in test automation code reviews. Oftentimes there are teams where one or two testers work on the test automation all by themselves and nobody else ever looks at the code, and this is not best for clean code practices. Developers usually review each other's code, and it's important to have the test code be part of that review process. Then you can make sure you've got clean production code and clean test code.

Finally, developers can participate in release testing, doing some testing before a feature is released. When developers do this, they get to know the feature much better, which makes them better coders.
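Here's that sketch of what a test harness like the notification one might look like: a team-internal endpoint that triggers the notification email directly. The route, names, and payload are all hypothetical, invented for illustration, and an endpoint like this would be exposed only in test environments, never in production.

```typescript
// Hypothetical team-internal test harness endpoint (not the actual
// notification service): lets testers trigger a notification email
// directly, without clicking through the application to produce the event.
import express from "express";

const app = express();
app.use(express.json());

// Stand-in for the real notification sender the service already has.
async function sendNotificationEmail(
  recipient: string,
  eventType: string
): Promise<void> {
  console.log(`Sending '${eventType}' notification to ${recipient}`);
}

// POST /internal/test/notifications { "recipient": ..., "eventType": ... }
app.post("/internal/test/notifications", async (req, res) => {
  const { recipient, eventType } = req.body;
  if (!recipient || !eventType) {
    res.status(400).json({ error: "recipient and eventType are required" });
    return;
  }
  await sendNotificationEmail(recipient, eventType);
  res.status(202).json({ status: "notification triggered" });
});

app.listen(3000, () => console.log("Test harness listening on port 3000"));
```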
Let's take a look at a couple of success stories we had with developers getting involved at Paylocity. The test automation tool we are using right now for our UI testing and some of our API testing is Cypress, and it was selected by developers: they preferred Cypress over our previous framework, SpecFlow, because it was easier for them to use. Since they enjoyed working with it, they helped create the test framework, and they started adding their own tests rather than expecting the testers to write them all; I'll show a small example of that kind of test in a moment. And on another team, something extremely helpful happened: developers and testers made it their standard practice to do exploratory testing sessions together before every single release. Working together, the developers became better testers. They knew what to look for, and they found bugs quickly and fixed them long before they reached production.
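Here's that small example of the kind of Cypress test a developer might add, which also illustrates the automation IDs mentioned earlier: elements are located by dedicated data-testid attributes rather than brittle CSS selectors. The page, URL, and IDs are invented for illustration; this is a sketch, not one of our actual tests.

```typescript
// Hypothetical Cypress test: the URL and data-testid values are invented.
// Elements are located by automation IDs (data-testid attributes) that
// developers add to the markup, e.g.
//   <button data-testid="submit-payroll">Submit</button>
describe("Payroll submission", () => {
  beforeEach(() => {
    cy.visit("/payroll/new"); // relative to baseUrl in the Cypress config
  });

  it("shows a confirmation after submitting a payroll run", () => {
    cy.get('[data-testid="employee-count"]').should("not.be.empty");
    cy.get('[data-testid="submit-payroll"]').click();
    cy.get('[data-testid="confirmation-banner"]')
      .should("be.visible")
      .and("contain.text", "Payroll submitted");
  });
});
```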
So now let's take a look at some common objections you might run into if you decide to start a quality initiative like this one. One of the most common objections you will hear is: "It's not my job." Developers are often under a lot of pressure to deliver code very quickly, and many of them will say it's not their job to do testing; that's the tester's job. What we want to do is move to a "we work together" philosophy, and here are some ideas you can use in conversations with them to help get there.

First of all, bugs can be fixed much faster when found in development than in production. If a developer is working on a certain feature and bugs are found while they are working on it, they are much easier to fix than if the feature is declared done, goes to production, the developer starts to work on something else, and then a couple of weeks later a bug is found. Now that developer has to go back into the code they were working on two weeks ago, try to remember what they did, and then try to fix the bug. So you can see that when you're fixing bugs sooner rather than later, you're going to be able to move on to new things more quickly.

Higher-quality releases mean less time troubleshooting customer issues. It's a lot easier to fix a bug when you're right there in the code than a month from now, when the customer is having a strange issue and you've got to try to tease out what happened.

When both developers and testers fully understand how a feature works and how it fits into the larger application, story ping-pong happens less often. If you don't know what story ping-pong is, I'll give you an example. A developer finishes a story and tosses it over to the tester for testing. The tester says, "I don't think this feature is working right," and kicks it back to the developer. The developer says, "Oh no, you don't understand how this is supposed to work; it's supposed to be like this," and kicks it back to the tester. The tester starts testing again, finds something the developer forgot to do, and kicks it back to the developer again, and so on and so forth. We want to avoid story ping-pong as much as possible, and when developers and testers work together, in the planning and in the development and in the testing, it happens less often.

Creating test harnesses for testers means that testers can validate new development work more quickly. The faster the testers can finish their testing, the faster everybody can move on to something new, and developers like to be able to move on to new work.

And finally, sharing tasks across the team means that no one person becomes a bottleneck. Probably everyone who's ever worked on a team has seen a situation where one person is the bottleneck because they're the only one who knows how to do something, and everybody else has to wait for them to finish. When you're sharing tasks, this happens less, and everybody moves forward together quickly.

Here are a few other common objections. "We don't have time to do this." But really, do you have time not to do this? The quality maturity model activities will result in faster feature creation, more frequent deployments, fewer rollbacks, and fewer customer complaints to investigate. Investing a little bit of time up front will mean you're able to move so much more quickly later.

"This behavior doesn't apply to us." And this may be true. At Paylocity, we have some teams that are very, very UI-heavy, and we have some teams that are completely back end, with no UI at all. For those teams, a behavior like "we're making sure that our product is accessible to all users" might not apply, because they're only working in the back end. And that's fine: if some behaviors don't apply to a team, you just tell them to ignore those and keep working on the other behaviors.

"I don't think this behavior should be in the QMM," or, "You forgot to put this important behavior in the QMM." We make revisions to our quality maturity model every six months. That way, if there's a behavior we put in that turns out not to be relevant enough to all of the teams, we can take it out; and if we forgot some very important thing that should be in the QMM, we can add it. The document evolves with us.

And finally, you might hear from testers: "If the developers test, then I won't have anything to do." This is not true, because there will always be more testing to do. For example, if developers are getting involved with testing and they are finding good functional bugs, that frees up the testers to do things like performance testing, security testing, or accessibility testing, improving the quality of the product. And testers are best suited for thinking of all of the things that should be tested and for prioritizing them: which things should be tested first, what the release testing should look like, and so on. So testers are not going to be worked out of a job.

Okay, let's take a look at measuring success. Here are some success metrics that you might want to use: the number of rollbacks required after a deployment; the number of customer support tickets logged; the number of daily, weekly, and monthly active users; percentage of uptime; average response time; security issues found; and the time between each deployment to production.

And here are some metrics not to use. The number of bugs found: this is not an indication of software quality, because saying "we found five bugs, when last release we only found one" doesn't really mean anything; the bugs could have been logged in a different way. For example, if testers were being rewarded for the number of bugs they found before release, a tester could game the system by logging "this button isn't rendering properly on Firefox," and "this button isn't rendering properly on Chrome,"
and "this button isn't rendering properly on Safari." They could log three bugs when really there was only one. The number of automated tests is also not a good metric to use. You could have 10,000 tests that are all testing the wrong thing. Or they could all be flaky, with frequent false failures, so that nobody uses or trusts the tests at all. Or you could have tests that take so long to run that they don't provide fast feedback to the developers, which makes them useless. So think about the quality of your tests, and think about whether your tests are actually testing the right things. And of course, lines of test code and lines of production code are meaningless. You want your code to be as clean as possible, and usually that means fewer lines of code; that's true for test code and for production code. You can't say, "This developer or this tester wrote 30 lines of code today, therefore we're doing a great job with quality." The two do not correlate at all.

Finally, let's take a look at some communication strategies. First of all, communicate at least three times as much as you think is required. You will be surprised at how many times you need to communicate about what's happening with your quality initiative. You'll think you're repeating yourself over and over again, and that people must be sick of hearing about it, and then someone will come up to you and say, "Wait, what's this quality initiative thing? I haven't heard about this." So make sure you're communicating a lot.

Make sure you're communicating through multiple channels, such as blog posts, meetings, chat, and email. There are some people who pay no attention in meetings, and there are some people who never read their email, so you want to make sure to reach them through as many channels as possible.

Make sure to communicate up. Let your managers and directors know how things are going with this initiative, and ask for support if needed. A lot of times in big companies, new initiatives get started that the managers and directors are very excited about at the beginning, and then after a couple of months they start looking at another initiative and forget all about yours. So make sure you're constantly communicating with them about what's happening, so you keep that momentum going.

Share successes with everyone in group-wide announcements. This does two things. One, for teams that are not really participating, it provides an incentive to participate so that they can get that attention too. And two, the teams that are having those successes feel happy that they've been praised publicly, and they want to keep up that momentum and keep on achieving.

Check in regularly with your teams to make sure they are continuing to work toward their quality maturity model goals. If your company is large, gather a group of people to help. We have a group of software test engineers called the category quality leads, and each of them meets monthly with between three and six teams, so they can check in and ask: How are you doing with the quality maturity model? Do you have any questions? How can we help?

Listen to feedback. People will tell you whether or not they like this initiative, and you'll get some who do and some who don't. But even the people who don't like the initiative might have feedback that is useful, that can help you adapt and make it more successful for everyone.
And finally, let people know when changes are coming. For example, if you make a change to the quality maturity model, let them know maybe a month ahead of time, so they know some new behaviors are coming or some behaviors are being removed, and they don't feel like the bar they're trying to reach is constantly moving.

And that is all I have for you today. Are there any questions?

Oh, here's a question: does this initiative have to be applied company-wide, or can you start with just one team? You could absolutely start with one team, and that one team could be a great example for other teams. You could start by creating that quality strategy and saying, this is how we are going to do quality on our team. And then, if you wanted to, you could even make a list of behaviors like the quality maturity model behaviors. I think that's a great idea.

Next question: other than the objections presented in the deck, are there any more practical points of resistance you would share? Okay, so I think you're talking about the objections here. I think there's always a tension in companies: there are some people who would like more top-down commands, like "you must do this," and then there are teams that say, "we don't want anybody to tell us what to do, ever." So we've sometimes seen that kind of tension. I also asked for feedback on my own performance, because I'm the one who's been driving this initiative forward at my company, and it was funny: there were some people who said the quality maturity items should be more specific and more required, and there were some people who said they should be less specific and easier for teams to do. So I think with every big group of people, there are lots of differences of opinion.

All right, our time is about up, but there is one more question: any strategies to mitigate that resistance? Yeah, that's the hard part. I think the most important thing is to make people feel like they're being listened to: say, "I hear you, I understand what you're saying," and then try to present the reason why maybe you can't do things the way they want. Here's an example. The mobile team really wanted more mobile items put into the quality maturity model, and I had to tell them we couldn't put in too many mobile items, because that would make the quality maturity model too long and people would get frustrated. So just try to listen, and then give good reasons when you can't do something.

Thanks, everyone, for joining in. Thanks, everybody.