It's really exciting to be here today. I've done similar talks at colleges across the U.S., and I've spoken in India, but this is my first talk in Armenia, and we've got quite a turnout here, so it's pretty exciting. This event was announced on pretty short notice, so thanks, everybody, for coming.

I'm here to talk about enterprise QA today, and it may be different from other conversations about QA you've had. I'm going to lead in with a little about our company, because I think it's a really good example of what we mean when we talk about enterprise QA. There are so many apps in the world today, so much software in the marketplace, and when we talk about enterprise QA, we're not talking about that app we developed so teenagers can chat with each other over lunchtime. AtTask is a company that drives the workflow of enterprise organizations. Our client base includes a substantial number of Fortune 500 companies, and what we do is drive their workflow: we take the things they do to run their business, and we consolidate them and make them more efficient. So if we're unable to build quality into our application, we're unable to drive that, right? We'd have a number of Fortune 500 companies that can't do their business. When you talk about enterprise QA, you're talking about wildly successful companies using your software to get their jobs done. So when you look at the importance of QA in that light, I think what they expect from you is pretty evident: world-class quality.

As we go forward, let's talk about what QA is. I'm going to go over some of the things I've heard over the course of my career, some of them from my own team, which I'm not exactly proud of.
Over time, I tend to ask this question of my teams, to see where we've gotten as a team and what we think about QA. Maybe some of you who are in the industry can comment on this. How about finding all the bugs? Is that what QA is about? What do you think? Wouldn't it be nice if we could find all the bugs, especially before we roll to market? I don't think we do a lot of that. How about something we do right before we release? I think if you go back ten years, the methodologies of the time really did treat testing as something we wanted to do right before we released, and we didn't quite get to it. How about the sole responsibility of the testing team? Is that what quality is? We develop, we roll code over to QA, and they'll take care of all the bugs? That's not really what we do either. A necessary evil of software development? This one mainly comes from the business side, almost an executive standpoint: we have to test, so therefore we do it. How about that thing you do when you can't quite hack it in development? Do the really smart people go into development, and the people who don't quite make it end up in QA? I certainly hope not, because you're all here to talk to me today, and it's going to be a pretty boring conversation if that's the case. How about a dead-end career for a technologist? Do we need smart technologists in QA? Is that important?

Obviously I've set this up: all of this is wrong. What I'm going to do over the course of this presentation is show why it's wrong and how the industry has changed to require all of these things in QA.

Let's talk about going fast. I hear from a lot of my bosses that that's the primary focus of what they want to do. So what does it take to go fast? We want to deliver fast. We want to have great code.
We want to push it out to market as quickly as possible. But if we push something too fast and don't do the right job on it, it creates problems. It doesn't matter what kind of product you deliver: if you go too fast on this road, you might end up here. Pretend you're the manager of this project, rolling it out to a Fortune 500 company. Is this acceptable risk? Is this what you want to do? You still want to go fast, though, right? So what do you think is better? Do you like that, or how about this? Does that look a little better? Does anybody recognize this road? This is the Autobahn. There are no speed limits on the Autobahn. Why is that? Look at how it's designed. Look at the architecture. Look at those guardrails. Guardrails let us go fast.

So when you think about QA, if you take nothing else from this presentation today, I want you to think about one thing, and it's probably not something you've heard before about QA: real enterprise QA is about innovation. It's about letting your product team and your development team really move fast. That's a hard thing to do. But if you set up an architecture that lets you go fast safely, an architecture that reduces that risk, then you're able to go fast and deliver to the kind of companies we want to serve.

This next slide is pretty common in the U.S. Has anybody seen it before? My team doesn't count. I want to talk a little bit about the evolution of QA: where we came from about ten years ago and where we are now. This is a very overused slide, especially in the U.S. I think most people who have had some experience there have probably seen it, and if they do any research on QA or on Agile online, they've seen it.
This is a big example of what it means to build enterprise software and the problems we were dealing with about ten years ago, some of which we've solved by now. I know it's really hard to read, and I apologize for that, but basically it shows the different teams and how each of them perceived the product, which in this cartoon is a swing on a tree. The first panel on the far left is how the customer described their problem and what they wanted solved. Does that look like a really good solution to a customer problem? I don't think the customer really knows what they want. The next panel is how the project lead understood it. Do you see any problem with this swing here? It's going to be a little uncomfortable for the person using it. As you go forward, you can see the transition. This is how product designed it: they said, well, the customer wants the swing on the tree, that's what the lead told us, but we don't want the swing hitting the tree, so let's just cut a hole through the trunk. I don't think that's really how trees work, so I don't think that's going to work. Then you see how the coder actually coded it: it doesn't work. Then how the business analyst described it, how we tried to sell it, how we billed it. The core concept is that at each step we were consolidated into individual silos. As the conversations went along, everyone had a different idea, and the customer didn't know what they wanted; the last panel shows what they really wanted.

A lot of the evolution of QA has been about dealing with exactly that.
These are the kinds of problems you deal with. When you go back to the idea that QA is responsible for finding all the bugs, some of the job is really about being the liaison between the teams to make sure these kinds of misunderstandings don't happen, the glue between product and development that keeps everybody on the same page.

Ten years ago, I think most companies were operating in waterfall. The biggest problem is that you work in individual silos: we do all these things separately, we don't collaborate. Projects took a very, very long time; projects scheduled for three months could take up to two years. And when you got to the end, the schedule had run over and you were out of budget. What really suffered was test. You're probably not going to be able to see this slide well, but the idea is that the theory of waterfall was to do all those siloed phases in order. In practice, you'll see the X over this part right here, and that's what happened to test very often: you get to the end of the project, you're out of budget and out of time, and what suffers is QA. It's a really, really bad model, because not only do we run out of time to test, but the project scope changes over that long period. Customer expectations of what you were going to deliver drift, either from what they wanted or from what we could actually do. There was a lack of collaboration between the teams, so if what got designed wasn't feasible by the time it got to development, we didn't have enough time or money to change it. We drove forward and hoped for the best. And then there's time for customer feedback, which is probably the biggest one: look back at that tire swing, and the last panel is what they really wanted.
If you roll something out to customers who can give you feedback before you develop the entire product and commit your whole team to it, you get that feedback quicker and you fail faster. That led us to Agile. I don't want to get into a long dissertation on Agile, just to say that we were failing faster, we were moving forward, the cycle time was better, and QA was involved in the collaboration. This is really what we got out of working with Agile. We had rapid delivery of code, a way to move much faster with smaller pieces. We had engineering collaboration across product, QA, and development, so when Dev had a different idea than product and QA about what we were supposed to build, we discovered that before we got to code. Fail faster and learn: we learn more from the market. We roll things out to customers and have them say, you know what, that's what I told you I wanted, but now that I see it, it's not what I wanted at all, and we're able to go back and meet their needs. And then plan, react, and compete: if you have four companies and they all have the same idea, the team that can execute best and fastest is going to win. Agile let us figure out the minimal product we could roll out to compete in the market and win that client base.

When you talk about the evolution from waterfall to Agile, the challenge for QA is that saying something is Agile or waterfall doesn't define the QA methodology. Just because you're Agile doesn't tell you what you need to do for quality assurance. Go back to that original example: if I'm developing software that helps high schoolers chat over lunchtime, I can handle a couple of bugs. It's going to be okay.
Those users are going to be a little resilient to that. If I'm working with a Fortune 500 company and they can't get their work done because my product doesn't work, the comprehensive testing you need is totally different. One of the challenges in QA is that as these methodologies have come out, they're really software development methodologies; they weren't geared toward QA. So in the quality industry there's been a lot of conversation about what that means: you're Agile, but what else ties in? The simplest answer is that it really depends on your client base and your objectives. And under that model there's also a QA methodology.

Let me talk about this generically for a bit, not from the perspective of waterfall or Agile specifically, just QA best practice. Whether your organization does waterfall or Agile, the question is: where do you test? If you look at the generic phases in any methodology, where should we test? What's the best place? It's a bit of a trick question. The main answer is everywhere. We want to be all across the board, but the real key is as early as possible. In the waterfall model, one of the reasons it failed for QA was that QA just waited. I was on teams 10-15 years ago where there was this idle stage for QA where you sat around waiting for code. That's not the life we live now; that's not the enterprise QA model organizations work with. What strong enterprise QA organizations do is work right from the start of the product. You're involved very heavily in requirements. You're testing those requirements (I'll get into some metrics after this to show why) very early in the process, because really that's where defects start. That's where misunderstandings happen.

That's where what one person in product understands differs from what another person builds, and if you don't identify that early, it becomes a misunderstanding that amounts to a lot of rework. So we want to test requirements, we want to test code, we want to test the product as it's released, and we want to test delivery. We want to be part of the whole process: as opposed to the waterfall model, there isn't one area where we sit; we're collaborating with all teams at all times to make sure the product goes out with quality.

We're a business-based test organization. We test early. We work very closely with product groups and principal engineers to make sure the stories and requirements we roll out contain the logic and that everyone gets on the same page. One advantage QA has here is that QA doesn't really build anything; we're not inventing the product. The product team has to build requirements and find out what the client wants. The engineers in development have to write code and make sure the product is actually feasible and meets the needs of product and the client. But QA synthesizes this information: we get both sides of it. We're technical on one side, where we drive those requirements into tests, but we're also working with product to make sure we build what we need. A lot of times in a requirements-based test model, we've defined something, we go to write a technical test case on it, and a pretty big gap shows up, and we have a conversation about that gap. In a situation where we weren't looking at those requirements, development would have filled that gap on their own, because they can always create something. I always joke that if you give development a requirement on a napkin, they're going to give you something.
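Part of "testing requirements" this early can even be lightly tooled. Here's a toy sketch of an automated ambiguity pass over requirement text; the weak-word list, requirement IDs, and sample requirements are all made up for illustration, and a real ambiguity review is done by people, with a script like this only surfacing candidates for discussion:

```python
# Illustrative ambiguity review: flag vague terms in requirement text
# so they can be turned into measurable acceptance criteria before
# any code is written. The word list and samples are hypothetical.
import re

# Words that often hide an untestable or ambiguous requirement.
WEAK_WORDS = {
    "fast", "quickly", "user-friendly", "robust", "flexible",
    "should", "etc", "appropriate", "as needed", "intuitive",
}

def review_requirement(req_id: str, text: str) -> list[str]:
    """Return a list of ambiguity findings for one requirement."""
    findings = []
    lowered = text.lower()
    for word in sorted(WEAK_WORDS):
        if re.search(r"\b" + re.escape(word) + r"\b", lowered):
            findings.append(f"{req_id}: vague term '{word}' -- "
                            "what is the measurable acceptance criterion?")
    return findings

if __name__ == "__main__":
    requirements = {
        "REQ-1": "The report screen should load quickly for large projects.",
        "REQ-2": "The report screen loads in under 2 seconds for 10,000 tasks.",
    }
    for rid, text in requirements.items():
        for finding in review_requirement(rid, text):
            print(finding)
```

REQ-1 gets flagged twice ("should", "quickly") while the rewritten REQ-2 passes clean, which is exactly the kind of gap conversation described above, happening before code starts.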
It doesn't matter what you give them for requirements; they'll come back with something. The question is: is it what product really imagined? Is it what we really wanted? In a requirements-based test model, on the QA side we drive that logic. We make sure everybody's on the same page. If there's a gap in the requirements, we fill that gap, and we do it before we start code. The advantage is that we identify those gaps (they're called ambiguities; you could also call them defects in requirements) early, and we make sure we're aligned with our teams. It gives us planning and estimation transparency: if we bring up problem areas early, we're not finding them out mid-code. It also drives efficiency in our automation: as we move forward to automate, that logic is very clear, so the people doing automation can focus on the technical aspects of the code, because the logic of the app is already ready for them. And it drives clear business decisions: if you come up with a major ambiguity or gap early on and it affects the feasibility of what you're doing, you can give it up and move on to something else. If you find that out when you're already midway through coding, that's a problem; you've already wasted your cycle time.

Here are some basic metrics. I don't know if anybody's seen this; it's a pretty old slide. It drove a lot of what went into Agile, and it's certainly driven a lot of requirements-based testing. It's about where defects originate and the cost of fixing them. It's really interesting: if I had asked the question before this slide came up, I think the common answer would be that defects happen in code. The truth is that defects happen in requirements. It's not so much that the code doesn't work; if you go to most organizations, the development teams are pretty technical, they know what they're doing, but they're developing to the wrong things. They're filling in gaps that exist in requirements. Over 50% of the defects we have originate in requirements, so it seems like that's a pretty valuable place to test.

Then there's the advantage of finding those defects early. I don't know how well everybody can see this, but this slide walks through the generic phases. If we find the defect at requirements, it means maybe adding something to a story, maybe one more meeting or conversation. If we find it in system integration test, or in production, that's a lot more work: it's heavier on schedules, it's rework for the teams, and the fix may not even be feasible. And if it gets to a client, at best it damages the perception that your application has quality, or it may totally break what they bought your product for, and you may lose that sale.

That's basically the evolution of QA and where it's come from. Another very large conversation around QA is manual versus automation. I've been in conversations regarding QA where people have said manual is a thing of the past: there's no more manual QA, we just need automation. So I'll talk about that a little. This is the industry perception around manual testing, and I think it's how a lot of organizations actually use manual testing. I don't know how well this is coming through over here, but this is one of my favorite slides: it's basically a big circle that says "bang head here." What most organizations are really saying is, we're going to get a lot of people, we're going to throw them at our application, they're going to shake it really hard, and all the defects are going to fall out. It's funny to listen to, but it's honestly how many organizations do their QA, or think about it. So when people in this industry think about Agile and say, well, we have to automate, we don't need any manual testers, they're really thinking about manual
testing the wrong way. They think manual testing is about shaking the defects out. They think about it being time-intensive: hey, if we can automate this, it will be much faster, so we shouldn't have manual testers. They think about the human error: these are people testing, and if machines test instead, they'll catch the bugs every time. And they think of it as a costly investment: we're spending money on individuals when we could build this into some kind of technology. In some ways they're correct, if you're treating testing that way. But brute-force testing is not the value of manual QA, and I'll get to the proper way to use manual testing in a minute.

First, automation pitfalls. These are the pitfalls that these same people run into when they drive toward automation, and it's a learning process for a lot of organizations. Basically, it's giving the automation people two jobs, putting them on both sides of the fence. Take someone who has spent their entire life coding and say, well, you're in QA, so you should be able to take requirements and build comprehensive test logic. That's a completely different discipline, but by removing the manual component, that's what you get. You end up with poor business analysis, with test logic gaps that show up in the application because nobody filled them. There's the feasibility question: not everything should be automated, because it's not efficient to automate everything. There are areas where we don't want to automate: things that aren't repeatable, things that aren't maintainable, areas where we'd churn more time doing automation than it's worth. Then there are problems with regression management and managing your logic: a lot of times teams are so caught up in the code as they drive it forward that they don't manage the logic, so you end up with poor logic that just falls apart over time, and you lack traceability. And we lose skill concentration: if you're in automation, you're a programmer; it's a code discipline. Programmers increase their skills by coding all the time, and if you keep pulling someone out of the code to do something else, for most people in that discipline that's not the career they want to pursue, and it becomes a problem.

So let's look at strong manual testing and a strong staffing model. This is generally how I've run teams and how I feel is the best way to apply these two disciplines. Strong manual testers are the logic owners. You're looking for someone who does business synthesis: who can work with the product team, work through requirements, understand the logic side and what the client wants, and then take the technical side and put it all together into testable requirements. Product generally runs things from the client point of view; they don't dip into the application. Development gives you a feasibility study: can we code it? But that business synthesis takes a sharp mind: people who figure out the right level of tests, what we want to test, what actually targets what the client wants, and which things on the technical side we need to make sure work for the client. They do ambiguity reviews, the gap analysis that finds those defects in requirements before they become code. They do test development, setting the efficient logic that says these are the tests we're going to write. And they do exploratory testing. Machines are just not going to find everything in your application; we cannot possibly write enough test cases to catch everything across the app, so there are things you're only going to find by putting people on the app and looking. Strong automation testing
ties into that. In automation testing we're really looking for developers in test. I'm not looking for someone who works the logic side of QA; those skills come over time, but the role is mainly built around the technical side. We want people who know how to code, people who can efficiently write these tests and write them at the right levels. We want to work with clearly defined logic: if I can take testable requirements from a manual team and hand them to a true programmer who has that discipline, we can rapidly get those tests into place, and that builds our feedback loop for QA. And we want to do things that are repeatable and maintainable. We don't want to focus on one-off migration tests; we don't want to work on things we can't maintain, that are going to be ongoing headaches. We want to be selective about what we automate.

All of this ties into a real continuous integration model. I don't know how many people here are familiar with it; if you were at the last AtTask discussion, a colleague of mine, Jesse Dattle, did an entire presentation on this, so I'm not going to go into deep detail. What I want to say is that as we work in an Agile model, as we do the right things in automation and drive our feedback loop to give developers the information they need as fast as possible, what this has really led to is an entire architecture around QA: every single time a developer checks in code, the automation tests run and give them immediate feedback.

I want to tie this back to what I said at the beginning about QA being about innovation. If you don't have a model like this in QA, you can't really innovate. We build this architecture, and it allows both product and dev to move forward and innovate on what they want, to take risks inside the application. We have massive regression structures and integration with other products; we cannot possibly know, when we write code, how it's going to affect everything in the product. There's no way we can know that at a planning level. But with this structure, every single time we take a chance, the feedback is immediate. If we take a risk and do something in the code that breaks half our application, we find out immediately when the code is pushed, not later at the end of the project when it's too late to change.

Also, about the kind of people we need: remember that original slide about dead-end careers for technologists and people who can't hack it in development. This is a pretty simple slide, but this architecture involves database integration, server maintenance, and integration of many, many different kinds of tools. The tests that run require really tight development to make sure we're fast and getting that feedback, so there's a lot of opportunity for a lot of intelligent people to add to the process.

To recap: I've talked about the evolution of QA and the technological aspects of what we're trying to do, and the model we use at AtTask is an example of how we drive code. You'll see this as an individual sprint team. If you take a phased approach, on the far left we're working on requirements; you see the collaboration between QA, PM, and development. Before any of the teams pull a story, we make sure both QA and development are involved in the product vision, and we're each giving feedback from our particular discipline on the technical capability of the model: is it feasible? Is it going to be performant? Can we actually do it?
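The check-in feedback loop described above can be sketched very loosely in code. Everything here is a hypothetical simulation, not AtTask's actual pipeline: a commit triggers the automated regression suite, the result comes back immediately, and a risky change can be reverted before it blocks anyone else.

```python
# Toy model of CI feedback on check-in: run the regression suite on
# every commit and report failures immediately so the team can revert.
# All names (Commit, run_suite, the sample tests) are illustrative.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Commit:
    author: str
    message: str

def run_suite(tests: Dict[str, Callable[[], bool]]) -> List[str]:
    """Run every regression test; return the names of the failures."""
    return [name for name, test in tests.items() if not test()]

def on_check_in(commit: Commit, tests: Dict[str, Callable[[], bool]]) -> str:
    """Give the developer immediate pass/fail feedback for one commit."""
    failures = run_suite(tests)
    if failures:
        # Escalate the risk right away; the team can revert the commit.
        return (f"{commit.author}: REVERT '{commit.message}' -- "
                f"broke {len(failures)} test(s): {', '.join(failures)}")
    return f"{commit.author}: '{commit.message}' is safe to merge"

if __name__ == "__main__":
    regression_suite = {
        "login_works": lambda: True,
        "report_totals_correct": lambda: False,  # pretend this commit broke it
    }
    print(on_check_in(Commit("dev1", "refactor report engine"), regression_suite))
```

The design point is the immediacy: the breakage is attributed to the one commit that caused it, at push time, instead of surfacing weeks later in a release cycle.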
QA works to build the technical test cases, building that map between requirements and test cases and filling in the ambiguities to make sure we don't leave gaps that will bite us down the line. We do this testing around requirements, and until we all reach synthesis and that work product is done, it stays in the planning phase. Once we've all collaborated on it and it's ready to roll to the actual sprint team, it can get pulled into the sprint team model. And because of our discipline, because we split the manual team off from automation, we're able to automate that within the sprint. Part of the story we run, part of the requirements, is not just developing code but also developing the test cases that make sure the code works. So in the middle of the process we're developing both the tests and the code that drives them, and at the end, we pull in the manual team for anything we could not or should not automate and do that testing, and also work exploratory, to take a look at what we've done. You can see the collaboration across the whole model, and part of that exploratory work involves the PM team as well: we pull them in to make sure, before we finish and sign off a story, that it's actually what we were supposed to do for the client. Then the team merges into the master branch, which pulls in all the rest of the product, and that's where our continuous integration fires off. Every automated test we've written runs against that merge, and it escalates the risk and tells us: what did we break in the app? Did we do good things on integration here, or do we have a lot more work to do? If it turns out to be risky, the team can revert that code and fix it immediately, but no matter what, we're not blocking anybody else's work.

This is a single team at a single scale, and on the screen I can expand the number of teams. We actually have eight teams that roll code at the same time, each an individual sprint team, and this maybe puts it in perspective: all these stories being rolled, all these ideas, all these things we want to do for our clients, and all that risk being fed back to the teams to let them know where they are. As those stories become ready and the automation suite looks good, we're ready to release code.

So let's talk about the talent needed in QA and how it's changed over the last few years. There's a lot of need for people in this industry, and it's exciting to talk at a college here. I remember sitting in these discussions 25 years ago, and primary in my mind was obviously what I could do with my career and my skills. The interesting thing is that there's opportunity on both sides. On the manual side, for logic, there's no single discipline: if you're good with databases, if you're good with servers, if you just like to play with tools, there's so much we can do. We build from manual test developers; we build that logic; we need that business synthesis and that logical mind. We also need pure coders: our organization is Java end to end, and when we hire people in automation, when we hire developers in test, they go through the same discipline our developers do; both sides are equally important. And for our CI architects, it's such a challenging field to find people with the skills across different tools and different areas, and they do get to play with lots of cool stuff. At the end, technological leadership comes from all of that: people come in from all over the industry because they can lead across these different subject-matter areas. So there's a lot of need and a lot of pull. The question that usually comes up, when I have one-on-one conversations about this, is what the industry rewards you for with these skills. So
I don't know if you can see it on the left, but I pulled an assessment of enterprise QA salary ranges for Armenia. Where the gray starts is roughly your minimum, and the red is your potential. I think if I pulled up a list of different disciplines in straight core development, you'd see it matches up to about the same amounts. As we look at the disciplines in product and development, the expansion, especially in leadership as you grow, shows there's so much need for this in the industry. The focus has been off for the last few years, but with Agile and the evolution and revolution that has come with it, a lot more intelligent people are being pulled into QA.

Well, that's the presentation for today. I want to leave it open for questions; I think we have some more time left, so if anybody has questions, I'm available for probably the next fifteen minutes. I appreciate everybody coming out today. If you want to reach out to me, or if you're interested in AtTask and the careers we have available, we also have a blog that's pretty interesting: all our technological leaders are blogging there about the challenges we face in the industry, from product all the way to development. So if you're interested in this, or interested in a career with AtTask, take down the information, and I look forward to hearing from you.

Is there a chance to see this presentation PowerPoint on your blog or somewhere?

Yes, I'm asking to have it put up on the blog; probably next week it will be out there. Thanks, everybody.

...more valuable than automation? How we use manual testing is to sit at that early layer and build out the core logic; the two sort of feed together. I want to say one is more important, but they go hand in hand. The people who work on the development side are our developers in test; they're pure coders, and we want them efficiently driving that. You can't ask them to automate the logic in a sprint cycle and expect them to do the exploratory level around it too. How the manual team works is they develop that logic and feed it to automation, basically as a checklist, and then automation works the technical side to figure out how deep they want to drive each test down. We work in Sikuli, we work in Selenium, but the developers in test who work for me drive down to deep layers: we write unit tests, we write Tiles tests and controller tests, we work in the Spring framework. Getting them to be that technical and have them drive the logic as well, that's two jobs. So they work in coordination, and they're both equally important. If you have poor test logic, it doesn't matter what you automate: if you're testing the wrong things, or not testing enough, or not testing in the right way or efficiently, you're going to have problems. Machines only do what you tell them to do. Does that answer your question?

What happens is the product teams deliver us a story, an Agile story, and at the same time our manual testers sit with a principal engineer and work in coordination with that group. What they deliver is what I call a requirements-to-test map: they take the requirements in that format and drive them into all the individual pieces that let us validate them. All our test cases are done at the requirements level, so when we're done with the requirements, the tests are embedded into them, and product, development, and QA have all seen them. That's before anything gets pulled by a sprint team, so when the sprint team gets it, they have basically all the information for QA and for development to drive that story. The great advantage is that if you have those testable requirements, it's like going into an exam where somebody says, hey, by the way, before you go in, here are all the questions, here's everything we're going to test you on. It gives development that perspective of, okay, this is everything I need to do, I have a clear idea of it now, and if I code, it's more test-driven. Does that answer your question?

Is this somehow like you're writing your tests for an API, or...?

At that level it's black box, so we're doing it from a customer perspective. We're kind of joining what product does with what development will do: taking the technical side and merging it with the black box of what the customer expects. Any other questions?

I'm studying in one of these departments. What are some skills I should have to decide, okay, I can go into the QA field, I can become a QA? What are some tools and some information I should start learning? Because we don't have such a profession here in our community.

Right, and that's a problem in the industry; it's the same in the U.S. There aren't individual academic disciplines for QA, so we tend to get people from other disciplines. The easiest answer is around automation: if you're going to become a developer in test, you need to be excellent with your code. That's really where you want to focus; we hire core developers for automation positions. For manual it's a mix of things: business logic understanding, knowing how to look from a customer point of view. Communication skills are hugely important. Any skills around formal logic help, so that when you look at a piece of logic you can identify all the ways you would approach it. So it's multiple skills, and I think for manual QA it's very hard to find people who come up the right way in that discipline, because there's not one career path to work from. If you have a logical mind, if you can approach things in a way that really breaks down the logic of the situation, if you have great communication skills, written and verbal, and you're able to bring in the technological side, where you understand how technology works, and you can build all that together, I think those are the things
we look for but it's a challenging field to find the right people in because it's a lot of skills to kind of bring forward anybody who's a little shy to reach out for me on LinkedIn I'm happy to be there we'll be blogging on the ATAS blog you'll see my name and my colleagues up there feel free to come out to those discussions and talk about it I'll try to get the presentation up there on the blog next week so if you want to comment on that as well it'll be really great thanks everybody