My name is Sushant, and I have my colleague Nikita with me. We work as QA analysts at ThoughtWorks India. Today, as part of this case study, we're going to tell you a story: a story of remote enablement, about how we turned a forbidden land into a land of opportunities by helping a team of QAs adopt open source. This was a team heavily built on commercial infrastructure, and they had invested a lot in it, so it was quite a challenge, and we invested a lot of time in it. What we're sharing here is our experience from six months of transitioning and empowering that team. So let's see what we have in this tour, and let's get started.

We'll start the story with the scene as it stood with our client. As Sushant said, they were heavily invested in enterprise solutions, and as we all know, these commercial, licensed, machine-bound solutions come with their own challenges and technical difficulties. When we spoke to our client and dug into the challenges they were facing, a few stood out. Tech support: for any new feature or bug fix, however big or small, they always had to depend on the tool vendor's support, which increased the turnaround time. They had no control over the internal workings of their test suite. They also told us that the learning curve for the particular solution they were using was fairly steep. Another interesting factor with this client was that they had invested a single resource in automating their entire mobile app, and that person was completely separated from the development team. The development team went on doing its work, and this one person sat at the end and wrote all the test automation.

But do you think that was the only reason they couldn't move away from their current solution? When we looked at it, we thought maybe there was something else: maybe it was a reluctance to write code and adopt new technology. So we started our journey. We began collaborating with them, talking to them, and figuring out what we could do to help them overcome these challenges. But right from the start, every small action we took met with a reaction, so we'll briefly talk about some of the resistance we faced when we first embarked on this journey.

The client we were working with was not based in India, so we had the problem of distributed teams: we had to enable people who were not co-located. To them this seemed enormous, almost impossible. They said, you really can't enable us while sitting on another continent, in another time zone. We took it up as a challenge, said there had to be a way, and went with an enablement approach. It worked: we had a lot of Hangouts and calls to discuss things. Even then, we faced some problems, because when they were actually trying things out, we weren't there to help them through.
We tried to mitigate that with some focused discussions, but there was still resistance in that corner.

The next thing we thought of was moving them to open source. What would be the best way to do that? They had heard about open source and about all these cool terms like Git and Cucumber. So the first step in our journey was to explain the basics instead of jumping straight to a tool. We explained all of it in as much detail as they could take in and build on. While doing this, though, we went a little over the top and ended up explaining our own infrastructure in far too much detail, which was a mistake; it pushed them away a little. Oh my God, there's too much programming, I'm so new, maybe I can't do it, what on earth is a mocking framework, I don't understand any of this. That's when we learned that we need to gauge the information we pass on, see how much of it is actually being absorbed, and only then go further.

Our entire testing infrastructure was based on Ruby and gems, and we were working on Mac since it was iPhone development. We thought it would be fairly simple to move that infrastructure from Mac to Windows: it's just Ruby and gems, it shouldn't be difficult. We were proven wrong there too. We hit so many problems just trying to replicate on Windows what we were doing on Mac that the effort seemed pointless. So some of the resistance came not just from people but from the machines, the infrastructure itself.

Continuing with the resistance: their whole setup was built around a certain enterprise solution, and they ran their tests in test clouds. All they did was write the tests and push them; the tests ran somewhere, and the results came back. That was their only interaction with their tests. How the tests were running, whether the devices were instrumented or not, what kind of tests were running, none of that was something they dug into. So we proposed an approach: yes, the test clouds are good, but we could do the same things on a smaller scale, right in front of our own eyes. We talked to them about CI, about using simulators for automation, and about getting things done in front of you with simpler tools.

Moving ahead, these are some of the learnings from the resistance we faced when we first approached them and were working out our strategy. When we interacted with the people at our client, we realized there are different kinds of people you will encounter: some open to change, some not, and some who are open but still hesitant to step into the unknown. What we learned is that we need to identify these people, and our approach should depend on that rather than being one generic thing that works for everyone.
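(To give a sense of the level of basics we mean when we say we walked them through Cucumber, a first scenario and step definition looked roughly like the sketch below. This is illustrative only, not the client's actual suite; the feature, screen and element names are made up, and the step bodies assume a Calabash-style Ruby API, which is what we eventually settled on.)

    # features/login.feature (illustrative)
    #   Scenario: Successful login
    #     Given the app is on the login screen
    #     When I log in as "qa_user"
    #     Then I should see the home screen

    # features/step_definitions/login_steps.rb
    Given(/^the app is on the login screen$/) do
      wait_for_element_exists("view marked:'login_screen'")
    end

    When(/^I log in as "(.*?)"$/) do |username|
      touch("textField marked:'username'")
      keyboard_enter_text(username)
      touch("button marked:'Log in'")
    end

    Then(/^I should see the home screen$/) do
      wait_for_element_exists("view marked:'home_screen'")
    end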
Another question our client kept asking was how we would actually go about it: what approach would we take to move them away from the enterprise solution and towards the open source we were talking about? We answered with the approach we use in our day-to-day work. Whenever we pick a tool, we spike it first. We try it out and see whether it works. If it does, great, we adopt it; if it doesn't, we drop it; and if there's something better, easier, or simpler, we adopt that instead. That's the flexibility this approach gives you. This was a message they understood once we explained it, and it resulted in a very simple framework that even the client QAs who were just beginners in automation could contribute to.

The other message we put to them very strongly, given how far removed they were from their own testing infrastructure, was: you have to write some code yourself to control it, to be the master of it. Just waiting for an expert to fix something for you won't help you in the long run.

So we've talked about the resistance. Trust me, no one believes in your solution until you build something that pulls them out of their misery. The challenge for us here was that we weren't yet creating the impact we wanted; we weren't leaving a footprint on the system and couldn't change things. So what was the solution that worked for us? It was a simple WebDriver script. They had a problem statement they wanted to automate, and they wanted to do it using their very heavy, enterprise-class infrastructure, which would have taken them more than a day. With WebDriver, a configuration file, a bit of setup, and a few libraries on top, we showed them how we could scale and still keep the framework simple. That was the showcase we did. So whenever you're in this position, keep in mind: if you can build something that can scale, even something small, go ahead and showcase it. That's what people will believe in.

Another aspect we highlighted, and which went down well with them, was flexibility and the ease of migrating from one tool to another. What's the idea here? You don't want to stay with one tool forever. It may be scalable and serving every purpose, but things change rapidly, especially with mobile applications: operating systems, device fragmentation, and the pace at which the market moves. You have to take that into account and change your tools accordingly, so you can't get away from the idea of migrating from one tool to another. With this simple framework we were able to show how easy that is: without changing the tests and without stopping the execution cycle, you can move from one tool to another and things keep working as is. That really opened doors for us.
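(A rough sketch of what that showcase looked like. The file names, config keys and URL here are illustrative, not the client's actual code; the point is that the tool choice lives in a config file and a small factory, so the tests themselves don't change when the underlying driver does.)

    # config/driver.yml (illustrative)
    #   driver: selenium
    #   browser: chrome
    #   base_url: https://staging.example.com

    require 'yaml'
    require 'selenium-webdriver'

    # Builds the driver the suite runs against, based on the config file.
    # Swapping tools or browsers becomes a config change, not a test change.
    module DriverFactory
      def self.config
        @config ||= YAML.load_file('config/driver.yml')
      end

      def self.build
        case config['driver']
        when 'selenium'
          Selenium::WebDriver.for(config['browser'].to_sym)
        # when 'appium' ... other WebDriver-protocol tools could slot in here
        else
          raise "Unknown driver: #{config['driver']}"
        end
      end
    end

    # In a test or a Cucumber support file:
    #   driver = DriverFactory.build
    #   driver.navigate.to DriverFactory.config['base_url']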
To add some icing on the cake, we emphasized faster feedback cycles a lot. Even when you have the systems in place and have selected the right tool set, it's very important that you adopt the right test pyramid, so that the right test sits at the right level. If something should be tested at the unit level, write it at the unit level; if it belongs at the acceptance level, write it at the acceptance level. You need to make that call. At the same time, there are approaches like TDD, where you move away from the idea of testing once, waiting for a developer to fix things some time later, and then testing all over again; instead of that cycle, you work alongside the developers. TDD is just great for this. And then there's CI. CI is a fascinating world, and I'm sure many of you have been working with it; I'll talk about CI in a little more detail shortly.

As I've said, this was a journey, a story I'm sharing with you. This phase was the point where we wanted to reflect on the changes we had made and identify the things that put us in the driver's seat. So let's look at some of those things, some recommendations and highlights.

This holds for any kind of testing, not just mobile or web: it's imperative that you find the right tool. You select it with a lot of effort, you tweak things, you make it your own, you customize it a lot, so it's important to make the selection correctly. This is one message we kept repeating to the QA team we worked with: you have to put in the effort to find the right tool. What comes in handy here is a fail-fast approach, where you change things rapidly and check whether the tool really works, whether it supports the controls you have and the APIs you consume; if it doesn't, you drop it and move on to something else. That fail-fast approach really works.

So what's the benefit we get as QAs? What's the advantage, beyond having moved from one system to another? The biggest advantage, the way we see it, is the control and flexibility it gives us. A QA is no longer doing a limited task confined to one slice of the development process. With the right tools and processes in place, you can work with developers and designers all the way from the point where the UI mockups for the mobile app are being designed to the point where the app is being distributed. The whole end-to-end cycle is under the QA's review. That's the biggest advantage you get here.

Next, there was an interesting question. When we talked about changing things rapidly, they asked: why should we change? Everything is working, we have a scalable framework, things are going well for us, so why change? There are some important criteria to keep in mind when you're selecting one tool over another, and with open source these are some of the interesting and important ones. API support: make sure the tool exposes APIs for automating and writing tests against the kind of app you are actually testing.
A right abstraction layer, so that a new person coming into the team can also contribute to the testing effort. The abstraction has to sit at the right level: not so granular that it's unwieldy, and not so high-level that it stops making sense. You need to choose that abstraction layer carefully, and with the tools available in the market today, and new ones being written every day, you have plenty of options to evaluate, so it's fairly easy.

Support for the latest OS versions and custom controls. This is something you have to keep in mind especially in the mobile world, because of fragmentation. Think of the move from iOS 7 to iOS 8: things just break, it's a nightmare. So you need to keep an eye on the changes that are coming, test them early with the betas, and keep development aligned with them as well. That's something to do on the QA side too.

And good integration with the development environment. When you select a tool, keep in mind that you need to stay aligned with development; you can't really run along behind them in sequence. All your tools need to plug into the development ecosystem as well.

After this journey, we realized we had made good progress. The QA team really liked the idea, they wanted to do it, and they started contributing. At this point in the story we had reason to celebrate: we had actually tamed the dragon, as the name suggests. So let's look at some of the highlights, some of the recommendations we made, and what actually made it possible.

Starting with simple configuration and installation. Earlier, it took them days just to put their infrastructure in place the first time, and even a single small maintenance change could take days. For us it was easy: run one command and a skeleton is in place, ready for you to start writing tests. This was one idea they liked a lot, and it let them decide to work together with development rather than after it. The idea of logging defects and waiting for them to be fixed at the end of the development cycle, then testing again, is gone; they are working alongside the devs.

The third point was the ability to refactor the framework: with good abstraction, scalability stayed under QA control. Even when things are working fine, you can keep changing and improving them without stopping your execution cycle or disrupting your existing tests. That's the biggest advantage we got here.

Then there's ease of migration, which I've already emphasized a lot: the ease of moving from one language binding to another, from one protocol to another. You have different options to choose from, so you can move away from whatever is no longer working for you; you don't have to keep living with archaic stuff.
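(To make the abstraction-layer point above a little more concrete, here is a hedged sketch of the kind of screen object we mean: the tool-specific queries live in one place, and the step definitions read in domain language, so a newcomer can contribute without knowing the query syntax. The class, screen and element names are made up, and it assumes the calabash-cucumber gem and its Operations mixin.)

    require 'calabash-cucumber'

    # screens/home_screen.rb (illustrative screen object)
    # Wraps tool-specific queries so steps stay readable.
    class HomeScreen
      include Calabash::Cucumber::Operations   # touch, query, wait helpers

      def visible?
        element_exists("view marked:'home_screen'")
      end

      def open_settings
        touch("button marked:'Settings'")
        wait_for_element_exists("view marked:'settings_screen'")
      end
    end

    # A step definition then only talks to the screen object:
    #   Then(/^I should land on the home screen$/) do
    #     expect(HomeScreen.new).to be_visible
    #   end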
The last point there is one I'd like to elaborate on a little, because it's how we wanted to gauge our success: how do we know the change we tried to bring in has really worked? The expectation we set was that they would write tests at the right test layer and bring their regression cycle down from three to four weeks to three to four days: keep moving more and more checks into the automated suite and shrink that regression cycle. That is the real test, they are on it as we speak, and things are working pretty well. So that's one more reason to celebrate.

Towards the end, I'd like to make a few specific points about continuous integration, because it was a real stress buster. Once continuous integration is in place, a QA can watch over things from the very first commit to the point where the app is distributed. Your app may be built for QA, dev, beta, and production with different certificates, different provisioning profiles, and different toggles in place, and all of that can be controlled right at the CI level, so you don't need to put in extra effort and test it separately. That was the advantage, and the QA team there was really fascinated by the idea of putting this in place. It's a fancy world, but an effective one.

So that's about it. Our job was done, but actually not done, because as we evolve, we have a responsibility to bring them along and make the same improvements with them. That's about it. Thank you, and happy testing. We'd love to take any questions you have.

Yes, we have spiked out quite a few tools. Can you give me some specifics on what you're targeting: are you talking about mobile web or native apps? Okay, so for mobile web we have used WebDriver. It has worked well for us, and you can try the different variants of WebDriver available in the market today, whether Watir or plain Selenium language bindings; the language bindings are good, and we are using Ruby. For native and hybrid apps we are using Calabash. We spiked Frank, MonkeyRunner, and UIAutomation, the instrumentation that comes by default with Apple. We tried all of them but settled on Calabash. It has worked really well so far, and it has active developer community support, which matters because things change pretty rapidly whenever you move from one OS version to another.

Appium? Yes, we have spiked out Appium as well, and it is actually very good. The underlying mechanism remains the same, but writing tests at a higher level is simplified a lot. Since the underlying mechanism is the same, it works over the same WebDriver protocol, nothing much changes in the background. But of course, if you like its interface and it matches the way you want to write your tests, I think it's a great idea to go ahead with it. Anything else? Any more questions? Thank you, thanks a lot.