Hi everyone, good afternoon. This session is going to be about app testing, continuous integration, and automation. What I'll be doing is trying to apply some of the concepts and principles from these subjects as they're used in software engineering, but in a mobile context. Before I start, I just want to know: how many people have actually tried automating something in their app delivery process? Very few. So I'm hoping there will be something in this for you to take back and try out within your own delivery process. To set the context, let's look at the agenda. We'll look at the current state of play in terms of how app delivery is happening and some of the major issues in the app development world. Then we'll look at some very common problems. I won't talk too much about the problems themselves, because I think most of you are familiar with them. We'll focus on some solutions from our side, using some open-source tools and processes we've come across, and then talk about how we've actually used these tools. Finally, I'll try to wind up with a quick demo if time permits. So let's look at the current state of play. Any guesses on how many apps are on the app stores as of today? A wild guess? Sorry? It's about 1.4 billion... sorry, million. That's a huge number of apps. One stunning statistic I came across is that 60% of these apps on the Play Store never get downloaded. That's a huge number. And 80% of the apps that do get downloaded are not opened more than once. I was actually listening to some of the talks yesterday and somebody was talking about deep linking — I forget the speaker's name, sorry — and I found it very interesting because it addresses this second point a little bit: a lot of apps get downloaded,
people never figure out what they want to use the app for, and it just sits on your phone until one day you decide, okay, I don't want this app anymore, I'll just uninstall it. That's not good, is it? If you look at the overall figure, 92% of apps are not getting used. Now, to me that's a problem of app discovery. I hope all of you are familiar with this concept: basically, you've built an app and you want people to use it, but they need to know that an app like this exists, right? And if you look at it, 60% of these apps are not even discovered by people. Imagine the amount of effort that has gone into creating them, but probably not enough effort into marketing them and showcasing them to the world. So that's a very important problem. The second one I want to look at is the quality of the apps. This is something I picked up from the Play Store, for a very popular app. I haven't mentioned names here because I think these apps are wonderful ideas and they're doing pretty well, but unfortunately these are some of the comments I found on the Play Store. One says: got another update today; it says bugs fixed, but I can't see anything fixed; the same issue I reported a month back still exists. And this one was pretty interesting — somebody has been very nice to the developer. They're saying it's fine, it's a nice app, but it doesn't work on 2G, and sometimes it doesn't work on 3G. Just for information, this app is very popular in India; it's used to make payments. Now, the whole purpose of a payments app is to let somebody make a payment on the go. If it doesn't work on 2G and is not stable on 3G, I wonder how people are going to use it. I'm also hoping it works on Wi-Fi. So this is not a good state to be in, right? The problem here is, yes, the apps are buggy. Probably not enough testing has been done on them.
And the fundamental problem, to me, is the delivery process. Because in the second case, there's an objective that was set for the app which is probably not being addressed, and not enough people looked at addressing it before the app was released. In the first case, somebody actually reported a problem: either the developer hasn't understood the problem the user is facing, or he's fixed it and the fix has gone missing somewhere during the delivery cycle. So both of them, to me, are problems with the delivery process itself. One final statistic is on crashes. It says that 47% of apps crash more than 1% of the time, and 2% of apps crash more than 2% of the time. The first stat effectively means that for every 100 times such an app is opened, it crashes at least once; the second is like two crashes out of every 100 opens. Is that what you want your app to be doing? When I read these stats I was wondering, it can't be so bad, right? Now, I'm a big cricket fan. I keep following the matches happening across the world, and I have an app that I use to check the scores. There was a match going on, so I opened the app, checked the score, closed it — all fine. Tried it two, three times, all fine. The fourth time, the app just crashed. I was actually at a loss for words to explain this, because nothing had changed: same device, same network, same user. So why did the app crash? I still don't know. Then I thought, maybe it's one out of four times — so I opened it many, many times and it didn't crash, until around the 20th or 30th time, when it crashed again. So there's a problem hidden somewhere, and we need to identify it. And if you look at an industry split of these crashes, they happen right across the board: gaming, multimedia, news, business apps, m-commerce apps. The percentages may vary, but crashes keep happening.
So this is where we are today. There are problems with app discovery, and there are problems with the quality of the apps. Now, app discovery is something you can address by tying up with people who specialize in that area — you need funds and dedicated resources for that. In terms of the quality of the app itself, I think you need to look at your internal processes and see where things are breaking. There were two interesting talks, I think one yesterday and one today, where people spoke about how they started re-engineering their apps — making changes to address problems identified once the app went live. I found them very interesting, and I could relate to a lot of the problems they were talking about. We'll try to address some of these — probably not with a solution all of you can use, but at least some of you can take it back and try implementing it. So let's look at why apps are buggy. We would like to believe that people test their apps, right? How many of you out there test your apps before they go live? Okay, that's a bit scary. Really scary. What I'm going to talk about now is pretty much in line with the number of hands I saw. The first statistic here is from a researcher called Christina Knight, based out of the US — I don't know her personally; this is from the internet. She found that 77% of mobile users are worried about the performance and quality of an app before they buy it. That's natural, right? But notice the point in red: 51% of app developers say they don't have time to properly test their apps before release. I think the number of hands I saw here agrees with that. So let's step out of the app world for a minute and look at this from a real-world perspective. Imagine you're building a car.
You use the best engineers and the best materials to build the car. You go and market it, you get a customer, and you tell the customer: I've used the best resources and the best engineering techniques to build this car, so I don't think it's going to have a problem. But you know what? We spent so much time building it that we didn't have enough time to test it. So when you're going at 100 miles an hour you might have a problem — the brakes may not work — but it's okay, we'll fix it later. Once you have a crash, we'll fix it. Now, it's a little exaggerated to compare a car with an app, I know, but the problem is the same: you're creating a product, and the experience of using that product is what's going to sell it, right? So that's a big problem for me. And if you think that statistic was a one-off, there's another survey I found. This one was done by some of the top software companies in the world, the likes of HP and Capgemini. They found that only 31% of organizations actually test their mobile apps. That's scary again, because we're talking about software companies saying they don't have time to test their apps. So it got us thinking: what is the problem? Why are people not testing apps effectively? We asked around and found a lot of reasons. I won't get into all of them because I think you're familiar with most of these, and there were probably a thousand more I couldn't put up here. But there's definitely a problem we need to address. So let's start looking at it. What we did is talk to a lot of people: I started with people within my team, then within my organization, then my circle of friends; I did a bit of research on the internet, looked at the causes, and then we grouped them.
So in the next few slides I'll try to highlight these groups of problems and some potential solutions to them. Problem one is people. Yes, we are the main cause of issues, right? I'm sure most of you would agree, because if you look at the problems people typically cause, it's poor designs or flows, and poor code. People design the app; people write the code. So this is the fundamental cause of any bugs in your app. If you look at it from a broader perspective and try to understand why this happens: typically, when you get a developer on board, what do you expect him to do? Write code, build the app, test the app. Simple, right? So why is it failing? The cause, to me, is that a lot of the other activities involved in getting those three things done are not being addressed. To write code, you first need to understand the requirements, right? Let me put that in perspective. Imagine I want to build another app that shows me cricket scores — pardon all my cricket examples, but there it is. I call a developer and say: okay, dude, build me an app that shows cricket scores. Simple requirement, right? I give him my APIs to hit to get the score. He says okay, goes off, and comes back after five or eight days with a wonderful app that has a detailed scorecard. And I tell him: no, no, no, this is not what I wanted — I wanted to see a summary of the score. Now he's frustrated because of the requirement, right? But whose problem was it? His or mine? To me, both of us had a problem: he didn't ask, and I wasn't clear. This is where problems start coming in. You need to get a clear understanding of the requirement itself.
Once you're done with that, you need to understand the design and the UI elements. Then, of course, you write the code. You need to follow coding guidelines and standards. Somebody has to review your code — we'll talk about code review as well. You need to check what kind of resources you're using: are you using the right APIs or not? You need to check your code into a repository and build it. I was actually speaking to somebody outside yesterday and got a bit of a shock: I asked him what repository he used, and he said, yeah, we're planning to use one — probably once the first version of the app is live, we'll use a repository. Strange. So, once you check in the code, you build the app. Then you need some kind of app distribution mechanism — you have to have one nowadays, because you need to test with a lot of people and a lot of devices. You need to do regression testing. You need to understand code coverage, integration tests, bug fixes. So there are many activities encapsulated in those three statements: write code, build the app, test it. The problem is that very often the developer isn't allowed to give his own estimate of the time he needs to build the app — somebody does it for him, and he has to cram himself into that estimate. This is where issues start coming in, if all these other factors aren't looked into. Now, one potential solution is to automate. You can automate a lot of these tasks, especially the ones in green, by using the right tools and techniques. So in the next slides I'll take you through some of the techniques we have used or come across. But first, before I get into automation, I'd like to spend a couple of minutes on understanding requirements itself, because that's the root cause of a lot of the issues that come up.
So here we found something very interesting called behavior-driven development. Has anyone heard about this? I can see a few hands going up — great. The whole crux of the matter is that you start communicating more with all your stakeholders — testers, developers, designers, the business guys, the managers. We need to talk a lot to get rid of misunderstandings and misinterpretations of a requirement. Look at what typically happens in the software world: somebody gets a brilliant idea, a bulb flashes, and he says, I have this idea for an app. He starts putting requirements into a document and mails it to the developer: go through it and start building. What happens? I have an idea — like the cricket score app I just talked about — but how well have I captured it in the document? The quality of the document itself is an issue. I'm not saying we need to write a detailed document; I'm not one for that. What we need to do is talk. Just get people into a room and start talking — which is the essence of behavior-driven development. It's based on test-driven development and domain-driven design. I don't want to get into a lot of theory — it's a vast topic by itself — but the crux of it is: communicate. And use simple language — English, which is kind of universal, or something else understandable by everyone on the team. Another key factor is that you break your requirements down into very, very small bits, and you agree on entry and exit criteria so that everyone has a common understanding. If I take my cricket app example: my entry criterion is, I'll give you an API that returns the detailed score; and the exit criterion is, you need to give me an app, a UI, where I see just the summary of the score. That clears things up, doesn't it?
Now you start communicating a lot more, and you get more done in a more efficient manner. We came across a framework called JBehave, which captures this whole essence of behavior-driven development. If you go to the JBehave website — I've listed the URLs here, so if the slides are published you can go through them — the essence on the front page is this: write a story, map it to Java, configure your story, run the story, view reports. Sounds simple. Yes, it's a methodology; you have to adapt it, try it, implement it. It'll fail. Try again. It'll succeed. If you ask me about my own experience with behavior-driven development: about five or six months back, within my team, I had a requirement to build a very simple app, one or two screens. I got my developer and spoke to him about the requirements, and he started. Then I called my designer and spoke to her about the requirements, and she started. When they finally came together, there was a lot of mismatch in their understanding — problems. It was around that time that I came across this, and I thought, okay, next time I should try it out. I didn't get into a lot of detail or take time to explain the methodology to my developer or designer; I just got them to my desk, and we spent about 45 minutes, if I'm not wrong, discussing in detail what to do. All three of us ended up with a very clear understanding of what the others were going to give us. The earlier app, which was very similar in nature, took us around six or seven days to get a prototype out. This one took two days. So there's definitely effort to be invested, but I think the return on investment is quite justifiable. Now, you might turn around and ask me: how often is it possible to get everyone together and talk? At some point in your project, I'm sure everyone has to talk, right?
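To make that "write a story, map it, run it" flow concrete: JBehave maps plain-English stories to Java step methods via annotations. Since I can't assume the JBehave jars here, below is a tiny hand-rolled Python sketch of the same idea — the story wording, step names, and the score itself are all invented for the cricket example.

```python
import re

# A story written in plain English, in the Given/When/Then style that
# JBehave popularized for Java. Everything here is invented for the
# cricket-score example from the talk.
STORY = """
Given the score API returns 287/4 in 42 overs
When the user opens the app
Then the summary shows 287/4 in 42 overs
"""

world = {}  # state shared between steps, like fields on a JBehave steps class

def given_api(score):
    world["api"] = score                           # pretend the backend returns this

def when_open():
    world["screen"] = "Summary: " + world["api"]   # the app renders a summary

def then_summary(score):
    assert score in world["screen"], f"expected {score!r} on screen"

# Each story line is matched against a pattern and dispatched to a step --
# essentially what JBehave's @Given/@When/@Then annotations do in Java.
STEPS = [
    (r"Given the score API returns (.+)", given_api),
    (r"When the user opens the app", when_open),
    (r"Then the summary shows (.+)", then_summary),
]

def run_story(story):
    for line in story.strip().splitlines():
        line = line.strip()
        if not line:
            continue
        for pattern, step in STEPS:
            match = re.fullmatch(pattern, line)
            if match:
                step(*match.groups())   # pass captured text into the step
                break
        else:
            raise ValueError(f"no step matches: {line!r}")
    return "PASSED"
```

The point is that the entry and exit criteria you agreed on in the room become the Given and Then lines, so everyone — including non-developers — can read exactly what "done" means.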
Maybe your customer is sitting somewhere far away — US, UK, Europe, wherever — but you need to talk to them, right? What you need to stand up and tell them is: boss, if you don't talk to me today, you're going to have to talk to me in a very bad language six months down the line, when everything is screwed up and neither of us knows what to do. So talk to me now, and I'll ensure I give you better quality. I think that's a very important thing to do when re-looking at the way you deliver. The next thing I'm going to talk about — and I think a couple of speakers mentioned Jenkins in their slides, which helps — is continuous integration. How many of you are using this within your delivery process? Again, very few hands. This is something I think you should really, really adopt. Have you all heard of continuous integration? It's something that has existed for ages. Anyone who has not heard of it? Okay, I saw a couple of hands there, so here's the two-second introduction: you have a code repository, and people check code into that single place. Every time there's a check-in, you build the code and ensure it's stable, so that one person's check-in doesn't affect anyone else. What this does is stop issues from propagating — you catch issues earlier. And it doesn't end with a build: you need to do some amount of validation on the build to ensure it's stable and robust, so you can actually use it for the next levels. There's a lot more to continuous integration; just Google it and you'll get tons of material.
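That two-second philosophy — every check-in triggers a build plus validation, and a red build stops the line — boils down to a fail-fast pipeline. Here's a minimal sketch of the idea; in a real Jenkins job each step would shell out to something like `./gradlew assembleDebug` or your test runner, which I've replaced with plain callables so the sketch stands on its own.

```python
def run_pipeline(steps):
    """Run named CI steps in order and stop at the first failure.

    `steps` is a list of (name, callable) pairs. A callable returning
    False (or raising) marks the build red and skips everything after
    it -- which is how a check-in that breaks the build gets caught
    before it propagates to the rest of the team.
    """
    for name, step in steps:
        try:
            passed = step()
        except Exception:
            passed = False
        if not passed:
            return f"FAILED at {name}"
    return "GREEN"
```

Usage is just `run_pipeline([("build", build_fn), ("unit tests", test_fn), ("package", package_fn)])`; the validation steps after the build are exactly the "it doesn't just end with a build" part.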
The framework we've adopted is Jenkins, which is very popular in this world. It has existed for ages, and I've used it in the past. Somebody asked me, why Jenkins? The only reason is that I have experience with Jenkins and it works for me — that's it. The next problem I'll talk about is fragmentation, but before that, there are two more points related to the previous slide that I'd like to cover in a little more detail. One is test automation — this morning, too, a couple of people were asking questions about test automation and cross-platform automation. I'll attempt to show you a framework that has helped us in that area, so I won't go into detail on it right now. What I will do is spend a minute on code review. How many of us actually do code review? Fantastic — but still only 50%. If you're not reviewing code, you're giving issues an opportunity to go through to the customer, right? The other aspect is how you do code review. Typically you have a coding standard or guideline that somebody has defined for you — Google, your organization, your lead. And what does it generally contain? Your code has to be indented so many spaces; you have to follow a naming convention, start with lowercase, uppercase, blah, blah, blah. Is that what you should be doing in code review? If you ask me, no — that's the very basic stuff any developer should handle anyway. What you should do in code review is actually understand the business requirement, map the code to it, and see whether the code creates any issues with that requirement. A: does it satisfy the requirement? B: if it does, how well is the code written? Trust me, you can catch a lot of performance issues — a lot of bugs — through an effective code review.
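The mechanical half of review — indentation, naming, line length — is exactly what static-analysis tools automate. As a toy illustration of what such a check does, here's a sketch with two invented rules (a line-length limit and lowerCamelCase method names); real tools ship far richer rule sets than this.

```python
import re

# Toy lint pass over Java-ish source. Both rules below are invented
# examples of the kind of guideline checks you'd otherwise burn code
# review time on.
MAX_LINE = 100
METHOD_DEF = re.compile(r"\b(?:void|int|String|boolean)\s+(\w+)\s*\(")
LOWER_CAMEL = re.compile(r"[a-z][a-zA-Z0-9]*")

def lint(source):
    """Return a list of human-readable findings for `source`."""
    findings = []
    for n, line in enumerate(source.splitlines(), 1):
        if len(line) > MAX_LINE:
            findings.append(f"line {n}: longer than {MAX_LINE} chars")
        m = METHOD_DEF.search(line)
        if m and not LOWER_CAMEL.fullmatch(m.group(1)):
            findings.append(f"line {n}: method '{m.group(1)}' is not lowerCamelCase")
    return findings
```

Run something like this on every check-in and the human reviewers are freed up for the part that actually needs judgment: whether the code meets the business requirement.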
You should try to practice that kind of review; I think it's very important. You can't automate that bit, definitely. But what you can automate is checking the guidelines and standards — which is what most people spend their code review activity on. You can do that with tools: there are quite a few out there, like PMD and Checkstyle, and a couple of others whose names escape me, that you can plug into your delivery process. It will really help you. Moving on to the next problem, which is fragmentation. I'm not going to get deep into it — you all know what I'm talking about — but just to set a bit of context, there are two issues. One is the OS split: that's how all the Android versions were spread across devices as of last week. And of course there are the screen sizes and densities; we're all aware of this. But let me shake you up with one more thing, from opensignal.com — there are sites that give you a lot of statistics about how handsets behave and about handset and OS usage. In November 2013, there were around 11,800 distinct device models running Android across the world. By November 2014, that had shot up to more than 18,000. So we're looking at an increase of 6,000-plus devices in a year's time. That's just going to create more havoc in this fragmented world, right? And now you're looking at scenarios where you deploy an app on one version of the OS on one version of the hardware, and suddenly another version comes in. How do you handle this? To me, that's a very big challenge that all developers, testers, and organizations are facing. Now, what are the problems? The perceived problems are that you need to check all these factors, right?
There are quite a few listed here, and probably a few more I could add, but those are the problems people perceive they have. To me, the actual problem is managing your devices. When I say managing devices, I mean: how do you decide which devices to buy? There were 6,000-plus device models added last year — how many of them should I buy, and how many can I ignore? That's a challenge. The next challenge: okay, assume I buy 50 devices. How do I ensure they're all charged? How do I ensure they're all kept up to date with the latest updates from the platform owners? How do I ensure they're made available to the right people at the right time? How do I share these devices? So many things, right? You never think of this when you start building your app; suddenly you're faced with a lot of challenges you never thought of upfront. The other important factor is cost. You'd agree with me that all these devices cost a lot, right? They're not cheap. High-end Android handsets sell for around 50K rupees, and even the low-end ones you'd really want to test on are in the 10-15K range. So how many would you buy, and how would you use them? To me, those are the two important challenges. One solution I came across — and I think it's a really innovative one — is mobile testing in the cloud. Anyone heard of this? Cool. Anyone tried using it? I must confess I haven't used it myself, but — anyone? Cool. The whole concept is that devices are made available to you in the cloud, and you interface with them through a web UI. You might ask: how do I install an app onto them, and how do I check stuff? Well, they give you a lot of capabilities. I've had only a very minimal interaction with one of the vendors doing this. They come at a cost, definitely, but not as expensive as buying the devices.
So effectively this is something like a platform-as-a-service setup, done beautifully. They take care of managing the devices and give you an interface over the internet where you just log in and say: I want such-and-such device for so much duration. You buy the time slice and start using the device as if you own it — the only thing is you can't physically hold it in your hand. They give you video recordings of how the app is working, the memory usage, the CPU usage, how much file storage your app uses. A lot of runtime dynamics are provided by these guys, which I think is very useful if you're an app developer — it opens your mind to a lot of things you'd probably never have thought of before, once you start using these capabilities. So to me, a potential solution for fragmentation is mobile testing in the cloud, and I'd urge you to look into it in a little more detail and see how you can adopt it. The beauty of it is that you can contain your costs — you can say, I have a budget of so much for devices to test on. I think it's pretty good and very effective. And the other beauty is that any new device that comes out, they add to the service, so you needn't worry about buying new devices. The other aspect is app distribution. Distribution is the capability of... imagine you have 10 testers and you want to send your app out to all of them. They start testing, and one of them raises a defect. You fix it, but you don't want to push the fix to all 10 testers straight away — you probably want to give it just to the guy who raised the defect, and once it's confirmed fixed, propagate it to all 10, right? How do you tackle that? That in itself is a challenge. Forget cloud-based testing for a moment — imagine you just have 10 devices that you want to use.
How would you send those builds out to different people? That again is a challenge, right? You first need to maintain a registry of who has which device, which device runs what OS, and which user has what version of your app. Then a bug comes in, you fix it, and you need to know: I have to send this updated version to this particular user. How do you track all that? Try using an Excel sheet for it — I think you'll give up within a week, because it's very tough and the updates are so frequent. That's where app distribution comes into the picture, and there are wonderful frameworks available at the moment. One I've actually tried out is called TestFairy. Has anyone heard of TestFairy? Cool — I hope you've actually used it. It's very, very useful, and the beauty of it is that it's free. They give you so much capability at no cost. You register with them, you give them your app, and they take care of the distribution and maintenance aspects, which I think is just wonderful — a fantastic service. It also helps you handle fragmentation to an extent, because they're effectively managing your devices for you. The next thing is response times. Even today, somebody from Hike was talking about flaky networks and how you need to build apps that cater to them. How do you handle this? It's a very big challenge, isn't it? Remember the comment from one of the earlier slides about an app not working on 2G and being unstable on 3G — you have services that rely on network speeds. How do you handle that challenge? The diagram I have here is a statistic I managed to obtain, again from the internet, which normalizes network response times across the world.
The idea is that if a response takes one second in the US, it takes 0.8 seconds in Canada and, unfortunately, 2.1 seconds in India. Our network is definitely not the best in the world, but we do have one. The problem is: you might be sitting here in India building an app that's going to be used in Europe or the US. How do you ensure the app is going to work there, on the kind of network they have? One potential solution is to use simulators. Has anyone tried SoapUI? Heard of it? It allows you to mimic backend responses — it's a dummy backend, but at least you can test against the responses you're going to get. SoapUI also supports REST, I think fairly recently, so it's pretty good. If not, build your own simulator; it doesn't take much. We've actually built one ourselves, a REST simulator, and it works beautifully — it allows you to do a lot of stuff and really opens up opportunities from an app testing perspective. The other thing you need to do is mimic the delays. Testing is not just open the app, click, click, click, click — no, not working, give it back to the developer. It's about thinking a lot, thinking ahead, and you need to factor in this capability of mimicking delays. If you set up two URLs — fire one with a delay of one second and another with a delay of 2.1 seconds — you'll see how your app behaves in the US versus in India. One more solution I have is leveraging mobile cloud testing. You might ask how that helps here. One of the capabilities I found very useful in mobile cloud testing is that they actually allow you to run your app on a network in the US or Europe — they have tie-ups with mobile operators, so you can run the app on a live network there.
That way you get a feel for how the app would respond if it were being used by somebody in another country. So that's another solution. We've now looked at three broad categories of problems. I know I can't address all of them in one shot, but there are definitely solutions to all of these problems, and we need to invest time in solving them so that we can build something very good. One last thing is planning. We all talk about planning; we all have big plans for everything, whether it's going out for a movie or building an app. We need to look at our plans and ensure we're addressing our own problem, not somebody else's. Be very sure you're actually facing the same problem somebody else has before you try to adopt their solution. So: always solve your own problem, identify the right tools that will help you solve it, and identify the right flows. When I say right flows — people pick up a lot of tools, and the typical reason they discard them is that they start feeling, this tool is not working, it's too much overhead to learn. I think that's where people go wrong. They need to look at how these tools can effectively feed into each other, how they can help each other, and how they can increase your overall productivity. That really helps you build something good, and tomorrow, when you get feedback about your app, your entire next release cycle becomes much easier if you have all these tools with you. So when we looked at all these challenges — I mean, there were quite a few more; it's not that there are only three problems with mobile apps, there are probably a zillion more things you need to handle —
we decided to attack at least these three or four very big issues that are plaguing people, which we understood are not just within my team or my organization but across the industry. And I think I'm justified in saying we were right, because in yesterday's and today's talks people were talking a lot about these issues. So when we looked at identifying a solution, we looked at three important factors. One is simplicity. We didn't want to invest our time in learning something that was very difficult. People just get put off; they already have enough pressure to do stuff, so they want something that's simple and easy to adopt. We wanted to test what goes live. I'm getting to a test automation suite here. We evaluated a lot of suites, including something called MonkeyTalk. Anyone heard of MonkeyTalk? A lot of you have probably used it as well. The challenge with MonkeyTalk is that you need to inject additional code into your app for it to run, which I think is a problem. I don't want to do that if I'm building an app, right? I want something that is cross-platform; I think that's very important. We're at a DroidCon, I know, but the unfortunate reality is that we need to build apps that will probably work on both iOS and Android, right? So I don't want to be rewriting test cases that will run only on Android or only on Windows. I want something that's cross-platform. And I think this is pretty interesting, and something we felt would save a lot of time: scheduled and automated reporting, which is a nice-to-have. Cost is another deterrent; a lot of initiatives get lost over it. We wanted something simple and open source, and we didn't want a dependency on any one tool when we built a framework. And when I say framework, don't get me wrong.
We've not built something that's going to change the world tomorrow; we've just used things that are available out there. We wanted to customize them and use them to solve our problems. And we wanted the framework built in such a way that, should a tool cease to exist tomorrow, we don't have a dependency on it and the framework doesn't go for a toss. So we've purposefully used a framework where you can just pull out one piece and put in something you are comfortable with. And one last thing: you always need buy-in from your managers when you're doing something like this, right? Otherwise they're just going to say, no, just write code, I don't care about automation. That's not going to work. So you need to show them what they will get by doing this. So we put some numbers around what we promised and told them they would get X, Y, Z. And when we actually put everything in place, we were able to show this. This is from the way we have implemented this framework, and I've drawn these stats from across three or four months. I've actually seen a reduction in effort, and we were able to put together a pretty neat regression suite that reduces my regression effort by around 70%. These are apps we were using internally, and I know these stats are right because I've been able to measure them hands-on. This will vary depending on how you're using these tools and how well you've adapted them to your environment. So before I get into a demo, one last statement about Appium. We used Appium as our test automation framework. Has anyone heard of Appium? Very few hands, okay? So that's something I think you can really look at adopting within your organization, because it gels well with the principles we had, in terms of being free. They're a very strong open source project with a very good community.
They have releases once every two weeks, so I think that's pretty good. They don't reinvent the wheel: they use the APIs that the platform frameworks expose instead of building something new. And it's quite easy; it gives you the ability to program in a language you are comfortable with: Java, Python, Ruby, and quite a few more listed on the website worth checking out. So what I'll do next is tell you how we have used them. We've used Git as our code repository, Jenkins as our continuous integration tool, and Appium to get the apps onto the devices and run the tests, and we've drawn out some reports in Jenkins. Pretty simple and straightforward, but very effective for us. So we'll just jump to a quick demo. This is my mobile screen, this is the Appium console, and this is Jenkins. What I will be doing now is mirroring my screen. And unfortunately, for some reason, my screen is not getting mirrored here. So I'll probably have to run it on my machine, but then I wouldn't be able to show the device screen as well, and I want to show the other stuff too. I think it's got something to do with the resolution. I'm sorry I won't be able to show you that, but I can definitely run something and you can see it in action. So what I've done is set up a simple job on Jenkins. At least those of you sitting in front can see my device, and I can assure you I'm not doing some kind of drama here. I've got this app, and I've written one simple test case, just for demo purposes, where I log into the app and validate that I'm logged in. So I just schedule the Jenkins build. There you go, the build is running. What it does now is actually connect to Appium; you can see Appium trying to make a connection.
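For flavour, this is roughly what a test client hands to Appium when it starts a session: a set of "desired capabilities" describing the target device and the APK to install. This is a sketch based on the classic JSON Wire Protocol shape Appium clients used; the APK path and device name below are placeholders, not values from our setup.

```python
import json


def session_payload(apk_path, device_name):
    """Build the JSON body a client POSTs to Appium's
    /wd/hub/session endpoint to start a test session."""
    return json.dumps({
        "desiredCapabilities": {
            "platformName": "Android",
            "deviceName": device_name,   # which device/emulator to target
            "app": apk_path,             # Appium installs this APK first
        }
    })


# Hypothetical path, e.g. the APK Jenkins just built:
print(session_payload("/var/jenkins/builds/app-debug.apk", "Nexus 5"))
```

Because the session is just a JSON-over-HTTP conversation, the same test script works whether the Jenkins job points it at a local emulator or a device in a mobile cloud.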
So Appium internally uses Node.js to connect to your device, send signals to it, and run commands on it. We'll just give it a couple of seconds; it's trying to connect to my device right now. Just a second, I think my device is not getting recognized for some reason. So while we wait for it: what Appium actually does internally is use ADB to talk to the device. You see here, it has actually run all the test cases fine, but for some reason it's not showing up on the app. It has reported that the test failed; the test was to try and log in with something, so it actually gives you details about what has failed, the error message, blah, blah, blah. So probably if I just take it off, it'll start working. I can try one last time; it's basically running the test case right now, trying to open up the app. It's still having some issue connecting to my device; I'm not sure what the problem is. Sorry, I'll be outside afterwards and I can show you this demo then; for some reason it's just refusing to work here. So that's pretty much what I wanted to talk to you about, and I can assure you this thing works, because I tried it out here this morning when we were testing it. So that's about it; I'm pretty much done with what I wanted to cover. Before I take any questions, I just want to thank two people from my team who've really helped me set this up and get it going. Pradeep and Anup, I know you guys are sitting somewhere here; thanks a lot for your help getting this up and running. So I'm open to any questions you may have. Sure. Actually, my main question is how you are doing inter-app communication using Appium. Sorry? Inter-app communication. Inter-app communication. Now, we haven't tried inter-app communication using Appium.
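Since Appium talks to the device through ADB, the first thing to check when a device "is not recognized" is what `adb devices` reports. As a small illustration, here is a sketch that parses that command's output; the serial numbers in the sample are invented, but the layout matches what ADB prints, and an "unauthorized" or "offline" state is a common reason a device refuses to show up.

```python
def parse_adb_devices(output):
    """Turn the text printed by `adb devices` into {serial: state}.
    States like 'unauthorized' or 'offline' explain why Appium
    cannot drive the device."""
    devices = {}
    for line in output.strip().splitlines()[1:]:  # skip the header line
        parts = line.split()
        if len(parts) >= 2:
            devices[parts[0]] = parts[1]
    return devices


sample = """List of devices attached
emulator-5554\tdevice
0a3b1c2d\tunauthorized"""

print(parse_adb_devices(sample))
# → {'emulator-5554': 'device', '0a3b1c2d': 'unauthorized'}
```

A pre-flight check like this in the Jenkins job, failing fast when no device is in the `device` state, saves you from discovering the problem halfway through a test run.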
So the whole purpose of using Appium is to connect to your app. It depends on what you're trying to test; here, you're trying to test the app. So if there is functionality that requires communication between this app and another, and app one exposes a way of communicating, then you can definitely find a way. The test case I've written actually uses JUnit, so if there is a way to call it from JUnit, we can just go ahead and use it. One more thing: you ran the build, but it failed, yet you're still showing the build as successful there. OK, that's a configuration thing. If you actually go back here, I just set this up on my laptop and configured some things. You can actually change this behaviour; I just said, okay, whatever happens, show it as successful. You can go and change that. This is a Jenkins thing; it's got nothing to do with Appium. I've just not configured my Jenkins properly. Hi, yeah, thank you for the really informative session. I wanted to ask a few things by taking an example. Let's say I'm writing an app which, for instance, counts the data that has been used on the device: a very simple app showing which app is using how much data, things like that. So does this fit well? What I would want in such a scenario is for Appium, at least, to go to my app and, let's say, click on a button that says Start Counting. Let's say I have Start and Stop functionality. And then after that, I want to run a big download from some server on the web, say a 100 MB file, and then I do a stop counting. So can this be done using Appium? OK, so first: you can do testing in two ways, right? One way, the way we have adopted, is not to do very intrusive testing; we wanted to get the APK file built from somewhere and use it as it is.
So we just wanted to go click, click, click as a user would. The other way is, you have the code with you. Once you have the code, you know the kind of APIs the code exposes, right? So you could refer to those and write your test cases to do what you want. So yes, to answer your question, it is possible, but you might need more details about how your app works and what kind of interfaces it exposes. If you are aware of them, yes, you can definitely do it. OK, so basically, I can integrate Appium into my app at build time? No, you can't. Appium is an external entity; you can't embed it in your app like that. What I was trying to say is, if I understood your question, your app connects to somewhere, downloads some stuff, and you want to see how much data has been used. No, no, no, it just counts how much data any of the apps are using. So basically, I want to start counting, then, let's say, go into the Chrome browser and open a link, a sample download of some file from the internet, which I know is a 10 MB file. And after a successful download, or let's say after 10 minutes, I want to come back, stop my app, and see what the counter has reported. So yes, you use some kind of API that, say, the Android system exposes to get this data; you can call the same API using Appium, and it'll do the job for you. But I basically want to do my call... I'm really sorry, but we'll just have to cut this short. He'll be available on the talk funnel app, so please go ahead and ask your questions there, or take it offline to the speaker lounge. Thank you.