Hi all, good morning. I have with me Shama Wazali. She's joining from Pune, and she's our speaker, talking on automating real user scenarios across multiple apps and multiple devices. This is a question we all have, right? How do we automate such scenarios on mobile? For example, how do we interact with an alert on the browser, or with a toast message we get on the mobile device? Interacting with and accepting those is tricky as well. Let's hear from Shama. Hey, thank you so much. Good morning, good afternoon, good evening, everyone, based on where you are joining from. Let me quickly give you a short introduction about myself. I have been working as a senior QA consultant at XMCO, and I have more than 13 years of industry experience, mostly focused on QA automation strategy, planning, and enabling teams with quality processes. I'll quickly move on to the topic. Let's take the first use case. How many of you know Zoom? I want to make sure this is an interactive session, so I'm expecting you to quickly answer in one word or just give a thumbs up, and I'm looking at the chat right now. I just want to know how many of you know of or have used any meeting apps like Zoom or WebEx. I'm using Zoom right now for this conference, and I see more than 40 people joining this session. So yeah, quite a few. In today's world, where we are working from home and everything is remote, I'm sure one or another such app is being used by you. So let us take the example of Zoom. Can you tell me how Zoom is used? Team meetings — and right now I'm using it for a conference as well. So it is used for conferences, town halls, where we have a lot of people joining. Basically we have multiple users in the same meeting. And what exactly do you do in these meetings? Collaborate — you do collaboration. What else? You're sharing information through audio and video, right? Someone says killing time.
OK, so what is it used for? Sharing screens and files as part of meetings, discussions, presentations. Right now I'm presenting my screen. And what do you think the most common and core use cases are? Of course, multiple users joining the same meeting and collaborating with each other. Now let us see where these users are joining from. How do you access these kinds of apps? It can be your web browser, it can be your phone, Android or iOS, and it can be desktop apps as well. I'm sure a few of you are joining from the web, a few from desktop apps, and some of you might be joining through mobile. So now, how do you test these kinds of applications? Even if you have to test them functionally and manually, what are the important scenarios? You will definitely not want to test only the single user, where the user logs in and hosts a meeting. You want to cover the core part of it, which is having multiple users joining the same meeting and interacting. You also want to make sure that all of these users are able to join from different platforms and still collaborate seamlessly. Now, that was about testing it manually. Though it is pretty time-consuming, let us say we accomplished that. But now, how can we automate these scenarios? Because with every single build, I cannot be sitting and validating all of these scenarios across different apps, versions, platforms, different kinds of meetings, different personas — a host, a guest, a participant — a conference, and a lot of meeting controls. It's just not possible for me to do that manually, so I'll definitely have to automate. Now, how do I automate these multi-user scenarios? Quite challenging. So let us see how we solved this problem.
Let me talk about the tech stack that I'm going to discuss and demo. The tech stack we have used to solve this problem is Java based. We have used Appium for communicating with the mobile apps and automating the mobile journeys, and Selenium 3. I'm specifically mentioning Selenium 3 because this is not yet bumped up to Selenium 4; we are working on that, and soon you'll see some updates coming in as part of Selenium 4. We are using a framework called ATD, Appium Test Distribution, which is used to manage multiple devices, and Cucumber to write the BDD tests. We have implemented pCloudy integration for the mobile infra, but we have also made this compatible with running on BrowserStack, HeadSpin, and Sauce Labs — most of the major device farms are supported. And then we have created a custom framework called TestVis. This framework is created to cater to multi-user scenarios: it focuses on orchestrating between multiple users across different platforms, managing that, and doing the collaboration in a single test. Let's jump into a demo. Let me first show how a typical single-user scenario would look. I don't want to jump directly into multi-user, because there are some single-user scenarios as well. If you see this test case, it is pretty readable. The reason behind using BDD this way, rather than a data-driven approach with examples, is readability — anyone could ask me why it is not implemented as data-driven using examples, and we don't want that, because of readability. So this particular scenario is a single-user scenario, where I say: given I am able to log in, I try to log in with invalid credentials, and these are my credentials; then I again try to log in with other invalid credentials. This is a single user and a single app; there is nothing fancy here. And you will also see certain tags here.
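A single-user scenario of the kind described here might look roughly like the following. This is an illustrative reconstruction — the tags, app name, and step wording are my own, not the exact feature file from the demo:

```gherkin
@android @web @theapp
Scenario: A single user attempts invalid logins
  Given I am able to launch the app
  When I login with invalid username "wrongUser" and password "wrongPassword"
  Then I should see an invalid login message
  When I login with invalid username "anotherUser" and password "badPassword"
  Then I should see an invalid login message
```

The platform tags at the top are what let the same scenario run as an independent Android test or an independent web test.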
What are these tags? This particular test case can run independently as an Android test or as a web test, and this is the app that we are running it on. These other two are some additional annotations we have given so that we are able to execute different scenarios grouped together. But what you can focus on right now is this: if I implement this for Windows, all I need to do is add the Windows tag. No more code changes — I don't have to really do anything else. Now let's come to how I can actually automate a multi-user scenario. So this is a scenario with two users involved. Of course, I'm not going to use Zoom or any meeting app in this, because I do not have a developer-signed build with me. So I'm using an app called TheApp, which I'll show you shortly. And I have two users: "I" and "you". I am on Android, launching and working on this application on Android, and you are on the web platform. By default we use Chrome, but you can also mention Firefox and so on. Now, if you look at this, "I" and "you" are the user personas, and these two — web and Android; it can be Windows and iOS as well — are the platforms. And once you mention the platforms in the given step, you need not mention them every single time. After this, I'm saying: when I log in again with invalid credentials, you log in again with invalid credentials. So in the first test, it was a single user trying to log in on Android — or on the web, if I run it on web — two times, with two different sets of invalid credentials. But in this multi-user test, I have two different users, one on web and one on Android, doing an invalid login. So let us quickly run the scenario and see. To run the scenario, I need to pass the property files. So for Android, I need configurations.
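A hedged sketch of how such a multi-user, multi-platform scenario could read — persona names and step wording here are illustrative, not the framework's exact syntax:

```gherkin
@multiuser-android-web
Scenario: Two users attempt invalid logins on different platforms
  Given "I" am using the app on "android" and "you" are using the app on "web"
  When I login with invalid credentials
  And you login with invalid credentials
  Then I should see an invalid login message
  And you should see an invalid login message
```

The platforms are bound to the personas once, in the Given step, and need not be repeated in any later step.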
I need to tell it where my app is, along with all the other attributes it needs for me to be able to communicate with it. Where do I keep all of these files? I keep them as part of my configs, and each app has its own config files — in this case, it is going to be the TheApp config. Along with that, I also have a lot of other configurations, where I define where my logs should go, whether I want to go through a proxy, whether I want to run on my local machine or in a dockerized environment on CI, what the target environments are, what test data file I want to refer to, and the maximum number of drivers I want to allocate, for mobile or for web. All of these configurations are mentioned here. We also have an integration with Applitools, which is a visual validation tool. If you want to configure that, there is a configuration entry you turn on or off by setting it to true or false. So all of these kinds of configurations are there, and they are documented in detail; I'll share the links towards the end of the session where you can have a quick look. Right now I'm going to quickly run this test. I have my two emulators up here, and this test case that I'm running, as you see, is on Android and on web. So yeah, my app has started getting installed — if you see this particular emulator, this is the app that I am using for this demonstration. The first user, if you see here, is "I", who is going to do an invalid login. So this is the invalid login the first user is attempting. And in the same test case, I have another user who will log in through the web. The first step, on my Android app, is already completed, and the other user will start on the web now. Shortly we should see a Chrome browser instantiated — this is the Chrome browser, and my user one is still up and running.
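As a rough illustration, the kind of configuration being described might look like the following properties file. All key names here are hypothetical, chosen only to mirror the options mentioned above:

```properties
# Hypothetical config sketch mirroring the options described in the talk
APP_PATH=./apps/theapp.apk
RUN_IN_CI=false
TARGET_ENVIRONMENT=qa
TEST_DATA_FILE=./configs/test_data.json
LOG_DIR=./target/logs
PROXY_URL=
MAX_NUMBER_OF_ANDROID_DRIVERS=5
MAX_NUMBER_OF_WEB_DRIVERS=5
IS_VISUAL=true
```

Here `RUN_IN_CI` would switch between the local machine and the dockerized CI environment, the driver limits cap how many simultaneous users can be allocated per platform, and `IS_VISUAL` would toggle the Applitools visual validation on or off.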
In the second step of my test case, my user on the web has launched the app and is trying to log in, and this is also going to be an invalid login. So if you see, both users tried logging in — one on Android, one in the browser — and this is through the same test. Okay, that is it; that's what the test case is all about: orchestrating between two different platforms and two different users. And this orchestration is happening through TestVis behind the scenes. It creates the drivers, uses these personas, maps them, and then drives this entire interaction. Okay, that's the test case. Now, let's go back and see how the reports are populated. I had set the Applitools flag to true, because I wanted to capture all of these screens and do some visual validation. So I'll go and see what's happening on my Applitools dashboard. For Applitools, there are only two things you need to set up: your Applitools API key, which is set as an environment variable, and the flag in the configuration file, which you set to true so that it will start capturing. As for the code, I'll quickly show you one of the steps — this is the only line you need to capture these screenshots and send them to Applitools: checkWindow. You give the screen name, which is already part of your class, and all you need to say is what the screen represents. That's it. This is the single line of code you need to write, and it will do all the magic. You will see your Applitools dashboard populating with all of the screenshots, and then you can see if there is any visual change.
I'll not get into detail about the different ways you can capture and how you can run through this dashboard to validate a particular test; maybe I'll leave it at this. So that was the multi-user demo. Then there is another use case. We have had multiple users communicating on different platforms; now think about this: I may have different versions of this application. Zoom, for example, releases new versions frequently, but I as a user will not update very frequently — I might or might not. There will be a few users still using previous versions. So with every single build, we'll also have to validate that the previous versions are working. One valid case I can think of: suppose there is some schema change coming in the new version, and you want to validate that backward compatibility is intact. For example, there are a few users still using the old version of the app, and a few users who have the new version with the new schema changes. And there is a meeting, and people on different versions, with different schemas, are joining that meeting. They should have no problem, and it should be seamless. Though you validate this with the backward compatibility tests you have on the database side, the API side, and so on, you might also want to validate it as part of the end-to-end user journeys. How do you do that? In that case, do I have the same app in my tests? No — I will have the Zoom app for sure, but I have multiple versions of the Zoom app, which are effectively multiple apps. So how do I now orchestrate the same journey with different users using the same app but on different versions? That is one use case.
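The multi-version idea could be expressed in the same scenario style, for example like this. This is purely illustrative — the version-selection syntax here is hypothetical, not something the framework is confirmed to support in exactly this form:

```gherkin
@multiuser-android
Scenario: Users on old and new app versions join the same meeting
  Given "I" am using version "5.2.0" of the meeting app on "android"
  And "you" are using version "5.1.0" of the meeting app on "android"
  When I host a meeting and invite you
  And you join the meeting from the invitation
  Then we should both see each other in the participants list
```

Each version would point at its own APK in the configs, so the two personas genuinely exercise old-schema and new-schema builds in one journey.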
Second use case: we all have apps which involve multiple apps to communicate and complete one user journey. For example, let us take any delivery app — say Amazon, because it is globally used. If you have observed, the person who comes to deliver your order has a mobile app and is using that app. As soon as he or she delivers, they go and update that app, and as soon as that update happens, you see the status updated on your app and you get a notification. It also happens when you're doing exchanges or returns: that person has to validate which product you are exchanging or returning and then approve from their side; only then will the process happen — either the refund process or the exchange process — and on your app you will see those notifications. So this entire user journey of refund or exchange has to go through multiple apps. Again, this involves completely different apps to complete one user journey. So how do you now test that? Let us quickly look at a demo for multi-app. I'll just quickly show a test — I've annotated it with "secon". So here again we have two different users, one is "I", one is "you". One user is using a calculator app — for simplicity I'm using a calculator app — and the other is using the demo app we just used, TheApp; the name of the app itself is TheApp. So I have two different users on two different apps, and they are communicating with each other. If you see, there are different steps performed by these users interchangeably. So let me quickly run this particular journey. And what's worth noting is this particular tag that we're using: it says multiuser-android. Multiuser-android means there are multiple users and all of them are using Android, okay?
In the previous case — I forgot to mention this — it was multiuser-android-web. That means there are multiple users who are either on Android or on web. And there can be more than two; there is no restriction on how many users you may have. For this demonstration I've just used two users, but you can have three, four, five users. Right now I have configured this to have at most five users on mobile and five users on web; it is configurable, and you can increase the numbers. So I'll go back to my test and see what my test case is going to do. This is multiuser-android, so my test case will run both users on the Android platform. These are my two emulators. So the execution has started. I have instantiated the first one, which is the calculator app. The first user is using the calculator app, and the second user — okay, it has instantiated that as well — will have TheApp. If you see, you have the calculator on this side and TheApp on the other side, right? Let's see what's the next step. The next step is that on the calculator app I should be able to enter two. Yes — it's just a little slow for some reason. And the next step is you press plus, okay? Yeah, that happens. Meanwhile, on the other side, the other user on the other app is trying to log in. Again, it is invalid credentials. The user tries to log in, you get an alert, and as expected it is an invalid login. And the last step, you see, is you select five — that is, you have to enter five now on the calculator app. So this is how the orchestration is happening. Though I do not have the actual delivery apps and so on — for obvious reasons, I do not have the developer-signed APKs — I'm just using what I have for the demonstration. But this works for pretty much any apps and any journeys that you want to run.
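The multi-app demo just described could be sketched in Gherkin roughly like this — again, step wording and app names are illustrative:

```gherkin
@multiuser-android
Scenario: Two users on two different Android apps in one test
  Given "I" am using the "calculator" app on "android"
  And "you" are using "theapp" on "android"
  When I enter "2" on the calculator
  And I press "plus" on the calculator
  And you login with invalid credentials
  Then you should see an invalid login alert
  When I enter "5" on the calculator
  Then I should see the result "7"
```

The steps interleave across the two apps exactly as they did in the demo: the framework runs them in the order written, switching between the two drivers.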
And this is how you can actually work between two different apps. So that is it. Again, we are doing the visual validation: we are collecting all the screenshots and sending them to our Applitools server, and I should now be able to see that the test is completed. Okay, for some reason it has failed — something on the Applitools side. Let me see if I missed something. I guess I was able to capture — yes, I was able to capture, but there is something unresolved. Yeah, I have some differences; if you look at this, it captured some visual differences as well. Okay, that's about the demo overall. That's how you can automate either multiple users, or multiple apps, or both put together in a single test, and you'll be able to automate the real-world user scenarios that involve multiple apps and multiple users in the same test case. This is, again, open source. I have given the links in the references. Feel free to go use it, and if you have any issues or questions, report them and ask us — we'll be happy to help you. Other than this, it has multiple other features as well. You can run it on CI; we have containerized everything, all the dependencies. You can integrate it with any of the major device farms that you might be using — pCloudy, Sauce Labs, BrowserStack, HeadSpin. Visual validation is done using Applitools; it is a simple two-step setup. And we have integrated it with ReportPortal as well, where you will be able to see real-time reporting. As of now, it supports orchestrating tests across web and mobile — iOS, Android, and Windows as well, meaning the Windows desktop app and not the Windows mobile app. So yeah, that's about it at a high level. It's open for questions. Yes, Shama, we have questions, so let me read them out for you.
The first question, from an anonymous attendee: how is the step definition mentioned for phone and web? Yes — so if you look at this example, for phone and web, you have to use multi-user. If it has to be both web and phone, then you have this particular tag where you say multiuser-android-web. That means a few of your users are on Android and a few are on web. Then you have to say who is on what platform in the given section: given "I", who is on Android; given "you", who is on web. That is how you define who is on what platform, and this is the tag — not an annotation, the tag — that you need to use, and you are set. Okay, Shama, it looks like I lost the questions since I rejoined; please go to the Q&A section and we can pick the questions from there. Yes, so the first question is answered. Then there is Harshil, who asks: can this be run for iOS, and also together with web and Android? Yes, you can. I just showed you a couple of use cases here — Android-Android and Android-web, with two users. You can have multiple users: one on Android, one on iOS, one on web. You can do that. The second question, again from Harshil: can it be run in parallel instead of sequential execution? Parallel execution — yes, you can. You'll have to make sure you have a supported infrastructure so that you can execute and distribute the test cases using Grid. Right now we have containerized it and we have a Docker Compose file; you can mention how you want to use it. The third question, again from Harshil: if we are running multiple test cases in this suite, will the app and web cases execute separately at different speeds, or will we wait for one test case to get over? No — this entire thing is one test case, right?
So if we take the example of Zoom: as a host, if I host a meeting, that is the first step that needs to happen. Only then — let us say the next step is the user getting an invite — will the user be able to see the meeting details and credentials and join. So first the host comes in and hosts the meeting, then the participants get the invitation, and participant one, two, three all join the meeting. That's how it is: it's in sequence, based on how the users need to collaborate as per the use case. Let us take the Amazon example as well, say a return. The delivery person comes in, validates that the product is intact, and then confirms on the app that the delivery person has; only then will the refund be initiated and you will get a notification. So in the first step, the delivery app needs to be launched, and they need to confirm that the order is intact and mark it as returned; only then will the next step happen. The first step will be on the delivery app, the second step on the customer's app. That's the sequence. I hope that was clear enough and that I got the question right. Then there is a question from Natesh Jain: how will the sync between actions from two users be maintained, because the other app's flow may depend on the completion of the first? Exactly, right. Only when this step is completed, with all the validations that are part of it, will you move on to the next step. If, let us say, the delivery app itself was not able to find the order you raised for refund, then you'll not be able to move forward. In that case, you will not go to the customer's app and proceed further. So yes, it has to be in sequence; it has to be confirmed on one app based on the user.
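The return-and-refund sequencing described above could be expressed as one ordered journey, along these (illustrative) lines:

```gherkin
@multiuser-android
Scenario: Return and refund journey across the delivery and customer apps
  Given "deliveryAgent" is using the "delivery" app on "android"
  And "customer" is using the "shopping" app on "android"
  When deliveryAgent validates that the returned product is intact
  And deliveryAgent confirms the return in the delivery app
  Then customer should see the refund initiated in the shopping app
  And customer should receive a refund notification
```

Because the steps run strictly in the order written, the customer-side checks can only execute after the delivery-side confirmation has completed and passed its validations.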
Again, it is based on your user journey, test case, and the validations you have for every action you're doing. How it is orchestrated internally as a framework — you can dig deeper into it, but it is managed by TestVis itself. There is a class called Drivers which actually manages the orchestration. The next question, from Rital: can we use a single emulator for both apps? A single emulator for both apps — the demonstration I have done involves two different apps communicating in real time. If it is not real time, then I guess you can do it that way as well, but that does not fall under multi-app; that use case is completely different, Rital, if I'm pronouncing that correctly. The next question is from Vaideshwaran: how are the page objects implemented in multi-driver? Okay, I'll quickly show that. This is the step definition. createDriverFor is the method which takes care of which persona and which platform. Let us say I am providing details for signup: this is the BL, the business layer, and this is the login method. Now, as part of this login method, I have a login screen. Based on what platform this is on, the appropriate driver gets instantiated, and enterLoginDetails is implemented for both Android and web separately. So this is where the segregation happens: here is your implementation for Android, and similarly, here is your implementation for web. Likewise, if you have iOS, you will do iOS; if you have Windows, you will do Windows. The implementation starts segregating once you reach the screens. I hope that was clear enough. What about reporting — if one step fails for a certain platform, will it fail the entire test case? There are two types of assertions: soft assertions and hard assertions. I don't want to get into too much detail on those aspects, but I'll quickly show.
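The soft- versus hard-assertion distinction can be sketched minimally in Java like this. This is an illustration only, not the framework's actual assertion API — soft failures are collected and reported at the end of the journey, while a hard failure on a breaking step aborts it immediately:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch: collect soft failures, abort the journey on hard failures.
// Class and method names are hypothetical.
class JourneyAsserter {
    private final List<String> softFailures = new ArrayList<>();

    // Soft assertion: record the failure and let the journey continue
    void assertSoftly(boolean condition, String message) {
        if (!condition) softFailures.add(message);
    }

    // Hard assertion: for a breaking step, e.g. the product to refund is missing
    void assertHard(boolean condition, String message) {
        if (!condition) throw new AssertionError(message);
    }

    // Reported once the journey reaches its last step
    List<String> failures() {
        return softFailures;
    }
}
```

A journey would call `assertHard` only where continuing makes no sense, and `assertSoftly` for the remaining validations, so one cosmetic failure does not hide the rest of the results.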
We are using soft assertions. If you have multiple assertions to be made as part of a flow, and a step is not a breaking step — for example, if the product itself is not showing up for refund, then I cannot move forward with that journey; it does not make sense, so I'll use a hard assertion there. Otherwise, I'll use soft assertions where I have multiple validations and it's okay if some fail: I can capture those failures and move on to the last step. Muktaar asks: why are the mobile test cases sometimes running slow? One reason is the virtual device. The second is that I have also enabled Applitools: it takes a few milliseconds, or even seconds, to capture those screenshots and send them to the Applitools server, so you can count in that small delay as well, but it's not very huge or dramatic. Rital asks: can we use two devices on the cloud? Yes, you can. The integrations for all the major cloud device farms are there. Let me quickly show: there is a run-in-CI property set to false — that's the reason it was running on my local machine. If I set it to true, it will connect to the cloud. Again, you have different property files here, where the relevant capabilities file gets picked — if it has to run on Sauce Labs or BrowserStack and so on, those capability files will be picked. That path is given as the capabilities, and accordingly it will start connecting to your device farm and running the test cases there, based on the capabilities. Everything depends on the capabilities and how you have defined them — which devices, and the device information. And there is one question: is Android Studio the platform for testing this? I'm using emulators — that's the reason I use Android Studio, to launch my emulators. But yeah, if you have emulators up and running, it should be fine. Is this library available to work with Java only?
Yes, as of now we have implemented and are supporting everything using Java only. Feel free to go ahead and extend it and implement it in any other language you're comfortable with. So, Shama, I have a couple of questions. How would you decide which scenarios to automate? For example, apart from whatever we saw now in the demo — say I have to interact with a toast, or with an alert, or something of that sort. How do you decide that a use case is worth automating, say the interaction with the toast or alert? Do you have any checklist that you refer to — only if a use case meets the checklist will you automate it, and otherwise you won't pick it? Do you have anything of that sort? Okay. First of all, when we think about automation, it's not only about UI automation. Automation has to happen across different layers of the application. As everyone is probably aware, this is the test pyramid, which is the best practice and the right way of implementing it. So I will definitely not cover all the combinations on the UI. When I'm running test cases on the UI, it is always user journeys. If you see, there is no single-component test case; there is always a user journey, and all these user journeys should concentrate mostly on integrations — whether we are talking to any third-party tools, or the journey has to go across multiple modules, and so on. The rest of the scenarios can be covered as part of the API workflow tests, and below that, component tests and unit and integration tests, of course. But I should not take all of these test cases to the UI.
In the demonstration per se, I had those validations and all of that, but that is not the only use case we have — that is just because of the lack of more apps, so that I could demo some meaningful user journey. A typical user journey for the first use case I picked, Zoom, would be: a host is able to host a meeting and invite a few people — it can go through their contacts, or by putting in email IDs. They should get the invitation, be able to use it, and attend the meeting. Once the meeting is launched, I can validate what the users are able to do: if I have certain control states, whether I can put everyone on mute or unmute them, share the screen and check that the other users are able to see my screen, whether they are able to hear me if I'm talking — the video quality part is a completely different matter, but this is a user journey. After all of this, I can check that I'm able to share certain files, if I'm sharing files. Then I can exit the meeting, and it should end for all of them. I can also add two or three steps validating whether I'm able to remove someone from the meeting, or give presentation permission to someone and see whether they are able to share the screen or not. These are pretty valid scenarios I want to cover in one journey, but I will not concentrate on validating a single component as part of my UI test. I'll concentrate on journeys; the rest is taken care of below that layer. That's how I approach it. So the follow-up question here is: when you have this thought process, how do you make the separation? These parts should be done at the API layer and these at the UI layer — what's your thought process when you separate those? Thanks, Steve. This is a little subjective in general. As I said, your user journeys matter.
How you're spanning across different components of your application, different user journeys, screens, or third-party integrations that you have. Again, this strategy, which is custom to that particular application, can be drawn up only if you have good context of the architecture, the implementation, and also the infrastructure — how it is communicating, whether it is distributed or not. In that case, do I really need to prioritize this particular test case, because the behavior might change? All of these questions should be answered; only then will you be able to come up with an appropriate strategy and appropriately distribute your test cases across all of these layers. So from my experience, what I see is that usually, when teams set out to automate, those who are doing automation for the first time think of everything in terms of automating at the UI layer. Most times they don't get to the point of doing certain things at the API layer and certain things at the UI layer, both together. So this is a mindset, and it ends up being very tough for those who are starting out for the first time. Having this knowledge — that we can automate at both the UI and API layers and combine those in the user journey — is a very good lesson that you have shared in this talk. And I see a question coming up — but it's not a question, it's a note for you: you've delivered a wonderful session on multi-app and multi-device automation, and attendees will definitely want to try it out with their applications. Thank you for sharing that. So we had a very good and very insightful talk for the engineers. Thank you, Shama, for that.