Good afternoon, everyone. No matter how much noise thunder makes, the real action begins when lightning strikes, so I'm glad to be part of this lightning talk. My name is Avinash Tiwari, and I'm one of the co-founders of pcloudy.com, one of the disruptors in the mobile app testing space.

Speed and scale: two keywords we are hearing quite often these days. And talking about speed, aren't enterprises playing a game of "need for speed"? I'd say so, primarily because of the dynamic nature of this industry. In this world, Appium at speed and scale matters more than ever before. But while Appium is used quite widely across enterprises, it also poses certain unique challenges. I'm going to highlight some of these challenges and how pCloudy is solving them as a platform.

The first challenge is scalability. What we have seen is enterprises taking baby steps towards parallel testing across 5, 10, or 20 devices. But is that enough? How do they take a leap from there and reach something like infinite scale? That's where a solution like pCloudy comes into play: it enables you to run your parallel tests across 10, 50, 100, even thousands of devices. It gives you access to a public cloud with a very large infrastructure, and we can also help you set up an on-premises lab to allow this kind of parallel testing. What you see right now is just a glimpse of it.

Once you achieve that scalability, the second challenge we see within enterprises is identifying the script issues or app failures that happen when you start running those large-scale automation runs. A lot of the time, enterprises struggle: they wait for the whole run to be over before they start analyzing it, which takes a lot of time. In the kind of DevOps environment we are looking at, that's not the right way to do it.
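One way to avoid waiting for the whole run is to fan the suite out across devices in parallel and report each result the moment it completes. A minimal sketch of that idea, assuming a stubbed `run_on_device` in place of a real Appium session (the device names and the pass/fail verdicts here are purely illustrative):

```python
# Sketch: run a suite across many devices in parallel and stream results as
# they complete, instead of waiting for the whole run to finish. The device
# list and run_on_device() body are illustrative stand-ins; in a real setup
# each worker would drive its own Appium session against a cloud device.
from concurrent.futures import ThreadPoolExecutor, as_completed

DEVICES = ["Pixel_7", "Galaxy_S23", "iPhone_14", "OnePlus_11"]  # illustrative

def run_on_device(device):
    """Pretend to run the test suite on one device and return a verdict."""
    # Real code would create an Appium driver with device-specific
    # capabilities here and execute the suite against it.
    return {"device": device, "passed": device != "OnePlus_11"}  # stubbed result

results = []
with ThreadPoolExecutor(max_workers=len(DEVICES)) as pool:
    futures = [pool.submit(run_on_device, d) for d in DEVICES]
    for future in as_completed(futures):  # results arrive as soon as ready
        result = future.result()
        results.append(result)
        print(f"{result['device']}: {'PASS' if result['passed'] else 'FAIL'}")

failed = [r["device"] for r in results if not r["passed"]]
print("failed devices:", failed)
```

The `as_completed` loop is what makes the reporting "progressive": a failing device surfaces as soon as its run finishes, not after the slowest device in the pool.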
So what we have seen is enterprises moving towards smart, live reporting of results, which allows quick detection of issues. We call it progressive reports. Progressive reports are simply real-time streaming of results at various levels. It starts with high-level dashboard data, and from there you can drill down into specific results, which are bucketed into different categories. As the runs are happening, you can see which failures are occurring, which modules are failing, and which test cases are failing. Rather than looking at the whole set of data, you can drill down to a specific test case and a specific step, and that step gives you information about the exact moment it happened. You can then look at various data points, from request and response to logs, videos, and screenshots, in a very simple way. This intelligent reporting allows you to detect issues very quickly, which enables you to achieve the scale you're looking for in your automation runs. That's the second challenge we're trying to solve.

The third one is simplification of script creation, an ever-growing endeavor for any enterprise and any team writing automation scripts. What we are trying to do is use the power of futuristic technologies like AI to simplify scripting. One way we do this is with a simple AI engine we have developed, which builds an object model completely on its own in very simple steps. Basically, it collects all the objects from your application screen and asks you for feedback, which means you're training the AI engine on what each object is. For example: is this a feedback icon? Is this a contact icon? It also has its own data set through which it identifies objects.
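Once that training exists, scripting against the trained object model can be imagined roughly as follows. This is only a sketch: `PCloudyBy`, its method names, and the label lookup are hypothetical stand-ins for the plugin described in the talk, and `StubDriver` replaces a real Appium driver so the blending idea is concrete and self-contained.

```python
# Sketch of blending a trained, keyword-style AI locator with standard
# Appium-style locators. PCloudyBy and the trained-label lookup are
# hypothetical stand-ins; StubDriver replaces a real Appium driver.

class _Element:
    def __init__(self, driver, name):
        self.driver, self.name = driver, name
    def click(self):
        self.driver.clicked.append(self.name)

class StubDriver:
    """Minimal stand-in for an Appium driver."""
    def __init__(self):
        self.clicked = []
    def find_element_by_id(self, element_id):  # standard-locator path
        return _Element(self, element_id)
    def find_element_by_label(self, label):    # pretend AI-resolved lookup
        return _Element(self, label)

class PCloudyBy:
    """Hypothetical keyword wrapper: maps a trained label to an element."""
    def __init__(self, driver):
        self.driver = driver
    def click(self, trained_label):
        # A real engine would resolve the label via the trained object model.
        self.driver.find_element_by_label(trained_label).click()

driver = StubDriver()
ai = PCloudyBy(driver)
driver.find_element_by_id("com.shop:id/search").click()  # plain-Appium style
ai.click("cart")          # keyword style: no ID or XPath needed
ai.click("back button")
print(driver.clicked)     # ['com.shop:id/search', 'cart', 'back button']
```

The point of the sketch is the blend: the first call uses an ordinary ID locator, while the next two resolve purely by trained label, and both styles drive the same session.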
But if you think our data set has something wrong, you can provide feedback so that the object is identified in a different way. Once you complete that training, which is a one-time process for a particular application, you are ready to write scripts in a completely different way. And you can very easily mix and match with your existing Appium scripts.

What you see here is a sample script. You'll all recognize it as a typical Appium script, where you start using the drivers we provide, which act like a plugin on top of standard Appium. Once you initialize the drivers, you can write the script with simple keywords, without worrying about an object's ID or XPath. You just write something like "pcloudy by: click cart" or "pcloudy by: click back button". You are absolutely not worried about things like IDs or XPaths; you're writing the script in very simple, English-like language.

And the beauty of it is that you can mix and match this with your existing Appium way of working. Objects that work well with plain Appium, you can keep as they are; objects that are difficult to identify with Appium, you can handle with AI. This way you can blend AI seamlessly into your Appium scripts and make yourself future-ready.

So these are the three challenges we are focusing on right now to achieve Appium at speed and scale. If you have further queries, we are at booth number three. Feel free to drop in. And just a reminder: there's a contest going on, so feel free to participate in that as well. Thank you so much.

Today I want to talk to you about how we can take our test automation to the next level. So many times we find ourselves coding the same test automation building blocks over and over again to deal with the same problem. Wouldn't it be really awesome if we had a ready-to-use solution with zero effort needed?
Kind of like GitHub, but for test automation building blocks. This is exactly where TestProject comes into the picture. TestProject is a free, community-powered platform developed to enable you to record, develop, and analyze your test automation. We are built on top of Selenium and Appium, and we enable you to automate web, Android, and iOS applications, and even iOS from Windows, which is really cool. Most importantly, we enable you to extend your test automation scenarios by using add-ons, which are the building blocks of your test automation.

So let's hop into our live demo and see how it works. Here you can see a mirror of my actual mobile device, which is connected to my machine, and you can see we can interact with all of the elements on the screen. What I want to do today is automate the YouTube application. So I'll open YouTube and search for the Game of Thrones trailer, which I'm sure everyone here in the crowd is familiar with, and select one of the results. Now what I would like to do is validate that the number of views here is actually greater than one million.

You can see that this views element contains both numeric and non-numeric characters, so it poses a challenge for our validation step, right? Normally, what we would probably do here is add a single line of code with a regular expression, and that would work. But in our case, what I simply did is use an add-on. An add-on is a collection of coded actions which simplifies our automation development and enables us to extend our automation scenarios. Add-ons are shared by the entire community; they are a way of sharing your building blocks, and you can add them within your test steps here as building blocks.
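For reference, the "single line of code with a regular expression" alternative would look something like this. The element text shown is illustrative, and the sketch assumes the element carries the full count (e.g. "12,345,678 views"); abbreviated forms like "1.2M views" would need extra handling.

```python
# Sketch of the regex alternative: strip every non-numeric character from
# the views text, then compare against the one-million threshold.
import re

views_text = "12,345,678 views"             # illustrative element text
views = int(re.sub(r"\D", "", views_text))  # drop non-digits -> 12345678

assert views > 1_000_000                    # the validation step
print(views)  # 12345678
```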
And in this case, what I did is use an add-on that removes all of the non-numeric characters from the views element, and this way we can easily add a validation that the views number is indeed greater than one million.

So now we can execute this test and see how it works, and you can also see it on my device here: it launches the YouTube application, searches for the Game of Thrones trailer, selects one of the videos I searched for, and validates that the view count is greater than one million. Once it finishes, we can see that the steps all passed, indicated by these green bars. We can also analyze this even further by looking at the reports dashboard, where we can see step by step what happened and how long it took, see failures if any occurred, and even go further to look at screenshots of each step.

In addition to all of this, for all the coding ninjas out here in the crowd, and I'm sure there are a lot of you: you can also use TestProject's powerful SDK to develop both coded tests and add-ons of your own and share them with the community. You can also export your recorded tests into code, so you can extend your tests even further, and there really are no limitations, so go ahead and try it out. I guess the last thing I wanted to say is that I really invite you all to come to our booth and learn more about how you can contribute to the community, be a part of sharing and developing add-ons, and make this a greater place for all of us to enjoy. Thank you.