Welcome everyone to the session. We have Ayanne Khan with us, who is going to be talking about building an Appium driver to automate custom-built hardware. Enjoy the session, and over to you, Ayanne.

Hey everybody. Thank you to everyone who's watching this, and thank you to everyone who's going to be watching this. I hope you have a good time, and I hope you're all doing well, especially with everything going on in today's world. Today I want to talk about building an Appium 2.0 driver to automate custom-built hardware, so let's get right into it.

A quick bit about me: I'm a third-year computer engineering student at the University of Waterloo. I actually just started my second semester of third year last week, so I'm excited and also a bit stressed. Over the past summer I worked at HeadSpin as a software engineering intern, and during this internship I got to work with Jonathan Lipps, the project lead for Appium. As part of the internship I worked primarily on Appium-related tasks, and I also made some contributions to the Appium open source project.

Now, a little about this talk. What I really want to do is present a real-life scenario that my team and I dealt with, and how we developed a custom Appium 2.0 driver to meet a project requirement. The goal of this talk is to present a use case of Appium 2.0, an example that highlights and showcases what Appium 2.0 is really capable of, and maybe to give you some inspiration for how you can use it in some of your own personal projects or tasks. To lay a bit of background for why we ended up using Appium 2.0 for this project, I want to give a quick intro to Appium 2.0 from my perspective.
From my perspective, the goal of Appium 2.0 is to transition from being a server that bundles many drivers to one that, by default, doesn't bundle any drivers at all. The idea is that, moving forward, Appium is trying to be an interface for retrieving and using drivers. The other side of that is that it wants to make it easier for the community to create custom drivers for different pieces of hardware, and really grow the list of devices that can be automated using Appium and Selenium syntax. The byproduct of this goal is that it becomes really easy and simple to write new custom drivers and plugins, and because of that it becomes more feasible to add automation capabilities to different types of devices, such as a PlayStation, a smart TV remote, an Apple Watch, and so on, or even something you might build yourself and want to provide automation capabilities on top of.

Now I want to talk about the project a little. During my time at HeadSpin, a project came up that revolved around adding automation capabilities to a smart TV remote. The automation capabilities included the ability to remotely trigger physical button presses on the remote, the ability to perform complex physical button-press sequences on the remote, and the ability to simultaneously press two physical buttons on the remote. The idea behind adding these capabilities is that, usually, to test a smart TV remote, the QA team would have to manually press buttons and manually test different combinations and sequences themselves. With this project, the idea was that you could set up hardware on top of the remote so that many QA testers could test it remotely, while all the actions were actually happening physically.
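To make one of those requirements, the simultaneous two-button press, a bit more concrete: the W3C actions format that Appium and Selenium clients already speak can express parallel input by giving each "finger" its own input source, where actions at the same index happen in the same tick. The sketch below is only an illustration of that idea; the helper names, source IDs, and button names ("home", "back") are hypothetical, not the actual driver's API.

```python
# Sketch of a W3C-actions-style payload for pressing two physical buttons
# at once. All names here are hypothetical, not the real driver's API.

def key_source(source_id, key, hold_ms):
    """One input source that presses `key`, holds it, and releases it."""
    return {
        "type": "key",
        "id": source_id,
        "actions": [
            {"type": "keyDown", "value": key},
            {"type": "pause", "duration": hold_ms},
            {"type": "keyUp", "value": key},
        ],
    }

def simultaneous_press(key_a, key_b, hold_ms=200):
    """Two input sources whose actions line up tick-by-tick, so both
    buttons go down, hold, and come up together."""
    return {
        "actions": [
            key_source("button-1", key_a, hold_ms),
            key_source("button-2", key_b, hold_ms),
        ]
    }

payload = simultaneous_press("home", "back")
```

Because the payload is plain W3C actions, any existing Selenium or Appium client library can produce it without knowing anything about the hardware underneath.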
So it wasn't purely software; there was actually something pressing the buttons and triggering responses, so that you could get live feedback of what happens when you try different things on the remote.

So, for the project, why did we choose Appium 2.0? The reason we chose to develop an Appium 2.0 driver to meet those automation capability requirements was that we realized the simplest way to add automation capabilities to any piece of hardware was to create an Appium 2.0 driver, because we could then leverage Appium's pre-built automation capabilities rather than building everything from the ground up. Also, with the Appium 2.0 driver sitting in the middle between the hardware control system and the client side, it became easier to write client-side automation code, since that code only had to interact with the Appium 2.0 driver and not with the detailed control-system code that was implemented underneath.

Now I want to give a quick demo of the project and what we were able to build. This is the HeadSpin platform for testing. On the right side of the screen is a graphical user interface; with it, you can remotely control the remote, press buttons on it, and perform sequences. On the left side of the screen is a camera pointing at a TV, and the TV is there to gauge the output of the input that you drive through the user interface on the right. I'm going to play the video a bit. Initially, when you open up the user interface, you just have some setup info for connecting to a remote, and that's what I'm going to do right now. Once you enter the information to connect to a specific remote or device, it opens up the remote layout.
Now, this is the remote layout, and it's showing all the physical keys on the remote. There are two different modes; currently we're in simple mode. In simple mode, any time you press a key, it sends a command to the hardware to press for a default duration, and the hardware then presses that button on the remote for that default duration. I'm just going to run through some sequences: here I'm pressing the down button, then the right button, and as you can see, the screen on the left is changing. While in simple mode, you can use the remote as you would physically, and it acts just like it would physically.

The other mode is advanced mode, where you can queue up a sequence of events you want to happen all at once. So instead of having to manually navigate your way through the remote, you can set up a sequence of actions you want to happen and then perform them. Here I'm creating a sequence. It involves pressing the YouTube button for 400 milliseconds, then a pause for three seconds, because once you press the YouTube button it takes some time for the app to load. Then the down button, the down button again, and the right button. Oh, sorry about that. So the sequence I've created is: press the YouTube button for 400 milliseconds, pause, press the down button, pause for 300 milliseconds, press down again, pause for 300 milliseconds, press the right button, pause for 300 milliseconds, and then press the select button.
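The advanced-mode sequence I just walked through can be described quite naturally as data in a client script. The helpers and field names below are hypothetical (they are not the actual driver's API), and the durations are just the illustrative values from the demo:

```python
# Hypothetical helpers for building a button-press sequence like the one
# queued up in advanced mode. Not the real driver API; a sketch only.

def press(button, duration_ms=200):
    return {"type": "press", "button": button, "duration_ms": duration_ms}

def pause(duration_ms):
    return {"type": "pause", "duration_ms": duration_ms}

# YouTube for 400 ms, wait for the app to load, then navigate down twice,
# right once, and select, pausing between steps so no command gets lost.
sequence = [
    press("youtube", 400),
    pause(3000),
    press("down"), pause(300),
    press("down"), pause(300),
    press("right"), pause(300),
    press("select"),
]
```

A structure like this is easy to hand to a custom driver command, replay later, or generate programmatically in a QA script.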
The reason for those pauses, and I'm pretty sure everybody is accustomed to this from mobile automation, is usually to wait for a response to happen before you trigger the next command, so that your command doesn't get lost. Now I'm going to perform this. You've got five minutes to go. As you can see, this is something that you can pretty much do using client scripts too.

Now I'll give a quick overview of the development stack for how we made this happen. At the very bottom level is the physical hardware. On top of a physical remote there's a piece of hardware that's kind of like a button-press robot: you send it commands with a duration, and it physically presses that button on the remote for that duration. The hardware exposes a Python library, which is used to interact with the hardware from Python and send commands to it. On top of the Python library there's a web service written with Python's Tornado framework; it maps the functionality of the hardware to HTTP routes and is responsible for performing commands on the hardware based on requests to those routes. On top of the web service is the Appium 2.0 driver. Besides the pre-packaged Appium capabilities that come with an Appium 2.0 driver, there's a custom button-press command, which leverages the Selenium actions JSON protocol to parse a set of actions coming from the client side into ticks, which are then sent to the web service to be performed almost concurrently. And then on top of the driver is where you have your user interfaces, or your Appium scripts and QA scripts, which you can perform actions with. So, yeah, that's what I wanted to showcase and demo today.
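The layering described above can be sketched in miniature. The real middle layer is a Tornado web service, and the real bottom layer is the vendor's Python library; here both are modeled as plain Python classes with made-up names, purely so the control flow from a route-style request down to a hardware press is easy to follow:

```python
# A toy model of the stack: hardware library at the bottom, a service
# layer mapping route-style requests onto it. All names are hypothetical;
# the real service uses Tornado request handlers instead of handle().

class ButtonPressHardware:
    """Stand-in for the Python library exposed by the button-press robot."""
    def __init__(self):
        self.log = []  # record presses instead of driving an actuator

    def press(self, button, duration_ms):
        # The real library would hold the actuator down for duration_ms.
        self.log.append((button, duration_ms))

class RemoteControlService:
    """Maps HTTP-style commands onto hardware calls, like the Tornado routes."""
    def __init__(self, hardware):
        self.hardware = hardware

    def handle(self, request):
        # e.g. {"route": "/press", "button": "down", "duration_ms": 400}
        if request.get("route") == "/press":
            self.hardware.press(request["button"], request.get("duration_ms", 200))
            return {"status": "ok"}
        return {"error": "unknown route"}

hw = ButtonPressHardware()
service = RemoteControlService(hw)
print(service.handle({"route": "/press", "button": "down", "duration_ms": 400}))
```

The Appium 2.0 driver then sits one level above this service, translating W3C action ticks into requests against those routes.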
I'm almost running out of time, with a few more minutes left. We've got time for some questions. Yeah, let's do some questions.

The first one we've got here is: what are some possibilities that may be considered for automating anything on a television? I guess that kind of ties into the remote: what you want to see is how the television reacts to the different combinations or button-press sequences that you make on the remote. Okay, cool. It's a very open question, but if the person who asked wants to add some more detail below, we can come back to it. Do you want to finish your screen share, and then we can see you full screen?

The next question was: how is the hardware clicking the physical buttons? Are we using some robot? Yes, we're using a physical button-presser type of robot. You can think of it like a pencil connected to each button: when you send it a command, the pencil, or pointer, goes down and physically presses the button, holding it down for a certain duration.

Right. Where can we download these drivers? Currently these drivers are closed source. But I don't doubt that there are other similar types of drivers for different remotes; I know Jonathan was trying to do something with Roku and open source projects related to button-press automation on remotes. So maybe in the future there will be more drivers similar to this one, but this one specifically is closed source.

Why didn't you guys use an infrared transmitter to emulate the remote? It would have been easier.
I think it is, in terms of programming the signals over infrared rather than making physical button pushes. But it's because we're trying to simulate as much of the real-world experience, the real button press itself, as possible, because you're actually testing the remote itself. When people in QA test the remote, they have to manually press buttons to make sure it's working properly and responding properly to certain button-press lengths. It's basically for testing whether the remote itself is working properly, and the remote works based off of actual physical presses.

One last question, then we'll have to go to the booth hangouts: which exact library are you using for the web services? Python's Tornado. Well, that was a quick one. We'll call it quits there. Thank you so much for today's demo; it's been an interesting insight into using Appium with some hardware. Yeah, thanks everybody for coming. Take care.