So my presentation will be about how we took Appium to 11 platforms. Just quickly about me: my name is Simon Granger, I'm from Canada; I'm actually French Canadian. I'm in test automation at You.i TV, where I've been for almost four years now. Before that I was at BlackBerry, where I was a team lead also doing automation, but for board-level testing in manufacturing, so it was a bit of a shift in my career moving to testing these kinds of apps.

You.i TV is based out of Ottawa, Canada. This is a picture I took last week, just before coming. Actually, no, it's from the winter. This is the Rideau Canal, which we convert into the longest skating rink in the world; it's about 20 kilometers long. A funny story about this: I was telling it to someone in London when I was meeting some customers, and his question to me was, "But how do you freeze the water?" So in case you're wondering the same thing: below zero degrees, it freezes on its own.

Just a quick table of contents. First I'll do a quick introduction of You.i Engine One, just to give you some context. Then I'll go through our journey with Appium, from zero support to having 11 platforms supported, and all the pitfalls and issues we had to go through. And then I'll finish with some demos.

At You.i TV, what we looked at was simplifying app development across multiple platforms. Typically you would develop for one platform, then the next, then the next, all on different SDKs and in different languages. It can become quite the nightmare to manage, and it becomes very expensive as well. We solved it in a way similar to what the game industry did: we created our own C++ engine. C++ being a portable language, we were able to port it to multiple platforms. So our SDK allows you to develop apps for multiple platforms with a single code base.
We develop once and then, with the push of a button, we deploy the packages for all the platforms. So although you can see different layouts depending on the platform, it's all the same code running beneath. Here's a quick list of the different platforms we support, and these are some of our customers. We're focused on the TV market, so it's pretty much TV streaming apps that we focus on.

Our journey with Appium pretty much started this way: when meeting with potential customers, a lot of them ended up asking us if we supported a test automation framework, and many times they specified Appium itself. For them it seemed like a mandatory checkbox for being able to use our solution, and without Appium support it was much more difficult to convince these customers to adopt our SDK.

What we did to start was try to run Appium against our solution, and what we saw was that the app appeared as a black box: the source tree was pretty much empty. If you're wondering why, it's because we don't use the native SDK. Every element that appears on screen is built by our engine, so the native automation layer cannot talk to our app and can't detect the source tree.

From there we decided to investigate Appium a little more, and what we found was that it's open source, it's scalable, and it's based on the WebDriver protocol. Just to break it down a little: with Appium, you take these different automation layers and abstract them behind a single interface, the WebDriver protocol. That's done with the Appium server. Each platform has its own driver that receives the WebDriver protocol command and translates it to the appropriate command for that automation layer. And then to write your test scripts you have these client libraries, which is pretty cool; with Appium you can choose from multiple.
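To make that abstraction concrete, here is a minimal sketch of what a driver's translation step does: a WebDriver command comes in, and a message for the platform's automation layer goes out. All the command and message names here are illustrative assumptions, not the real You.i Engine driver's internals.

```javascript
// Sketch of a driver's translation step: WebDriver command in,
// engine-socket message out. Names are illustrative assumptions,
// not the real driver's wire format.
const handlers = {
  findElement: p => ({ cmd: 'FindElement', using: p.using, value: p.value }),
  click: p => ({ cmd: 'Click', elementId: p.elementId }),
  getPageSource: () => ({ cmd: 'GetSourceTree' }),
};

function translate(name, params = {}) {
  const handler = handlers[name];
  if (!handler) throw new Error(`unsupported WebDriver command: ${name}`);
  return handler(params);
}
```

Because every platform driver presents this same WebDriver interface to the server, the client libraries never need to know which automation layer sits underneath.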
I listed three here, but there are a lot more, and you can develop your test scripts in the language of your choice.

Knowing this, we decided to do a proof of concept to see whether we could actually make it work. The two areas we had to focus on were the driver side, where we created our own driver in the Appium server, and on iOS, where we would create our own automation layer from scratch.

To create our automation layer, the first thing we had to do was create a socket server to be able to receive and send all the commands and statuses. Then we added message handling for all these commands, and finally we had to add the different commands themselves and create all the hooks inside our engine. Some of the commands we wanted to start with: obviously get source tree, for which we had to translate our scene tree from the engine into an XML source tree, and then commands like find element, click, and screen capture.

On the driver side, we looked at the existing drivers, took basically the iOS driver, stripped it down, and started adding our stuff to it. What we had to add here was the socket connection to our server when the session is established, again the message handling, and then the same commands on the driver side.

So this would be the path of a normal command: it comes from the client library in the test script, then to the Appium server, and then our capabilities define that it is You.i Engine, so the server knows it needs to talk to the You.i Engine driver, and from there it goes to our automation layer. On the way back it follows the same path. We did find that some of the original commands from the iOS driver would work with our solution, and those were more the system-level commands: installing the app, launching it, setting the network connection, key inputs, things like that.
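The scene-tree-to-XML translation mentioned above could be sketched like this. The node shape (`type`, `attributes`, `children`) is an assumed, simplified stand-in for the engine's internal representation, which isn't public.

```javascript
// Convert an engine scene-tree node into an XML source-tree string.
// The node shape ({ type, attributes, children }) is an assumption
// for illustration, not the engine's actual data structure.
function sceneNodeToXml(node, indent = '') {
  const attrs = Object.entries(node.attributes || {})
    .map(([key, value]) => ` ${key}="${value}"`)
    .join('');
  const children = node.children || [];
  if (children.length === 0) {
    return `${indent}<${node.type}${attrs}/>`;
  }
  const inner = children
    .map(child => sceneNodeToXml(child, indent + '  '))
    .join('\n');
  return `${indent}<${node.type}${attrs}>\n${inner}\n${indent}</${node.type}>`;
}
```

Once the automation layer can answer get source tree with XML like this, tools such as Appium Desktop can render the app's elements, which is what makes the hover-and-inspect demo later on possible.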
What we decided to do was leverage those, so we built this hybrid solution where, when we launch a session with our driver, we actually launch a second session with the iOS driver. Inside our driver we keep a list of proxy commands, and once we receive a command, if it's in that list, we redirect it to the iOS driver, which then talks to XCUITest. On the way back it follows, again, the same path.

Our proof of concept was a success: we were able to install and launch the app with the proxy commands, interrogate the app and find different elements, make simple interactions (we were able to click), and finally get a screenshot.

With this done, we decided to build a minimum viable product. So from that little Lego car we're going to this basic car, but it can still get you somewhere. For this, we looked at the WebDriver protocol, the whole list of commands, and created the subset we thought was the most valuable to start with. For this MVP we also wanted to support both Android and iOS.

Creating this commercial solution meant solving a lot of issues we had ignored in the POC, and this is where the Appium community was very useful. We initially posted a question on a forum, and Jonah, who's actually presenting beside us, ended up giving us some direction and added us to the Slack workspace. From there we were able to ask a lot of questions and get help; they are a very active community, and it was instrumental to our success.

We eventually got our solution ready in a matter of weeks, and everything worked well with our simple test app. When we tested it with a more complex production app, though, it kind of started looking like this. It didn't go as planned: we noticed there were a lot of missed clicks happening, and we had a hard time finding locators for the elements we wanted to interact with.
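The hybrid routing described above boils down to a lookup: commands in the proxy list go to the stock iOS driver session, and everything else goes to the engine's socket server. A minimal sketch, with an illustrative (not exhaustive or verified) command list:

```javascript
// Hybrid routing: system-level commands are proxied to the stock iOS
// driver session; everything else goes to the engine's socket server.
// The command names in the list are illustrative assumptions.
const PROXY_COMMANDS = new Set([
  'installApp', 'removeApp', 'launchApp', 'closeApp',
  'getNetworkConnection', 'setNetworkConnection',
]);

function routeCommand(name) {
  return PROXY_COMMANDS.has(name) ? 'ios-driver' : 'engine-socket';
}
```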
During this phase I was basically hopping between my test scripts, our driver, and our automation layer, trying to solve all these problems. There were a few of them, but I'll just enumerate a couple here.

One of them was missed clicks due to coordinates. Our original definition of click was to click in the center of the button. But in a scenario like this, where the button is partially on screen, you should still be able to click on it, yet we would send coordinates that were off screen and miss the click. We solved this by adding four extra targets: before clicking, we check whether one of those targets can be clicked, and that becomes the set of coordinates we end up using.

Another issue was the difficulty of finding a unique locator. In a lot of our apps we found developers would use the same name for class name and ID, and if we wanted one specific poster it was close to impossible unless we used find elements and then an index, which we didn't want to do. Because of the way we create our source tree it was difficult to use XPath, but we did take inspiration from XPath and added an attribute filter to all of our search strategies. With this, once you've written your selector, you can append an attribute name and attribute value between brackets at the end, and that lets you pinpoint a much more specific element.

We eventually got all our issues resolved. Now our vehicle went from that Lego to this dune buggy; it looks a little more refined. We ended up getting our driver published in Appium, which at one point we didn't know would be accepted, because we are kind of a closed system, but we were really happy that we got in.
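The missed-click fix can be sketched as follows: compute the center plus four edge-adjacent candidate points, and pick the first one that actually lies on screen. The exact placement of the extra targets is an assumption; the talk only says there are four of them in addition to the center.

```javascript
// Given an element's bounding rect and the screen size, return the
// first of five candidate points (center first, then points nudged
// just inside each edge) that lies on screen, or null if none does.
// Candidate placement is an assumption for illustration.
function clickablePoint(rect, screen) {
  const cx = rect.x + rect.width / 2;
  const cy = rect.y + rect.height / 2;
  const candidates = [
    { x: cx, y: cy },                        // center
    { x: cx, y: rect.y + 1 },                // near top edge
    { x: cx, y: rect.y + rect.height - 1 },  // near bottom edge
    { x: rect.x + 1, y: cy },                // near left edge
    { x: rect.x + rect.width - 1, y: cy },   // near right edge
  ];
  const onScreen = p =>
    p.x >= 0 && p.y >= 0 && p.x < screen.width && p.y < screen.height;
  return candidates.find(onScreen) || null;
}
```

For a button hanging off the bottom of the screen, the center candidate fails and the top-edge candidate wins, so the click still lands on the visible part of the element.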
We continued making improvements: we added new commands, we added support for having the app stopped, and then these guys came back. Our customers were asking, "Well, it's really nice that you support iOS and Android, but you're advertising that you support these 11 platforms; can I test on those?" It would be difficult without that.

To solve this, let's zoom into our automation layer. For each platform our engine supported, we had to add a socket server: for iOS we had a socket server in Objective-C, for Android we added one in Java. If we wanted to add all the other platforms this way, it would be quite cumbersome and complex, and you'd have to juggle all these different languages and SDKs. The good news is that, since we had started working on Appium, someone inside our company had created a TCP socket implementation that uses BSD TCP sockets and abstracts away the system-level implementation. With this we were able to build the socket server on that single implementation, giving us a universal socket server. And that ended up opening Appium to every platform our engine runs on.

So now our vehicle migrated to this all-terrain vehicle. The first thing we did was say, okay, let's try running our test scripts as-is on all the platforms. To our surprise, they ran and passed on the first run, even on TV apps, which don't support touch. The reason is that it's all the same source code, so they all inherit the touch and click implementation. The problem is, or it's not exactly a problem, that's not the way customers want to use it in the real world.

And one thing was missing: while we were able to run on all the platforms, we did have to launch the app ourselves. To have a fully automated solution, we have to add all these installers inside our driver so it can run in a Jenkins pipeline and so on.
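A universal socket server needs a wire format every platform can speak. One common choice, shown here purely as an illustration (the actual You.i protocol is not public), is length-prefixed JSON frames:

```javascript
// Illustrative wire format for a universal socket server: a 4-byte
// big-endian length prefix followed by UTF-8 JSON. This framing is
// an assumption, not the real You.i Engine protocol.
function encodeFrame(message) {
  const body = Buffer.from(JSON.stringify(message), 'utf8');
  const frame = Buffer.alloc(4 + body.length);
  frame.writeUInt32BE(body.length, 0); // length prefix
  body.copy(frame, 4);                 // JSON payload
  return frame;
}

function decodeFrame(frame) {
  const length = frame.readUInt32BE(0);
  return JSON.parse(frame.subarray(4, 4 + length).toString('utf8'));
}
```

The point of the single C++ socket implementation in the engine is exactly this: once the framing and handling live in portable engine code, every new platform the engine is ported to gets the automation server for free.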
But if there is an implementation of a driver in Appium, like there is one for Tizen that is in beta right now, we could piggyback on it the same way we did for iOS and Android, create this hybrid solution, and reuse that driver to launch the app, so we don't have to add the installer inside our driver.

So now we have this single automation layer that works on 11 platforms, and because it's the same source tree on every platform, we can use the same selectors. That means, as we saw previously, we can use the same test scripts without any modification from one platform to another.

However, for TV apps we want to change the way we navigate the app, to mimic how a user really uses it. In a TV app, instead of touching and clicking, it's focus-based: you press keys on your remote control or controller to navigate. So we started looking for a way to send these keys, and we had kind of a hard time. Android supports the press key code command, but we couldn't find an equivalent on iOS, and definitely not on PlayStation or the other platforms.

What we decided to do was use the send keys command. Send keys is actually meant to send text into a text input field, so it sends keys to a specific element. It also supports keystrokes from your keyboard, so you can send arrow keys and enter keys, which are navigation keys, but they're not handled as a system-level input; they're handled inside the element. Since we own the automation layer, we created a special case: if you use send keys on the parent element of your scene tree, we detect it in our automation layer and send it out as a system-level keystroke. Once we did that, we were able to navigate with the navigation keys on every platform our solution supports.

The next thing that came up was React Native support.
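That send-keys special case can be sketched like this. The Unicode code points are the real W3C WebDriver values for the navigation keys; the routing logic and return shape are illustrative assumptions.

```javascript
// W3C WebDriver code points for the navigation keys (values from the
// WebDriver spec's normalized key table).
const WD_KEYS = {
  '\uE007': 'Enter',
  '\uE012': 'ArrowLeft',
  '\uE013': 'ArrowUp',
  '\uE014': 'ArrowRight',
  '\uE015': 'ArrowDown',
};

// Special case from the talk, sketched: send keys aimed at the scene
// tree's root element become system-level keystrokes; send keys aimed
// at any other element stay ordinary text input. Return shape is an
// assumption for illustration.
function handleSendKeys(targetElementId, rootElementId, keys) {
  if (targetElementId === rootElementId) {
    return { action: 'systemKeys', keys: [...keys].map(k => WD_KEYS[k] || k) };
  }
  return { action: 'typeText', text: keys };
}
```

This is why, in the demo later, sending an arrow key to a poster does nothing while sending the same key to the root element moves the focus.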
Inside our company we decided to add an extra layer on top of our engine to give developers two options for writing apps: C++, the original one, and React Native, which is more common among many of the customers we work with. We weren't sure if that would affect our automation layer, but the good news is that since the automation layer is in the engine and React Native sits on top, it worked from day one.

We actually ended up making use of some of React's attributes. React Native has this thing called testID that you use when writing UI tests: you add it to an element you want to interact with, to find it more easily. What we did is publish this testID inside our source tree and add a search strategy to find it; I'll show you how it works.

Another cool thing with Appium: since we moved from C++ (well, we're still C++, but we added React Native, which uses JavaScript), inside our company we now focus on those two languages. So with Appium we were able to move from the Appium Ruby library to WebdriverIO, keeping us focused on the languages we want to work with.

So now we have this really good solution, but we have all this work to do to get to the next step. One of the things to work on is updating our driver to the W3C WebDriver protocol. I think it was announced over a year ago now, but we haven't done it, and right now we're starting to get all these deprecation warnings, so that's something I'm going to jump on very soon. The next thing is adding the install scripts for the remaining platforms; we've added some of them in our driver already, for some of the 11 platforms, but not all of them yet. We're going slowly because we want to run a full regression on each platform once we add it completely. Next was WebView support.
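A testID-aware ID search could look roughly like this sketch: search the tree for a testID match first, and only fall back to the plain id when none is found (the priority rule Simon describes at the end of the talk). The node shape is an assumption.

```javascript
// Walk the scene tree collecting nodes matching a predicate.
// Node shape ({ attributes, children }) is an assumed simplification.
function collect(node, predicate, out = []) {
  if (predicate(node)) out.push(node);
  for (const child of node.children || []) collect(child, predicate, out);
  return out;
}

// "id" search strategy with React Native testID priority: a testID
// match overrides plain id matches; id is only consulted when no
// testID matches anywhere in the tree.
function findById(root, value) {
  const byTestId = collect(root, n => (n.attributes || {}).testID === value);
  if (byTestId.length > 0) return byTestId;
  return collect(root, n => (n.attributes || {}).id === value);
}
```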
We currently don't support WebView, but our customers are using it, so we definitely want to investigate and add that support. And finally, adding the missing Appium commands: we started with a small subset and have been expanding it over time, but we don't support all of them yet, and we eventually want full support.

Just a quick recap. At You.i TV we built this cross-platform engine that runs on multiple platforms, and we built this automation layer that follows the engine, plus a driver. Every time we add a new platform (we actually added webOS recently), once we added it, I just went and tried one of our test scripts on it, and it ran the first time. So basically we have almost no extra work to do on the Appium side when we add a platform: it's all built into the engine, and it gets ported to every platform we have.

A cool thing: I actually worked all night trying to get this to work, but our engine can also be used to build games. I wanted to get a Hack Day project someone did a few years ago to build, so I could show a game being run with Appium, but it was using an older version of the engine and I had all these build issues with it. Anyway, the good thing is we can use the same test scripts across all the platforms; it makes test development really easy for our customers. And right now we support 11 platforms.

Before we go to the demo, here are a few resources. Our driver is open source; it's on GitHub, go have a look. The second link is our company website, so if you're curious about You.i Engine and want to know more, go have a look, and you can also ask me after.

For the demo, the first one is a video. What you're seeing on the screen right now, on the right-hand side, is our test suite being executed. On the left-hand side is the communication between our test cases and the Appium server.
I want to point out that these test cases are platform agnostic; they do not depend on a specific platform. We're running the suite on tvOS right now, and the main functionality of this test suite is to iterate through the different posters of the app. As you can see, it's checking the presence of the static UI elements in the app, and the behavior of the app and the flow it goes through. I want to reiterate that these test cases are platform agnostic and do not depend on platform-specific behavior: we're running the same suite on Tizen TV, Android handset, Apple tablet, and macOS, all testing the same behavior and functionality.

Now we're going to go to some live demos. The first thing I want to show... this is the Appium server. Okay. I just want to highlight here: these are the capabilities that we use for everything. Automation name would be YouiEngine for us, then platform name. Here we're not using the Mac driver (there is a Mac driver), so since we're not using it, we add the "YI" in front of the platform name. And to be able to connect to our socket, we need the IP address of the device. What's cool about that is that once the app is on the device, you don't need a cable; it can be running from anywhere. Here we just have localhost on the Mac.

Let's launch the app. This is one of our TV apps. As we mentioned, you can navigate with keys, and you can navigate with gestures and clicks. But what I want to show you is Appium Desktop. You all know about Appium Desktop, you've used it before. I thought it was pretty cool; that was one of my ideas for actually getting this running, and everyone at the company was super amazed by it. What's cool is that when you hover over things, it shows you the different elements. I'm going to select one and then find it here. And here you have all your attributes. Here we see it's not hittable, so we know this is not the button.
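The capabilities shown in the demo might look roughly like the following. The key names follow the conventions described in the talk (YouiEngine automation name, "YI"-prefixed platform name, device IP for the socket), but treat the exact keys and values as assumptions to be checked against the driver's README.

```javascript
// Illustrative desired capabilities for the You.i Engine driver.
// Key names and values are assumptions; check the driver's README
// for the exact capability names your driver version expects.
const caps = {
  automationName: 'YouiEngine',
  platformName: 'YIMac',               // 'YI' prefix: not the stock Mac driver
  deviceName: 'MacBook',
  app: '/path/to/MyApp.app',
  youiEngineAppAddress: '192.168.1.50' // device IP; no cable required
};
```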
So I'm just going to go one above, and now we see this is the button; if I want to interact with it, this would be it. But I don't have a unique selector, because multiple buttons are using the same class, the same name, and the same ID. So what I'm going to do is go into the code and add a testID. All the posters seem to have a unique title, so I'm going to use the title as my testID. This is where we define those buttons, the image we selected earlier, and this is where the title is. I'm just going to take this text, copy it to my button, and rename the attribute to testID. I'll save it, and if you look at the app in the background, you'll see it just got reloaded.

Now we'll go back, refresh this, and select the poster. If I select it, you now see our testID appears in the attributes, and now we have a unique selector. This was grayed out earlier, so now I can actually interact with it at the poster level.

The next thing I want to show is that we can navigate with Appium. If I swipe, I can swipe like this. And we can do key navigation, because, as I mentioned, you can navigate with keys; I mentioned we would use send keys, right? So I'm going to open this up: these are the Unicode characters for each of these navigation keys. For some reason they all appear as question marks, I don't know why. Now, if I send this key, you should see the focus move from here to here... and it didn't work. Do you know why? It's a catch; I did it on purpose. You weren't listening, right? I said it had to be sent to the parent element of the whole scene tree for it to work. So now I'll target the parent element and send the same key. And now it moved. And just to prove we can do more, I'm going to press enter.
And it's weird, because I'm copy-pasting the same thing, but it does something different: now we're inside the poster.

Another thing I wanted to show: you're probably wondering, if you have test scripts for the other platforms where we don't have the installer yet, what do you do? This is what I'm going to show you: we have a special platform name for that. I'm going to launch an app here and go back to Appium Desktop. Did I close it? Okay. This one is called Connect2App. All it's going to do is, it won't launch anything; it just goes to this IP address and sees if it can connect to the socket server on the port we've established. So now it's connected to the app that I launched myself. The cool thing about this (let me go into an element here and click a poster) is that the reason we started it was to be able to debug our automation layer. What I can do now is put breakpoints in my automation layer, and once I tap on something, it breaks in the code and I can step through and debug.

One last thing I can show you. Originally, with our source tree, we would publish everything from the scene tree, and a lot of developers would load multiple screens in parallel so that transitions would be a lot faster. What would happen is, when you look at the source tree, you'd start seeing this: I'm trying to click on Aladdin, and I have these posters above it blocking me, because the sets of coordinates are colliding with each other. That would also create problems where you might have duplicate names of elements and so on. We solved this by filtering the source tree to only what is displayed. So here I have these caps, and the new default is that I only show the source tree of the displayed elements; if you want the full source tree, you toggle this setting.
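The displayed-only filtering can be sketched as a recursive prune: any element that is not displayed is dropped together with its whole subtree, which is what removes the preloaded screens hiding Aladdin. The `displayed` flag and node shape are assumptions for illustration.

```javascript
// Prune the source tree to displayed elements only. An undisplayed
// node is dropped along with its entire subtree. The `displayed`
// flag and node shape are assumed for illustration.
function filterDisplayed(node) {
  if (!node.displayed) return null;
  const children = (node.children || [])
    .map(filterDisplayed)
    .filter(child => child !== null);
  return { ...node, children };
}
```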
If I start it again and go to the same page, you'll see that now we can actually click on the title of the app, so now I can select this. And the source tree becomes a lot simpler and easier for developers to write their test scripts against.

So that's pretty much the presentation. The key takeaway is that Appium is easily extendable, and there's a good community that can help you if you want to try something new. That's pretty much it. Thank you.

Hello. I don't really want to ask about the UI, but regarding Appium Desktop: is it also possible, if we run the app from Xcode, to directly attach the process to Appium Desktop without re-launching the app like you did?

What's the difference between what I did and what you're asking? You want me to launch the app through Appium instead of manually launching it?

So you're launching the app, and not restarting the app from Appium Desktop, right?

Yes. I use this platform name called Connect2App, and what that does is it launches our driver without the installation process; it just connects to the app. It does exactly the same thing it would do, minus the installation part.

Yeah, that's what I mean. With your solution? I don't know. Okay, thank you. Is there a specific capability for that? Really? Okay, great. Because sometimes I found it crashed when we executed it in Appium Desktop. Okay, great, thank you.

It's really cool, Simon. Obviously I've seen a little bit of this before, but I hadn't seen all your demos, and I liked them a lot. I'm curious: for implementing all of this, how much of the work would you say was on the engine side versus the Appium driver side, in the Node.js world? Is it like 90/10, or 50/50?

Good question. Definitely more on the engine side, because we started from scratch with no information there. Maybe 70/30.

Okay, cool. Yeah, I was curious.

Hi, so my question is about the keystroke commands that you just sent, right?
So when you say you support all these platforms as well, what about the commands on a TV remote? There are so many keystrokes, right? Do you handle all of them, or just the basic ones?

That's something still to do, but interestingly, I've been talking to a few people and I think I have a few options for that. I don't know if any of you were at the other TV app presentation, about HbbTV or something, but he was using keys from WebDriver, so that's something I'll definitely need to look at, whether I can use the same thing; I think you can define any keys in there. If not, the one who was presenting just before added support for tvOS, and for that he used another set of commands to send the keys, and he was able to send menu and others. If he can do it with tvOS, we can definitely do it. Thank you.

Hi, my name is Rish. I would like to know how you make sure different versions of Android, different versions of iOS, and the other platforms are supported well with this You.i Engine. For example, say we have Android KitKat and it supports a certain set of APIs; how does the engine make sure it works well with what's provided by the vendor?

When we build our apps, we always try to build for a few versions below, so that it supports the older versions, and when a new version comes out we go and test it and see if something changed. Like now, Apple just announced iPadOS; that will definitely make a change, and we'll have to adapt our builds and scripts to it.

And you said it's going to be one engine for all the different types of platform, right? One engine that works, using the universal socket, for all the different types of device, right?
So when, let's say, iOS comes out with a new version, and the same for all the others, if something changes in one platform, you're going to make changes in your You.i Engine and make sure that...

In the You.i Engine we have all these hooks for each platform, so we do all the abstracting for you; the developer doesn't have to care about the platform. But when we port our engine to a platform, we definitely do some work to create all the hooks into that platform.

One more question: what learning curve do I need in order to use this?

To use Appium, or to use our version of Appium? To use the You.i Engine one: well, it's all the same WebDriver protocol that I showed, except we have some custom pieces, like the key navigation. Theoretically it should be a very short learning curve; we actually have customers who knew Appium before they started using this, and they jumped right in.

One last question: how can we extend it in case we want to support some custom APIs? What language do we have to start with? Suppose we face some issues; say we find issues in Appium, we would go and contribute to the Appium source code. In this case, when we face issues, do we have to work with the You.i Engine team? And for a custom API, suppose it's something only we would like to have, how does that go? Do we make a request to you and you support it, or do we implement it ourselves, something like a plugin or an add-on?

I think it would be a discussion, but usually the customers ask us to do it for them. We had one customer actually make a contribution to our driver: he found a typo somewhere and just corrected it. Thank you. Any other questions?

Hi, I'm Amrit. You mentioned the desired capabilities to automate the TVs and other things, right?
Do you have any documentation of all the desired capabilities I have to give for a specific TV or other platform? I could see the GitHub code and the errors it throws for missing capabilities, but I don't see any documentation for the usage, basically. In the Appium GUI you showed some desired capabilities; what do we have to specify if we're trying to automate a TV? Maybe you said you have to give its IP.

Maybe I can just show you. If I go to our GitHub repo... here. I explain here, for iOS, the different capabilities you would define, and for Android, macOS, tvOS. You're asking about the ones that are not in there: for those, right now, you would use this Connect2App, because the installer is missing, but you can still run on those platforms using it. Once we add the installers, each will get its own platform name. This is really the universal entry point to our automation layer.

And then, just to continue: this is the list of commands that we have, and the version of the engine where we started supporting each one. We have a list of attributes here, and some settings. I didn't demo this, but we can do time dilation inside our app: if we want to shrink the transitions between the different screens, I can actually say I want the transitions to be five times faster.

And just to finish: these are the selector strategies we support, so we have ID, class name, and accessibility ID. And if you define a testID in React Native, it actually overrides the ID: when the ID search strategy starts searching, it searches for the testID first, and if it can't find it, then it searches for the ID.

So, we can take other questions offline. Demos are always fun, they lead to so many questions. Thank you so much.