Hello, everyone. Welcome to APM Conference 2021. In this session we'll be talking about building an iOS device farm from scratch with David Helkowski. Without further delay, I'll hand it over to you, David.

All right, thank you for the introduction. As you just said, I'll be talking about building an iOS device farm. Before I get into it, I want to thank some of the people who made this possible: T-Mobile, who originally let me make this an open source project in some of its earlier incarnations; LambdaTest, who is currently a sponsor of this project and is using it in their product (they'll be releasing it soon, or it's part of it right now, I'm not exactly sure); and Daniel Paulus, for being very cooperative about sharing information, and for his software, which is actually used by the software I've made. The first thing I'd like to do is show a demo of what's been created, so you have an idea, as I present the different portions of it, of what exactly has been done. So without further ado, the demo. What you see in the upper right is a live feed of four phones connected to the system, and on the left is what you'd see when you log into the system to control some phones. I'm just going to quickly select a couple of phones here. When you select a phone, you see a video feed of it. You can interact with it, swiping back and forth; you can click elements as if you were interacting with the real phone in person, and you can see the live video of the phone matches up almost exactly with what I'm doing on it.
I want to demonstrate that you can see each phone and interact with it in the same way, even though these are all different models of phone. I also wanted to show that you can type into the phone using a keyboard: you can type smoothly, and effectively do manual tests remotely without having to be there in person. That's about it for the demo, so I'll move on.

First, a little about my background. I actually don't have any extensive background working with Apple devices or Apple software. My only background in it is that some years ago I did phone tech support for Apple products, which at that time was mainly Apple PCs. It's kind of interesting that I ended up working with Apple products, but I do have an extensive background in developing full enterprise systems; I've specialized in enterprise content management. I've worked with a great number of languages over the years, which made it a lot easier to develop something from scratch that draws on a lot of those things and on reference code.

Some of the project background: originally, some software was created for STF. STF (Smartphone Test Farm) is an open source device farm, mainly for Android devices. When I was at T-Mobile they asked me: could you alter STF to also support iOS devices? That ended up producing an open source project called stf-ios-support, which did exactly that. It took around a year, on and off, through various incarnations, and a lot of the research I'll be presenting was part of that.
What I found at T-Mobile was that I was somewhat restricted: they didn't want to build anything from scratch, they wanted to modify the existing STF, which is mainly Node.js, and that ended up being problematic because it's somewhat legacy at this point. In the end it worked somewhat, but not as well as I wanted. I was working through a consulting company at T-Mobile, and when I left, I decided to rewrite everything from scratch, fresh, without using STF, to solve a lot of the architectural problems and security issues. That's what triggered this whole thing: I said, hey, I actually want to continue working on this and make it even better. The demo you saw is the result of that work. I've actually sold the resulting product back to T-Mobile, which is interesting after leaving there. And there are two main clients: one of them wishes not to be named, and the other, as I said, is LambdaTest.

To start, let's discuss the different things that go into making a full device farm; think about the requirements of what a device farm is. Generally you think about the video display you saw; you want to be able to interact with it, touchscreen taps, swipes and so forth, entering text. You want to install your apps, then run tests on those apps, or conversely run tests against your mobile websites. For apps, that would be XCTest, WebDriver, and Appium scripts; for mobile websites it would mainly be Selenium, though you can also use Appium to control your mobile apps.
You also want to be able to monitor those tests: what is their memory usage, their CPU usage, and so forth. Then there's a whole slew of other things you might want: rotating the device back and forth; throttling the network so you can see what happens with your apps or sites when someone has a bad connection; battery usage, meaning how much drain your apps cause on the device. You may want to record video of certain test runs or failures, so people can see what's going on when an automated test fails. I'm not going to go through all of these, because there are a lot of them, and I want to move on to how some of these features were created.

Given those requirements, the first question is: what already exists that can be built on? The core problem is how you actually automate Apple devices at a low level, and the main thing that's been created for that is libimobiledevice, an open source implementation of the Apple software that talks from the PC to the phones. It's quite capable, but there are some things it doesn't have, and it's also quite complex to understand how it works and how to integrate it into your projects. It's actually a re-implementation of the MobileDevice framework, a private framework made by Apple. People reverse engineered it, dumped its binaries to see what calls it has, and that's generally what all these tools are doing: re-implementing the calls that the MobileDevice framework already has.
There's also WebDriverAgent, which is an XCTest that runs on the phone, but it's long-running and provides an API you can make calls into; instead of writing an XCTest with manual actions, it basically waits for you to send it commands and then executes those XCTest commands. Then there's ios-deploy, open source software that calls the private MobileDevice framework, mainly to install apps on phones. One of the new things created in the process of all this is go-ios, which Daniel Paulus created, and I believe he has discussed some of that. And there's iosif, my own implementation of various calls on top of the MobileDevice framework, which only runs on macOS.

The first consideration is how you actually detect devices plugged into a machine, in order to activate them and show them on the farm. You could use a LaunchAgent; launchd is an Apple facility for activating software, sort of like systemd but for macOS. It has some limited support for activating software when you plug in a USB device, but it doesn't work as you'd expect: it doesn't give you events when a device is plugged in and then removed; it basically just repeatedly tries to do something while a device is present. So it doesn't really work for this purpose. Another option is polling: run any software that lists connected Apple devices, call it repeatedly, and watch for devices appearing or disappearing. That's obviously not a very good solution. The next option I tried was a USB device driver, and that actually works decently well.
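Even though polling is dismissed here as inelegant, it's the easiest variant to illustrate: keep the last snapshot of connected device UDIDs and diff it against the current one. This is a minimal sketch of the idea only; the `list_udids` callable is injected (a real farm might shell out to something like `idevice_id -l` from libimobiledevice), which also makes the logic testable without hardware.

```python
def watch_devices(list_udids, on_attach, on_detach):
    """Return a `step` function that, each time it is called, diffs the
    current device list against the previous one and fires callbacks."""
    known = set()

    def step():
        nonlocal known
        current = set(list_udids())
        for udid in sorted(current - known):
            on_attach(udid)   # newly plugged in
        for udid in sorted(known - current):
            on_detach(udid)   # unplugged
        known = current

    return step
```

Each call to `step` is one polling tick; `sorted()` just makes the callback order deterministic when several devices change at once.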
You take the basic stock macOS USB device driver code and make it do nothing except send a notice to some software; it just pretends to be a driver for the device, so you get notified when the device is plugged or unplugged. This was actually used in some of the earlier incarnations of this. Then there's the Apple MobileDevice framework, which is what I used for a while; iosif is my own implementation that uses it to get notices from the official private Apple framework. libimobiledevice also has the ability to do this: you can look at its implementation of listing devices and sort of turn that into events. I haven't actually done that, because it's inconvenient. Also, go-ios by Daniel Paulus added a feature that matches my iosif implementation: you run the command and it just reports when a device has been connected or disconnected.

The next thing to consider is how you actually display the video of a phone on a computer, and through the web. One method is screenshots; unfortunately, the way screenshots work through the Apple tooling has problems, which I'll get into on the next slide, since this is just the overview. So there are screenshots. You can use an HDMI dongle: Apple has their own dongle you can plug in that has an HDMI output, and then you use an HDMI capture device after that to get video. There's AVFoundation, Apple's framework for streaming AV content, which has the ability to get at the video of a plugged-in device, and FFmpeg has an implementation that uses AVFoundation which you can use. And once again, Daniel Paulus wrote software that also interacts with this AVFoundation method.
There's ReplayKit, which is what's called an extension on iOS; you can use ReplayKit to stream video via RTMP or JPEGs or something else. And then there's WebDriverAgent again, the XCTest that runs on the phone. There are private XCTest calls that let you take an image of the full screen, so WebDriverAgent, which was originally created by Facebook but is now maintained by Appium, has the ability to provide a streaming MJPEG: basically it repeatedly takes screenshots, but in a way different from the official Apple method, so it's slightly better in some respects.

I'll discuss a few of these in a little more detail. The problem with screenshots is that they're PNGs at full resolution, so a single image can be multiple megabytes, and the maximum rate you can get them at is only around two frames per second, which is not good for interactivity or for the way it feels. Unfortunately, Apple does not provide any way, through their API to a USB-connected device, to get a lower resolution or to get JPEGs instead of PNGs.

As I said, there's the HDMI dongle method, and pictured here are some devices. The one in the top middle is actually a full computer that does not work the same way as an official Apple dongle; it uses a different streaming method, but it's interesting to see there are different ways to do it. The dongle on the top right is just a splitter, so you can power the phone as well as chain another dongle behind it. The one on the top left has network as well as HDMI as well as USB. The problem with this approach is that you get a stream of video encoded as H.264 from the capture device, which you then need to re-encode into JPEGs.
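For reference, the MJPEG streaming mentioned above is just an HTTP `multipart/x-mixed-replace` response in which each part is one JPEG. A minimal sketch of framing a single frame; the boundary string here is an arbitrary assumption, not WebDriverAgent's actual one.

```python
BOUNDARY = b"--frame"  # arbitrary boundary (an assumption, not WDA's real one)

def mjpeg_part(jpeg_bytes: bytes) -> bytes:
    """Frame a single JPEG as one part of a multipart/x-mixed-replace
    HTTP body -- the format commonly called MJPEG over HTTP."""
    return (
        BOUNDARY + b"\r\n"
        + b"Content-Type: image/jpeg\r\n"
        + b"Content-Length: " + str(len(jpeg_bytes)).encode("ascii") + b"\r\n\r\n"
        + jpeg_bytes + b"\r\n"
    )
```

A server would write one such part per captured frame onto a response whose `Content-Type` header declares `multipart/x-mixed-replace; boundary=frame`, and browsers render each new part in place of the previous one.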
In the bottom left is a Jetson Nano, which is capable of simultaneously decoding four H.264 streams and re-encoding them into usable JPEGs. Or you could use something like a Pi array, which is basically what you see in the bottom right.

The other method is AVFoundation. If you open QuickTime on a macOS computer, you can actually get a stream of the USB-connected iPhone by selecting the device there; it just shows up. That's why the slide says AVFoundation/QuickTime, and it's also why the software Daniel Paulus made is called QuickTime Video Hack: he basically re-implemented what QuickTime does in order to activate the video. AVFoundation can also do it by itself without QuickTime, so the name is sort of misleading; it's really AVFoundation, and QuickTime just happens to use it. Apple actually sort of documented how this works: at one of their conferences they said, oh, you send this basically secret activation command, which is not documented anywhere else except that conference, and then it activates the video. Both Daniel and I have reverse engineered what that command does behind the scenes; it sends a special control packet.

The problem is that Apple just sort of threw this together. They actually use it for their own device presentations: when you see them presenting a device at a conference, they're using this feature. You can tell, because when you use it, the stream from the phone fakes out the top bar: it shows a full battery charge no matter what the actual charge of the device is, because they don't want to be on stage showing less than a full battery. So if you use this method, you can't read the device's actual charge from the bar.
It also shows a fixed time, because Apple has this weird thing where any presented device shows a specific time, or some such nonsense. So it does some strange things. The way this actually works is that the phone encodes the video in an optimized way to H.264 and sends that raw H.264 as NALUs (Network Abstraction Layer units), each of which is a chunk of video data. You can then process that H.264 stream and turn it into JPEGs or something else. One of the issues with this is that it's not scalable: if you have, say, 10 devices connected to one machine that's providing those devices to a farm, you'd have to decode 10 simultaneous H.264 streams and turn them into JPEGs. Or you could stream the entire video stream out to the user, but then they'd need to decode those H.264 streams in their browser, which some users may or may not be able to do.

The other method is ReplayKit. Apple provides this extension mechanism, as I said, that lets you write an extension for the phone that can run across the whole system; it receives the video data, and you can do whatever you want with it. The issues are that Apple's examples are very bare-bones and don't do much, and while open source examples do exist, a lot of them don't work correctly if you try to use them, they're very flaky, or they're very hard to integrate. The issue with making your own version is that there's a very strict 40-megabyte memory limit on this extension: if you ever go over that amount, the system just arbitrarily kills your extension, and that's very hard to debug. So you have to be very careful with how much memory you use.
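To make the NALU handling above concrete, here's a hedged sketch of splitting an H.264 Annex B byte stream into NAL units. It assumes the 3-/4-byte start-code convention, trims the stray zero that a 4-byte start code leaves on the previous unit, and ignores corner cases like trailing zero padding; a real pipeline would consume the stream incrementally rather than as one buffer.

```python
def split_nalus(stream: bytes) -> list:
    """Split an H.264 Annex B byte stream into NAL units.

    Start codes are 00 00 01 or 00 00 00 01; the extra leading zero of a
    4-byte start code shows up as a trailing zero on the previous unit
    and is trimmed. Start codes themselves are not part of the output.
    """
    positions = []
    i = stream.find(b"\x00\x00\x01")
    while i != -1:
        positions.append(i)
        i = stream.find(b"\x00\x00\x01", i + 3)

    nalus = []
    for idx, pos in enumerate(positions):
        begin = pos + 3
        end = positions[idx + 1] if idx + 1 < len(positions) else len(stream)
        nalu = stream[begin:end]
        if idx + 1 < len(positions) and nalu.endswith(b"\x00"):
            nalu = nalu[:-1]  # zero belonged to a 4-byte start code
        nalus.append(nalu)
    return nalus
```

The low five bits of each unit's first byte give the NAL type (7 is an SPS, 5 an IDR slice, and so on), which is how a re-encoder knows where decodable frames begin.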
Also, you'd expect you could use system logging to diagnose things, adding log statements in the middle of your code to see what's going on. Nope, that doesn't work. Sometimes it works, but most of the time it just throws your logs in the trash, because it treats the extension as privileged code in the system. You can sometimes connect Xcode to it to breakpoint in the middle, but the problem is it's constantly receiving frames from the system, and if you hit a breakpoint and pause to look around, it starts freaking out and basically crashes. So this stuff is very hard to debug.

The examples I initially tried were using RTMP video streaming, which worked terribly. In some instances maybe RTMP can be great, but I can tell you that streaming quickly within a 40-megabyte memory budget, in an optimal way, with all the procedural setup just to display it in a browser, is very complex. In the end I just chucked it and said no, this is not worth it; it was too hard and the latency was bad. It's much, much faster, with much lower latency, to just encode to JPEGs. That's what I'm doing in the current implementation: I receive the video data, quickly encode it using hardware compression on the phone itself, and send those JPEGs out. So the machine the phone is connected to, macOS or Linux now, doesn't have to handle any encoding or decoding, which makes it far more scalable: you can connect many devices without your host machine getting bogged down.

The next thing to consider, once you have video working, is how you actually emulate touches on the devices: taps, long taps, and the gestures you saw earlier.
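Stepping back to the JPEG fan-out just described: one simple way to keep many viewers from bogging down the host is a "latest frame wins" broadcaster, so a slow viewer skips frames instead of building a backlog. This is a sketch of the idea only, not ControlFloor's actual code, and a real server would add locking around publish/poll.

```python
class FrameBroadcaster:
    """Deliver only the newest JPEG frame to each subscribed viewer.

    A slow viewer simply skips intermediate frames instead of queuing
    them, which keeps per-viewer memory bounded. Single-threaded sketch.
    """
    def __init__(self):
        self._pending = {}  # viewer id -> newest undelivered frame (or None)

    def subscribe(self, viewer_id):
        self._pending[viewer_id] = None

    def unsubscribe(self, viewer_id):
        self._pending.pop(viewer_id, None)

    def publish(self, frame: bytes):
        for viewer_id in self._pending:
            self._pending[viewer_id] = frame  # overwrite any stale frame

    def poll(self, viewer_id):
        frame = self._pending[viewer_id]
        self._pending[viewer_id] = None
        return frame
```

Each viewer's stream loop calls `poll` as fast as it can write; frames published while it was busy are silently replaced by newer ones.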
There are different options here, and I'll discuss some of them. For taps, you can use XCTest: its actions let you do different things, and WebDriverAgent sits on top of that layer. As I said, it's a long-running XCTest, and it provides an option called tap. You can use tap to execute a tap, but unfortunately it's very, very slow; the performance is terrible. You don't want to do this, because there's about a second of lag between executing the call and the tap actually occurring. Then there's a thing called touch perform, which lets you perform a sequence of actions: click here, move your finger there, then move it somewhere else. That works much better and doesn't have the full-second delay, but it still has about a quarter- to half-second delay because of the way it works. Ultimately, under the hood it calls a private XCTest function called synthesizeEvent, and calling that directly ends up being the best way to do this; it's very performant. So I'm actually using a modified version of WebDriverAgent right now that strips out the other things WebDriverAgent does that might slow it down and add lag, and it just directly calls synthesizeEvent to make the taps.

Another thing I researched was whether you can run VNC on a phone: is there any app or method that does that? I scoured open source looking for one. There is something, but it only runs on jailbroken devices: a piece of software called Veency, which works great on jailbroken devices; it shows video and lets you control the phone very efficiently. But since it's only for jailbroken devices, most companies don't want to touch it, and it also doesn't run on the latest devices or the latest iOS versions.
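The touch-perform sequence described above (press, move, release) is usually expressed as a small JSON action list. Here's a hedged sketch of building one for a swipe; the vocabulary mirrors Appium's TouchAction JSON, but treat the exact action and option names as an assumption to verify against the WebDriverAgent version you actually run.

```python
def swipe_actions(x1, y1, x2, y2, hold_ms=50):
    """Build a touch-perform style action sequence for a swipe gesture.

    The press / wait / moveTo / release vocabulary follows Appium's
    TouchAction JSON; the exact schema is an assumption, not a spec.
    """
    return [
        {"action": "press",   "options": {"x": x1, "y": y1}},
        {"action": "wait",    "options": {"ms": hold_ms}},
        {"action": "moveTo",  "options": {"x": x2, "y": y2}},
        {"action": "release", "options": {}},
    ]
```

The resulting list would be serialized to JSON and POSTed to the agent's touch-perform endpoint; a tap is the degenerate case with no move step.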
As I mentioned, it's actually taken several years to find the most performant ways to do this, to minimize the latency so that interacting with the device feels as smooth as possible. That ends up being complicated, because there are so many small details that can affect performance. Besides directly calling the event, there are things like the way the requests are executed: I'm actually using nanomsg partly to accelerate some of this, because even one-off HTTP requests add extra overhead compared to just sending a message across a socket. When you start chasing the latency needed to make it feel as if you have the device in person, it gets very finicky in those small details.

The next thing is text entry: how do you do that? WebDriverAgent provides for this too; you can call the keys command to enter some text. Unfortunately, if you're typing on a keyboard and sending those requests to WebDriverAgent over HTTP, it will scramble and mix together the text you've typed, and you end up with text in the wrong order. You can sort of work around this by sending keys slowly and watching for the responses, but it's very finicky; it doesn't really work. I used this initially but have abandoned it, because it's just a bad solution. Also, certain control keys don't work: you can send something like an enter key, but backspace or delete don't work properly, and things like home aren't possible. So there's another call that was recently added.
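The ordering problem just described is essentially a concurrency issue: per-keystroke requests fired from a browser can complete out of order. One mitigation is to funnel all text through a single worker that sends one chunk at a time. A sketch of that idea; the `send` callable is injected (a real version would POST to the agent's keys endpoint, which is an assumption about your setup), so the logic runs offline.

```python
import queue
import threading

class KeySerializer:
    """Deliver typed text strictly in order through a single worker.

    One request per keystroke can complete out of order and scramble
    the text; funnelling every chunk through one thread that blocks on
    each send preserves ordering.
    """
    def __init__(self, send):
        self._send = send
        self._q = queue.Queue()
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def type(self, text: str):
        self._q.put(text)

    def flush(self):
        self._q.join()  # wait until every queued chunk has been sent

    def close(self):
        self._q.put(None)   # sentinel: stop the worker
        self._worker.join()

    def _run(self):
        while True:
            text = self._q.get()
            if text is None:
                break
            self._send(text)  # blocks until this chunk is acknowledged
            self._q.task_done()
```

The browser side can still fire keystrokes as fast as the user types; only the device-facing side is serialized.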
That newer call, added within roughly the last six months, is another private call within XCTest related to IOHID: USB devices like keyboards are human interface devices (HID), as are mice, and there's a call that can actually initiate key events. Interestingly, this call has a limitation. Normally, when you enter a capital letter, you hold Shift, and that's actually two calls at the HID level; this exposed API call doesn't appear to have the ability to combine them to apply the Shift modifier. So everything that can be done through this call is now done through it, and it's much faster than the keys command, but capital letters still have to go through the other, slower command, so I'm actually using a mixture of the two now.

One thing you can encounter here, and it's easy to get bitten by: if you're using WebDriverAgent, make sure to send the HTTP headers saying you want to reuse the connection. WebDriverAgent is not using HTTP/2, which would do that automatically; it's using HTTP/1.1, and if you don't reuse connections and you execute many, many commands, like typing 100 keys just by typing normally on the device, you use up all the file descriptors on your system very quickly and then you'll have problems. I ran into this, so it's a heads-up for anyone using WebDriverAgent and making calls to it.

Another option is to use a Bluetooth keyboard; some companies are doing this. They emulate a Bluetooth device from a dongle connected to the machine, it pairs as a Bluetooth keyboard, and they send keys that way. That works great, and it's one option, but it's slightly more complex; I'm not doing it because it's harder to do.
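The Shift detail becomes clearer with the USB HID usage tables in hand: a capital letter is the same usage ID as its lowercase form plus the Shift modifier, which is exactly the two-step combination discussed above. A small sketch covering a few ASCII ranges, with usage IDs from the HID keyboard page (0x07):

```python
def char_to_hid(ch: str):
    """Map an ASCII character to (usage_id, needs_shift) per the USB HID
    keyboard usage page (0x07). Only a few ranges are covered; a real
    keymap would also handle punctuation and layout differences."""
    if "a" <= ch <= "z":
        return 0x04 + ord(ch) - ord("a"), False
    if "A" <= ch <= "Z":
        return 0x04 + ord(ch) - ord("A"), True   # same key, Shift held
    if "1" <= ch <= "9":
        return 0x1E + ord(ch) - ord("1"), False
    if ch == "0":
        return 0x27, False
    if ch == " ":
        return 0x2C, False
    raise ValueError("unmapped character: %r" % ch)
```

A sender built on a HID-style call would emit the usage ID directly for `needs_shift == False` and fall back to the slower path (or a modifier-press sequence, where supported) otherwise.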
At some point I may switch to that; the other problem is that it isn't scalable in a data center, where with many, many devices you're going to have issues. You could also use a USB keyboard: with a Raspberry Pi or some better device, pretend to be a USB keyboard and plug it in through a dongle. There are advantages to this, because you can get to the task switcher, the view of running apps on an iPhone where you can cancel one out. There's a keystroke on a USB keyboard that can activate it, and there's no other easy way to reach it: on newer devices you can swipe up from the bottom at an angle, but on older devices there's no direct way to do it through a remote-control setup. A lot of device farms, even the commercial paid ones, have this problem, where you have to use the accessibility icon to get to the task switcher, which is painful. So that's one advantage, it would actually be beneficial, and I'll be supporting it in the future.

The next thing you want is to install apps. libimobiledevice provides ways to do this. The previously mentioned ios-deploy, and iosif, the thing I wrote, can also install apps by using the MobileDevice framework, and Daniel Paulus's implementation provides this ability as well. One thing to keep in mind is that any app you install needs to be signed for that device, using provisioning profiles. Ultimately what I'll be doing is adding support to automatically re-sign apps, so that your build system can create an unsigned app, send it to the device farm, and have it automatically re-signed with the credentials deployed to whatever devices it lands on.

The other thing I mentioned you need to be able to do is run tests. This was an issue for the first year of this project.
There was no way to do this except through Xcode, and that was very slow and not scalable, because Xcode is very heavyweight on the machine and you can only run one; you only want one simultaneous XCTest on a phone at a time when using Xcode. There are various other issues too, like you can't do it on Linux, because Xcode doesn't run on Linux. And you need this to be able to run WebDriverAgent, because WebDriverAgent, as I said, is an XCTest.

What's being done now: I'm actually using go-ios for this, and it works via a reverse engineered implementation of what Xcode does under the hood in order to start tests. Around the same time that was released, tidevice, by Alibaba, was also released as an implementation that does this. Before Daniel Paulus or Alibaba released their implementations, there was no open source code to do this at all; if you wanted it, you had to reverse engineer it yourself, and no one wanted to share. I feel like the reason is that everyone is worried Apple would attack them for reverse engineering their stuff. But it's necessary in order to make a device farm: if you don't reverse engineer things, you will not be able to do what is needed. Everything is like this: WebDriverAgent is reverse engineered, go-ios is reverse engineered, the stuff I've written is reverse engineered. And Apple is just mum on the subject; they don't say anything about it. I've tried to ask them, I've talked to a few people at Apple, whom I won't name, and they're basically like: yeah, we know people are doing this, we don't care. It's a rather strange situation.

The other thing you want to be able to do is monitor apps and tests, as I said, for CPU and memory usage. The Apple tool that does this is Instruments, part of Xcode.
The only open source thing that actually does this right now is iosif, the tool I wrote: I looked at which commands are available in the underlying private framework, figured out how it works, and made a call that does it. But it only works on macOS, because iosif only runs on macOS so far. It can be ported to go-ios, though, and that's coming in the near future; I'm working with Daniel Paulus on it.

I mentioned the proxy here: it's part of go-ios and lets you dump anything Xcode does. You basically start the proxy, run some commands on your device, and then you can see what Xcode actually did; you could potentially extend go-ios yourself to run those commands. It's very complex; as I mentioned, they use a special serialized RPC protocol that was very painful to implement. I wish they documented this. None of this is documented by Apple, but it's necessary if you want the sort of statistics people want for their tests.

That's pretty much the wrap-up of the different things that have gone into creating this so far. The lesson to learn from it is: be persistent. I say this because a lot of people along the way told me it can't be done. There's no way to do that; it doesn't exist; Apple won't ever allow it; you'd have to reverse engineer that and then you'll get in trouble. I've been told by companies that people don't want to hire me directly for this, because they're like: we don't want to be responsible if Apple gets angry, that they'll come after our big company for creating this stuff. It's pure silliness. And also: try everything. The number of things I tried in the process of creating this device farm is absurd. I'd say 75% of the work was abandoned, not used, because it was things I tried just to see: does this work?
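As a small illustration of the monitoring side, the bookkeeping around per-test CPU and memory samples might look like this. Only the accumulation and summary are sketched; the samples themselves would come from the Instruments-style channel described above, so the source is injected.

```python
class StatsRecorder:
    """Accumulate (timestamp, cpu_percent, rss_bytes) samples for one
    test run and summarize them. The sampling source is external; only
    the bookkeeping a device farm needs is sketched here."""
    def __init__(self):
        self._samples = []

    def record(self, ts: float, cpu: float, rss: int):
        self._samples.append((ts, cpu, rss))

    def summary(self) -> dict:
        if not self._samples:
            return {}
        cpus = [c for _, c, _ in self._samples]
        rsss = [r for _, _, r in self._samples]
        return {
            "cpu_avg": sum(cpus) / len(cpus),
            "cpu_peak": max(cpus),
            "rss_peak": max(rsss),
            "duration": self._samples[-1][0] - self._samples[0][0],
        }
```

A farm would attach one recorder per test run and report the summary alongside the pass/fail result.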
Is there any way to make this work? Is this better than the other options? You have to go through every single approach to find the best way to do it. You can't give up; you have to keep trying, trying, trying, and if you do, you can eventually make it work. What I also found interesting is that GitHub search has become very useful for me: if you know specific keywords in code that may relate to what you're doing, you can search for all the implementations. I've spent hundreds of hours scouring every available implementation that interacts with these phones, because it's all so badly documented; the only way is to find any code that does it, learn from it, and then create other implementations of it.

As I mentioned, reverse engineering is necessary for everything Apple. Apple really needs to get on top of actually embracing the community and going back to their hacker roots; they really don't care about the community of developers, and it's kind of disgusting. That's one of the reasons I stuck with this project: I want to work with the community, I want to cooperate with everybody. I'm tired of all the commercial iOS device farm implementations charging ridiculous amounts of money. Everything I made, everything you see here, is completely open source and completely free. I'm not trying to rip anybody off; I want to make this stuff work, because we need this sort of tooling in order to better automate Apple devices.

So that's it, and I want to give the rest of the time over to Q&A. You can see these slides at controlfloor.com, the domain for this project. There's very little information there, but I did put a version of the slides there. It's a slightly older version, and I'll need to update it and add more links to references, if you want to learn more. All right.
Thank you so much, David. We have a few questions here. The first one is: can we access remote devices from STF if STF is deployed in the cloud?

Yeah, sure. The implementation I did for STF was the first round of this. It's been a year since I've really touched that implementation, and STF itself has essentially been deprecated. The community surrounding STF has forked the project into a thing called DeviceFarmer, of which I'm actually one of the founding members, and I will eventually contribute back the changes that I made. ControlFloor is an entirely separate implementation; it has nothing to do with STF anymore, it's completely rewritten. So eventually I will provide the ability to feed devices from ControlFloor into STF so that people can merge them that way. I'm not really heavily interested in doing that, though: I spent a good year working with STF and I don't like the code base, it's very messy. It will happen at some point, but I really need funding, someone to say, "hey, we'll pay you money to make this happen," because I know how to do it, but I'm just not going to spend my effort on it at this point unless someone pays me for it.

Okay. Yeah. The next one is actually a series of questions. How stable is the current solution for an iOS device farm? Can it be used by private companies, and how much time do you think it will take until it also supports Android? Is it possible to integrate it with Appium? Does it have a lock mechanism that prevents others from using a device while it's in use? Thanks.

There are a couple of different questions there. Taking the last one first: does it have a reservation mechanism so that one person doesn't interfere with another using a device? Yes, it does.
I didn't demo that, but when you select a device to view, it actually shows that it's in use by another person, and you can then kick that person off if you choose to do so; there are options for that. As far as stability goes: yes, it is stable right now. This is currently being deployed by some top companies. My main client, which as I said wishes to remain unnamed, is actually looking to deploy it more broadly within the next two weeks. T-Mobile is also going to be using the solution; they purchased the software from me and will be using it very shortly. I guess they need to contact me and work it out, but yeah. LambdaTest is also integrating this currently; they are using it behind the scenes to provide their offering for controlling iOS devices. Of those deployments, one of the best-supported will be LambdaTest, because they have their own employees who will be adding additional features, monitoring stability, and making sure it works well. This is essentially a one-person project: I am the company, I'm the person who created all of this. I work with Daniel Paulus somewhat, but only loosely; I'm only slightly sponsoring his project, it's not very much money. Despite that, this project has taken multiple years, several years of creating this, so everything has been done to make it as stable as possible. It also runs on both macOS and Linux at this time. I'm currently working through some issues on the next version; the macOS version is the most stable at this point. I've seen some issues with Go 1.17, but if you use Go 1.16 on macOS, it's very easy to set up.
If you're reasonably technical, you can probably get this set up within an hour and have it running with your devices; it's meant to be as straightforward as possible and not confusing. I think I missed one of the questions in there: is it possible to integrate it with Appium? Oh yes, Appium integration. I sort of mentioned this in the middle of the talk: with Xcode you can't run multiple XCTest sessions at once, but you can through go-ios and through tidevice. This is important because I'm using a modified version of WebDriverAgent, which is the underlying support for Appium; Appium needs it in order to work. So you can simultaneously run the support for connecting to the device farm and run WebDriverAgent, and then run Appium on top of that. I haven't yet added support for automatically starting up both WebDriverAgent and Appium for you, but I will be doing that in the very near future. The way the security model works for this system, unlike the STF modification, is that all the commands you issue go to the server portion and then to the provider, which in turn is connected to the device. You don't interact with the provider directly, so essentially the endpoint for Appium will be the server portion, which automatically forwards all the WebDriver calls over to the device, so that the WebDriver traffic is actually authenticated. Appium doesn't work this way right now; it doesn't have any authentication and expects to be using an open WebDriverAgent. That's going to be slightly different, so I'm going to have to tweak it slightly for Appium, but right now you can just run your own WebDriverAgent against devices that are connected to the farm and that will work. I'm going to integrate that a little better in the near future.

Right, okay. So we have one last question.
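For context on what those forwarded WebDriver calls look like, here is a hedged sketch of talking to a WebDriverAgent instance directly over HTTP. The port and bundle id are assumptions; in the farm architecture described above, the same requests would instead be sent to the authenticated server endpoint, which forwards them to the device.

```shell
# Assumption: a WebDriverAgent instance is reachable on localhost:8100
# (a common default when forwarding WDA's port from the device).
WDA=http://localhost:8100

# Health check: WDA reports its build info and readiness.
curl -s "$WDA/status"

# Create a session that launches an app.
# The bundle id is illustrative; the exact capabilities payload WDA
# accepts can vary between versions.
curl -s -X POST "$WDA/session" \
  -H 'Content-Type: application/json' \
  -d '{"capabilities":{"alwaysMatch":{"bundleId":"com.apple.Preferences"}}}'
```

Appium normally issues these same calls itself, which is why routing them through an authenticating server, as described above, requires a small tweak on the Appium side.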
Is it possible to build a smart-TV farm? Any suggestions on how to start?

Yes. LambdaTest has actually asked me to add support for Apple TV devices. It should be doable. I don't yet know all the details, mainly because I don't know whether it supports a broadcast upload extension like iPhones do, which is the current way I'm showing live video from the device. If it does, great, that will work perfectly; if not, I'll have to use some of the other video mechanisms that I mentioned. WebDriverAgent already has support for it, so it essentially should work. Like I said, I'm working with LambdaTest and they are paying me to add support for it, or they will be, so you'll probably see this working maybe within a couple of months from now. I have a lot of other things to do as well; as I mentioned, there's a whole slew of requirements, and mainly only the basic bare-bones MVP is working so far. But it's stable now. That's what I've been working on for the past four months, making everything as stable as possible, and now that stability has been reached, I will be adding a lot of the additional features, as well as making the TV devices work.

So I think that brings us to the end of the session. Thank you so much, David, for sharing your experience today. Thank you for joining my talk as well.