Hello everyone, and welcome to this session of the Embedded Linux Conference. My name is Tim Bird. I'm a principal software engineer at Sony Electronics, and I'm here with Harish Bansal, who's a technical engineer at Timesys. This is an update of a talk that we've given in the past about the REST API that we use for automated testing, and we'll be explaining some of the concepts behind that today. So welcome to our session.

This is just the abstract of the talk. I'm not going to read through all of it, but it's something I like to tack onto the slides so that people coming in later can use it as a quick reference for what we're going to be talking about.

Here's the outline for today, in case you haven't seen our previous talks on this topic. We're going to do a quick review of the REST API concepts, and in particular we're going to talk about the board farm resource model, which is an important part of the APIs that we've developed for automated testing of boards in a board farm. But the meat of the presentation is really the updates since last year: what have we accomplished in the last year? We'll be talking about some audio testing APIs and giving demos of those, and then about web application testing, in particular the integration of the APIs into a Jenkins CI pipeline, and how to use these APIs from inside a Jenkins CI pipeline. Then we'll wrap up with a discussion of future directions.

We'll start with a problem statement, and this is the same problem statement we used to introduce the API concept a couple of years ago now. Basically, there are many tests available for embedded Linux platforms, but there's no standardized way of running tests on physical devices. There are a lot of different test frameworks, things like Fuego or Jenkins or KernelCI, and there are some board farm frameworks, things like labgrid, but there's no standardized way to use different tests among different board farms.

The reason for this is that every board farm implements its test infrastructure differently; each farm uses its own set of hardware. Because there are no standards in this area, people cobble together their own scripts to control the hardware in their lab: things like the power control, the networking, and the deployment to the devices. The end result is that tests written for one lab, because they're tied to that lab's control scripts, won't work in other labs, because those other labs have different scripts. So nobody can share tests, which is a bad thing: we're in an open source ecosystem, and we'd like to be able to build upon each other's work and collaborate.

Our solution is to create a standard method to access the hardware in a board farm, to access the boards in a board farm. This gives us a couple of nice benefits. One is that board farm technologies can evolve separately from the interface to the farm, because we've now got an abstraction layer there. But the most important thing is that tests can be written that work in more than one lab, and test frameworks can work with more than one lab if they're integrated with that API. Those are all really nice benefits that we'd like to see.
If you look in general at the problem space, you'll see that if you're doing more than just software testing, if you're doing hardware and software integration testing, you'll almost always have some kind of hardware that's off the device under test. Take a very simple test like a GPIO test: on the device under test you're going to toggle the GPIOs, and those are going to turn signals on and off, but you need a device that's measuring that in terms of hardware. So you need to be communicating with some other piece of hardware in your lab, and you need two endpoints to control: one on the device under test and one on that other piece of hardware.

The same goes for the rest of the things on this slide. For audio and video playback tests, if you're actually testing the hardware, not just the software in loopback mode, you need a capture device that is off the board; not "maybe" off the board, really off the board. Same thing with power measurement: if you're using an external power monitor, you apply your workload to the device under test but then capture the power measurement data from the external monitor, so you need to talk to a separate device. Same thing with USB robustness. In all of those examples you need to be communicating with hardware in the lab that is not the device under test, and if you're going to do that in a lab-independent way, you need an abstraction API to do it, which is exactly what we're proposing with our API.

The API actually has two high-level concepts that I think are really important. The first is the API between the test framework and the lab. You have multiple test frameworks, and if you want a single test framework to be able to talk to boards in different labs, you need some kind of abstraction to do that. So the first abstraction we have as part of the API is how to communicate with boards in the lab, no matter which lab they're in.

The second abstraction, on the next slide, is how to talk within an individual lab: how do you communicate with the resources, the other pieces of hardware in the lab, that you're going to be controlling? If you're doing power control of the board, you're using a device that's not the device under test, and the same is true for power measurement, audio and video capture, networking, GPIO, storage, all of that. We have APIs that we use to communicate with those pieces of hardware, and we've tried to abstract the operations so that it doesn't matter which power control unit you're using in your lab; the test can use the same APIs, and that's what's going to allow us to reuse tests between labs.

The REST API itself actually consists of two parts. It's a web-based API, based on HTTPS and JSON, and then there's a command-line interface that goes along with it. The REST API is based on an extension to the LAVA REST API; LAVA is a fairly industry-standard platform, and it stands for Linaro Automated Validation Architecture. But because our API is just HTTP calls and JSON, it can be implemented completely with just curl and jq (jq being a JSON query tool).
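To give a flavor of that, here is a minimal sketch of what a raw call might look like with just those two tools. The URL path, token header, and JSON field name here are illustrative assumptions, not the exact ones from either server implementation:

```sh
# Hypothetical raw REST call: ask the lab server which resource
# captures audio from a given board, using only curl and jq.
SERVER=https://lab.example.com
curl -s -H "Authorization: Token $LAB_TOKEN" \
    "$SERVER/api/devices/rpi3/resources/?resource=audio" | jq -r '.resource_id'
```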
So if you wanted to type quite a bit, you could actually perform everything you needed manually on the command line with just those two Linux tools. But we've also provided a command-line tool, actually a couple of different implementations of a command-line tool, that does the same operations as the REST API. The command-line tool makes it much more suitable for human use, and also for automated use, so it's scriptable.

We have multiple implementations of those. There's one from Timesys, which is a shell-based client; the server for that is based on the LAVA server, which is based on Django, and that's in production now. You can get the client at their git repo. There's a separate implementation called labcontrol, which has a plain CGI script on the server side and a Python client. So the clients are available in multiple implementations. Labcontrol is also source-available now, but I'll warn you that it's alpha-level quality, so don't get your hopes up too high. But if you want to play around with it, it's available.

We did a bunch of stuff in 2021, a lot more implementations, and I'm not going to go through this slide, because you can go back and look at the presentation we gave at Embedded Linux Conference last year. The main purpose of this talk is what has happened since our last talk.

The new thing we're going to be talking about is the audio testing API, so we have APIs for audio capture. The other big thing is integration with the Jenkins pipeline. There's also a minor feature: Timesys already had a web terminal feature, and that was added to labcontrol, so there's a little more feature parity there, but we're not going to talk about that today.

I want to briefly introduce the audio API. We did a couple of things when we added it. First, we had to add the audio resource type. We already had a command called get-resource, and you can see that there's now an audio resource available in the lab; the second bullet there is an example of the actual command you'd use in a shell script to get the audio resource for, say, HDMI2.

The other thing we found is that we needed a new optional argument called the feature. That's important to indicate the particular input or output item on the board for which the test is being run. In our labs we're testing a Raspberry Pi Model 3, I think, and it has multiple outputs. We're doing a playback test, that's the nature of the test we're running, and we could be capturing that audio from several places: the Raspberry Pi has an HDMI channel and a headphone jack, and there are products that have multiple of those, or even output via speakers. So you need some way, as part of the API, to distinguish which audio element you're actually capturing, the one you're testing. We had to add this feature argument to the get-resource call in order to discover the appropriate resource in the lab for that particular audio element.

So, basically, as we already said, we previously supported the resource types power measurement, camera, and serial, and we've added the new resource type, audio.
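As a sketch of how the feature argument looks from a test script; the client could be the Timesys shell client or labcontrol's lc, and the exact argument spellings here are illustrative assumptions:

```sh
# Hedged sketch: the "feature" argument selects which audio output on
# the board is being tested. Client name and argument spellings are
# illustrative, not the exact syntax of either implementation.
CLIENT=lc
HDMI_RES=$($CLIENT get-resource audio feature=hdmi2)
JACK_RES=$($CLIENT get-resource audio feature=headphone-jack)
echo "hdmi2 capture: $HDMI_RES, headphone capture: $JACK_RES"
```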
What we did was reuse the same capture API that we introduced earlier. It consists of several verbs; for other resource types, like power measurement, we used four verbs: start-capture, stop-capture, get-data, and delete. Those verbs were used, for example, to retrieve power measurement data from a device that measures power, but that was ASCII data. What we found in doing the audio testing is that once we started working with raw audio data, it didn't fit very well into the JSON model, where we send a JSON request and normally get back a JSON response. So we had to switch from get-data, which retrieves a JSON response from the server, to get-ref, where we get a URL path to the capture data on the server. It becomes a two-part operation to retrieve the audio data from a capture. But other than that, the API is basically the same. So we feel the resource model, and in particular the capture API system we have, is lending itself to these new resource types pretty well.

Let's see, next slide. Okay, that was the brief introduction, and now I want to get into our actual use case: a lab-independent quality test of audio playback. You can see the REST API mappings on this slide. The test itself proceeds in a few phases: resource detection, audio capture, audio playback, and cleanup. The middle column shows the actual command lines used for those operations, things like getting the resource ID, then starting and stopping the capture and getting a reference to the audio file that's created as part of the capture. During the playback itself we use upload to upload the test file and ssh-run to execute a playback command on the device. During cleanup we use the audio delete for the captured playback data in the lab, plus some local commands to clean up data. The column on the right shows the actual URLs the REST API uses for those operations.

The high-level concept here is that from both Fuego and from Timesys's tools, we can run tests on the Sony embedded board farm, sitting right next to me here, with a Raspberry Pi board and some audio cables hooked up, and the same thing is true of the Timesys lab. For this particular demo I'm going to run the test from Fuego, and then Harish will show Timesys's remote tooling running the test on their lab as well.

So here's the hardware configuration. In each of our labs we've got a device under test, which is a Raspberry Pi, hooked up via the network to the lab server. Each lab also has a lab resource: some kind of microphone capability, or rather just an audio input or recording device. The audio input is hooked up to the board's audio output, and the lab knows about that connection, so we can query the lab for which lab device is going to be recording the audio. We discover that at runtime for the test; that's part of the abstraction of the resource model.

The actual test sequence consists of: discover the lab resource by communicating with the lab; put an audio test file on the device under test; initiate the capture on the audio device; initiate the playback; end the capture; and then the real work of the test, comparing the played versus captured data (we'll explain the tools we used for that). The last couple of steps are removing the data, and we have to remove it in a couple of places: on the lab resource, on the device under test, and on the test host. So we clean up all of the temp files. Notice that in the sequence diagram the arrows go back and forth to different devices as we stage things for the test.

So here's what the API looks like in practice: we have the audio playback test as a shell script. We use a CLIENT variable to indicate the client, because sometimes we're using ebf and sometimes we're using lc. The first thing we do is get the resource, by doing a get-resource audio with the device ID, which is the feature, and then we upload the audio file. This is just to give you a flavor of what it looks like; it's a little bit ugly, because this is shell scripting, so you see a lot of dollar-sign variables. You can see that we get the resource in the first part and use that resource later when we're doing the audio capture: we do token=$($CLIENT resource audio start ...). We run a playback command on the actual device under test using the audio file we uploaded. Then we stop the capture, get the reference via the token, and download the captured audio. Then we do the data analysis, in this case using alsabat and sox; I'll talk more about those and how we use them later. And then we use some more API calls for cleanup, in the lab and locally.
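Pieced together, the script has roughly this shape. This is a hedged sketch rather than our exact production script; the subcommand spellings, board name, and file names are illustrative assumptions based on the walkthrough above:

```sh
#!/bin/sh
# Hedged sketch of the lab-independent audio playback test.
# CLIENT is either the Timesys shell client or labcontrol's "lc";
# subcommand spellings, board name, and file names are illustrative.
CLIENT=lc
BOARD=rpi3

# Resource detection: which lab device captures this board's audio output?
RESOURCE=$($CLIENT get-resource audio feature=headphone-jack)

# Stage the test file on the device under test
$CLIENT upload $BOARD test-tone.wav

# Start the capture on the lab resource, then play on the DUT
TOKEN=$($CLIENT resource $RESOURCE audio start)
$CLIENT ssh-run $BOARD "aplay test-tone.wav"
$CLIENT resource $RESOURCE audio stop "$TOKEN"

# Raw audio doesn't fit a JSON response, so get-ref returns a URL
# reference to the capture, which is downloaded in a second step
REF=$($CLIENT resource $RESOURCE audio get-ref "$TOKEN")
curl -s -o captured.wav "$REF"

# Analysis (alsabat tonal check, sox frequency check) would go here

# Cleanup: in the lab, on the DUT, and locally
$CLIENT resource $RESOURCE audio delete "$TOKEN"
$CLIENT ssh-run $BOARD "rm test-tone.wav"
rm -f captured.wav
```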
So here's a video, and we'll go ahead and start it. This is showing the Jenkins interface inside Fuego. We have a couple of different boards in different labs, and we're starting against a board in Tim's Fuego lab. We do the build, which shows that we're starting instance 35 of this test. In Fuego we do a couple of startup operations to prepare the board, and notice there's some output showing that it detected which resource is capturing the audio from the device, in this case the rear microphone jack on a desktop machine. The test proceeds very quickly, and it shows that it passed all the tests. That was a live test, well, recorded. I ran the exact same tests against boards in Timesys's lab.

Just to show you the type of output we're getting from alsabat: alsabat is a tonal analysis tool, and it shows that we passed there. Then we also did a sox analysis of the data, which showed that our frequency was about the same; well, it was exactly the same, actually. Timesys has a really good audio connection between the board and the microphone in their lab. So that's my test, and I'll let Harish talk about the test that he ran from his host.

My name is Harish Bansal, and I work for Timesys, in the Timesys embedded board farm group. The test which Tim just showed, I ran the same audio test from my local computer, with no physical connection to the Timesys board farm; I ran the same script. Let me show you a quick recorded video of this. This is the same test which Tim ran in Fuego.
You can see similar logging statements to what Tim showed, and similar analysis. It passed: the peak was detected at the target frequency. These are the sox analysis data, and the frequency is exactly right. Now I'll hand back to Tim.

So what did we do in terms of the actual analysis of the data? What was the physical test? We used two different audio analysis programs, alsabat and sox, and I'll talk about both.

alsabat is the name of a program that's part of the alsa-utils project. It's a test that can be used to do frequency analysis of audio data, and it's normally run in loopback mode. So we're taking a test that was developed to test an ALSA driver and the ALSA configuration on a machine, and we've modified it just slightly so that instead of testing only in loopback mode, it can be used to test data that was captured on a different device. To do that we needed to add a new mode of operation: analyzing data from a pre-captured file. It was a very minor modification, actually; we still need to send the patch for that upstream. We think it would be useful for other people using this tool.

One thing about this, actually, backing up: we did not want to run alsabat on the device under test, and we're not running it there, because that's not where the captured data is; we capture the data on a separate device. What is required is that we be able to run alsabat on the host where the test is executing. The only capability we need on the device under test is the ability to play the sound over the audio output device. We used aplay for that, which is also from alsa-utils, but the interesting thing is that it could be anything available on the device under test; if you're testing a product that has its own particular playback command, you'd use that playback command as part of the test. The alsabat analysis itself never runs on the device under test.

The same thing is true for sox. sox is a tool that allows you to perform a number of operations on an audio stream. One of the things you can do is sequence a bunch of filters, and one of the filters, called the stat filter, just gives you data about that stream. In our case we're looking at a couple of pieces of data; we decided to look at the frequency, to see if there was any frequency drift. In Timesys's lab we didn't detect any frequency drift. In my lab, I don't know if I have a crummy connection, but I did have some frequency drift, so I had to add a threshold to decide whether to ignore it. The audio sounded fine to me, but the tonal analysis came back with a difference in the frequency, and it turns out there are a couple of weird harmonics; maybe I've got an ungrounded audio cable or something.
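To illustrate the sox side of that, a drift check along these lines can be written as below. This is a hedged sketch, not our exact test code; the file names and the threshold value are illustrative assumptions:

```sh
# Hedged sketch: compare the dominant frequency of the played vs.
# captured file using sox's "stat" effect, which prints a "Rough
# frequency" line (on stderr, hence the 2>&1). The 2 Hz threshold
# is an illustrative value, not the one from our actual test.
played=$(sox played.wav -n stat 2>&1 | awk '/Rough.*frequency/ {print $3}')
captured=$(sox captured.wav -n stat 2>&1 | awk '/Rough.*frequency/ {print $3}')
drift=$(( played > captured ? played - captured : captured - played ))
if [ "$drift" -le 2 ]; then
    echo "PASS: frequency drift ${drift} Hz within threshold"
else
    echo "FAIL: frequency drift ${drift} Hz exceeds threshold"
fi
```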
Now I'll turn the time over to Harish to talk about the other major thing we did this year, which is the web app and CI integration we've been working on.

So this year, on top of board-farm-infrastructure-independent tests, one extension we are working on is how to use these APIs to set up infrastructure-independent continuous deployment and continuous test pipelines, pipelines which can perform automatic device provisioning and test execution.

Let's see one such example: testing the web UI of an application running on a board connected to the board farm, where Jenkins is used as the CI tool. This is the test environment setup. The Raspberry Pi 3 board is connected to a Zombie. A Zombie is Timesys's lab controller hardware, where the devices under test are physically connected. There can be multiple Zombies, and all Zombies are controlled by the EBF server software, which runs on a centralized remote machine. There is a Jenkins setup which has the board farm command-line tool, the ebf CLI, installed to perform operations on the remote board. The application under test is an address book application written in Python; it runs on port 1990 of the Raspberry Pi 3 board. In the Timesys board farm the devices under test are connected to the Zombie's private network, and any application running on a device port is not accessible outside the Zombie's network. So here we forwarded port 1990, the web app port of the device, to the Zombie's port 1994, so that from the Jenkins node this application can be accessed using the Zombie's IP address and the forwarded port 1994.

This is the test pipeline. Each block represents a pipeline step, and it runs from left to right. The first step takes the application under test and publishes it on the Jenkins interface. Next is the device reservation step, which reserves a device connected to the board farm for this pipeline. The app deployer does the provisioning work: it reboots the board, verifies that the device has booted up, then downloads the application build to be tested from the app publisher step and transfers it to the device. The test runner then starts the web application on the device, runs the test groups, and collects the test results. Finally, the device release step stops the web application on the device, releases the device so that other users or pipelines can use it, and then powers off the device.

Now let's see a recorded video of this pipeline in execution. Here we are on the Jenkins pipeline page. All the blocks you see are pipeline steps, and when the Selenium test runner pipeline step runs, you'll see a window on the right showing the in-browser test actions. So here we go: you can see the browser test actions. It ran three test cases: adding a user, editing the user's details, and finally deleting the user. On the left, on the Jenkins page, the colors of the blocks represent their execution state: blue means the pipeline step is still to be executed, yellow means it's under execution, green means successfully executed, and if a pipeline step fails it turns red, and none of the remaining pipeline steps to its right will run. Now let's go and see the test results right from Jenkins: this was our test suite, test_addressbook, and all three tests passed.

The next slide shows the mapping with the pipeline steps. In the first column we have the pipeline steps, the middle column has the corresponding ebf command-line tool commands used in that step, and in the last column we have the corresponding REST APIs. The device reservation step uses the device info command to log the device properties in the Jenkins job execution logs, and it uses allocate to reserve the board farm device for this pipeline. In the app deployer, power reboot is used to power-cycle the board, ssh-run is used to clean up any test-run artifacts left behind by a previous unfinished test job, and ssh-upload is used for transferring the application test build to the device. The test runner uses an ssh-run command to start the web application under test on the device, and in the device release step it's used to stop the application and delete the application source from the device. Then we power off the device and release the allocated device so that some other user or pipeline can pick it up. The steps shaded orange here apply only to the Timesys board farm, since it requires port forwarding; that additional forwarding step might not be required for other systems.
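As a rough sketch, the shell steps behind those pipeline stages look something like the following. The exact ebf subcommand spellings, board name, paths, and the test driver script are illustrative assumptions based on the step-to-command mapping just described:

```sh
# Hedged sketch of the per-stage shell steps in the Jenkins pipeline,
# driven through the ebf CLI. Subcommand spellings, board name, and
# paths are illustrative, based on the mapping described above.

# Device reservation: log device properties, then reserve the board
ebf rpi3 info
ebf rpi3 allocate

# App deployer: power-cycle, clean stale artifacts, push the app build
ebf rpi3 power reboot
ebf rpi3 ssh-run "rm -rf /home/pi/addressbook"
ebf rpi3 ssh-upload addressbook.tar.gz

# Test runner: start the web app on the device, then drive it over
# the forwarded port with the Selenium tests
ebf rpi3 ssh-run "cd addressbook && python3 app.py --port 1990 &"
python3 run_ui_tests.py "http://$ZOMBIE_IP:1994"  # hypothetical test driver

# Device release: stop the app, remove sources, free the board
ebf rpi3 ssh-run "pkill -f app.py; rm -rf /home/pi/addressbook"
ebf rpi3 release
ebf rpi3 power off
```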
So here we confirmed that a Jenkins pipeline can be used for device provisioning and test execution with these APIs. This pipeline could be extended by adding another step at the start which does the actual application build; the whole extended pipeline, which builds the application, deploys it on a board connected to the board farm, and does the test execution, can be auto-triggered per a predefined Jenkins scheduling policy, which could be nightly, weekly, or on every code check-in to the application source. Currently this pipeline works in the Timesys lab with the Timesys board farm software, since it has this additional forwarding step which might not be applicable to other labs; we are working on running a similar pipeline in Sony's lab with Fuego. Now I'll ask Tim to talk about the future roadmap items.

We've shown you a couple of new features, but we're not done yet. Of course, we think this board farm stuff is a good idea, and we want to continue to promote the use of this API and the implementations we've developed. We have some more API ideas for things we're going to do in the future. The one on our shortlist is a keyboard and mouse API. There are a number of different ways of generating keyboard and mouse events on a device under test, things like VNC, or you can use a little board called a Teensy for simulating USB keyboards. We'd like to write an abstraction over those so that we can do tests that require keyboard and mouse interaction. We've also talked in the past about doing CAN bus and USB bus and other buses.

Another area we're really interested in is provisioning APIs, that is, installation and board bring-up, and we're starting to have conversations with other groups about that. Also, Timesys has a client that wants to do this testing all from Windows, so we'll probably be looking at writing a Windows client for the CLI tools. That should be pretty easy, because of the simplicity of the API: if you can write it as a shell script on Linux, you should be able to write it as a Windows client.

We also want to integrate with more existing test suites, for instance LTP and kselftest. We want to add some hardware testing capabilities to those suites; they're all software tests now, as far as we can tell.

And then, next slide: we want to integrate with additional test frameworks. We've shown Fuego, Jenkins, and KernelCI, and we showed the Robot Framework in the past, but we know there are more test frameworks out there. In particular, we want to see how to integrate with additional board farm infrastructure, things like labgrid and GitLab, that we haven't integrated with yet.
We also want to create test pipelines for other types of testing. We've got a lot of hardware testing, but we want to apply these same principles to things like system testing, and desktop application and desktop user-interface testing; now that we've got some familiarity with Selenium, and as we start working on a mouse and keyboard API, we can start doing that.

Then, finally, in an area related to provisioning, we want to look at different boot and deployment mechanisms in these pipelines: whether the board boots with U-Boot or fastboot, or uses NFS, TFTP, or an SD card, or boots via USB, we want to be able to support all those different ways of getting the software under test, and the test materials, onto the board.

And the final thing, of course, is that you've got to use this stuff in production testing in order to shake out the bugs, and especially to refine the APIs. So that's what's next on our agenda, and we'd love to talk to all of you about all of this. This is a recorded talk, but Harish and I should both be available for comments for the remainder of this session. So thank you for your time, and now we'll take some questions or comments.

Yeah, I'm told to switch the HDMI. Here's the chat window; we don't have any questions from chat. Any questions on our board farm stuff from any in-person attendees?

Okay, the question was: does this test framework support a setup that involves a low-capability device? And yes, very specifically it does. One of the reasons the API is structured the way it is, is that we want to run as little as possible on the actual device under test. I have done this with a NuttX board; not these exact same tests, but I have a board in my lab that runs NuttX, which is a non-Linux operating system, and the idea is that I don't have to run a bunch of Linux commands to test things on that board. It does have to have some capability; I'm thinking on the fly here, because I didn't do the audio test on that board, but it is a board that has audio output, so I would have to modify the command that actually starts the audio. Other than that, a lot of the work is happening on other devices in the lab. In my lab I have several audio capture devices, and I'd have to have one hardwired to the output from that small board. So that was the long way of saying yes: it is intended to support testing devices of different capabilities, and even non-Linux devices.

So, yeah, a follow-up. Okay, let me make sure I understand. The question is whether it can also communicate with a device that's a gateway to a low-capability device. I believe so. What you have to have in your lab is something that translates the abstractions we're defining down to the hardware you actually have in the lab. Underneath, Timesys has different stuff in their lab than I have in mine: I've got a bunch of Sony debug boards, which do power control and power measurement, and that's completely different from what Timesys has in their lab. In your case, if you've got a board that's in between, are you testing the gateway device or the low-capability device? Okay.
Okay, so you're testing the two devices talking to each other. I think the answer is yes, but I'd have to see what the test is that you're running. I have all kinds of devices that are non-Linux and that are also acting as gateways. For example, I have a BayLibre ACME board, which is a power measurement device, in my lab, and that works fine with this: I can use it as a target for tests, but I can also use it as a lab resource, one of the devices I'm controlling to capture data from other machines. So I think the answer is yes, but I'd have to see more details to be sure. Yeah, that's the part where you would have to integrate: whichever one you're using as your... oh, we're out of time. But I'll answer your question offline. So thank you, everyone, for coming.