First we check whether package management is being used in the target image at all, because that's optional, and then we check if smart is installed in the image. Having checked those, we're not going to skip, and so we've got this function; I'll come back to that. We've got some basic tests here: we just check that we can run smart with --help and a few other options, query whether a particular package is installed, et cetera. Then we get to something a bit more interesting. In this test, we set up an HTTP server running on the test machine which serves our deploy directory, which contains all of the packages we built during the build of the image plus any additional packages you've asked to be built. Then we tell smart to add a channel that points to that HTTP server we've set up, do an update to fetch down the list of packages, run some more channel tests, install a package from that feed, download a package and install from the file, et cetera. So we're able to test all of the basic functionality of package management just through this. As you can see, the tests are not really that complicated, so we hope that means they'll be easy to write. I'll show a rough sketch of what one of these runtime tests looks like in a moment.

In addition to the automated testing, we've got a couple of other features worth pointing out related to automated testing. First, we have ptest. This was introduced in the previous release and extended in the current 1.5 release, and it was developed primarily by Björn Stenberg and the team at Enea in Sweden. Basically, it runs the tests that are supplied with an upstream piece of software. Pretty much any piece of open source software you get, well, you'd hope most of them come with a test suite out of the box, so this provides you with a clean way to run those tests on the target and get back your results. We provide for building all of those tests, installing them onto the target as a group, and running them using a run-ptest script. We take the output from that and try to coalesce it into a standard form that reporting tools can understand, rather than having to understand the output of all these various different testing systems. We haven't got too many enabled at the moment; we've got about 13 recipes that have this enabled, things like dbus and glib, and we're hoping to add more in the future.

The next thing worth highlighting is our autobuilder. Not only do we use the autobuilder ourselves to run our builds, do our releases and run our regular continuous integration, you can actually download it yourself. It comes out of the box ready to run builds and run the tests, and you can use it for your own purposes. It's based on Buildbot, so it's a pretty straightforward, standard piece of software that you can customize as you need. Other continuous integration systems like Jenkins are being used in the community, so you can use those, but this one is something that we provide and use regularly, so it's available if you need it. We're in the process of extending the documentation for this so it's a little bit easier to use; there is some documentation there now that you can have a look at. Based on mailing list traffic, I think we're starting to see other people making use of this, not just us, and that's really good to see. So thanks to Beth Flanagan, who's working on that within the Intel Yocto team.
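Coming back to those runtime tests for a second, to give a rough idea of what one looks like, here's a minimal sketch in the style of the Python unittest-based tests described above. It is not the framework's exact API: the target address, the run_on_target helper and the specific smart commands are illustrative stand-ins for however the real framework reaches the booted image and what it actually checks.

```python
import subprocess
import unittest

# Illustrative only: the real framework knows how to reach the image it has
# just booted (a QEMU instance or a board); this address is a placeholder.
TARGET = "root@192.168.7.2"

def run_on_target(cmd):
    """Stand-in helper: run a command on the booted image over SSH and
    return (exit status, combined output)."""
    proc = subprocess.run(["ssh", TARGET, cmd], stdout=subprocess.PIPE,
                          stderr=subprocess.STDOUT, text=True)
    return proc.returncode, proc.stdout

class SmartBasicTests(unittest.TestCase):
    def test_smart_help(self):
        # Basic check: smart is present and responds to --help
        status, output = run_on_target("smart --help")
        self.assertEqual(status, 0, output)

    def test_smart_query(self):
        # Query a package we expect to be in the image (busybox here is
        # just an example) and check it shows up in the output
        status, output = run_on_target("smart query busybox")
        self.assertEqual(status, 0, output)
        self.assertIn("busybox", output)

if __name__ == "__main__":
    unittest.main()
```

The more interesting test described above follows the same pattern, just with more steps: start an HTTP server on the build host serving the deploy directory, point smart at it as a channel on the target, update, and install a package from it.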
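And on the standard form that ptest results get coalesced into: the convention is one result per line, prefixed PASS:, FAIL: or SKIP:, followed by the test name. Here's a minimal sketch of the reporting side, just to show why that form is easy for tools to consume; it's illustrative, not the project's actual reporting tool, and the test names in the comments are made up.

```python
import re
import sys

# The standard form mentioned above: one result per line, for example
#   PASS: glib/gvariant
#   FAIL: dbus/test-bus
#   SKIP: glib/timeout
RESULT_RE = re.compile(r"^(PASS|FAIL|SKIP): (.+)$")

def summarise(lines):
    """Count results in run-ptest style output and collect the failures."""
    counts = {"PASS": 0, "FAIL": 0, "SKIP": 0}
    failures = []
    for line in lines:
        m = RESULT_RE.match(line.strip())
        if not m:
            continue  # ignore any other chatter from the test suite
        counts[m.group(1)] += 1
        if m.group(1) == "FAIL":
            failures.append(m.group(2))
    return counts, failures

if __name__ == "__main__":
    counts, failures = summarise(sys.stdin)
    print(counts)
    for name in failures:
        print("failed:", name)
```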
And yeah, if you're working on the core build tool, BitBake, it's probably worth knowing about this tool as well: bitbake-selftest. If you're working on the fetcher code or any other piece of BitBake itself, there's a little bitbake-selftest utility with a few tests in there to do with fetching, parsing and the data store. It's maybe not something that someone building an OS would use very often, since you probably tend not to do work on BitBake itself, but if you are, it's useful, and we run it on basically every build we do, just to make sure that nothing has regressed there.

And finally, we have a couple of other tools in our arsenal. There's test-reexec. BitBake is a task-based build system: we record when a task has run so that we don't need to run it the next time. But it's possible that, if some of the inputs to a later task have changed, that task gets re-run without the task that normally runs immediately before it being re-run first. For example, you might need to re-run the install step without having re-run the compile step immediately prior to it. In that case, if the install step, say, moves a file to another location and that file doesn't exist the second time around, then it's going to fail. So in order to pick up those kinds of issues, which you won't necessarily hit on the first build but which can come back to you when you make a change and build a second time, we've written this script. Again, it's probably not something that someone who's just building an OS is going to use much themselves, but if you're doing deep customization and you're including a number of your own recipes, then you should probably be aware of those kinds of problems, and if you need to, you can run the script and pick them up.

Along similar lines, there's a test-dependencies script, which was introduced in 1.5 and contributed by Martin Jansa, and it's been really useful for finding issues to do with auto-detected dependencies. It's quite common, particularly in autotools-based bits of software, that they will go off and look around the system to see if a particular library exists, and if that library exists they will enable some bit of functionality in the piece of software they're building that uses that library. That's great if you're just a developer building that piece of software on your machine and that's the only place you're going to use it, but in a build system like ours, where we want reproducible and consistent builds, we don't really want that auto-detection to happen, or at least we want it to be under our control. If you don't pay any attention to this, you can find that you build your recipe once and use the packages that come out of it, and then you build it again, and because you're now building it after some other piece of software has been built, the output packages are not quite the same.
So what test-dependencies does is run through a number of build cycles (it takes quite a while) and highlight any of those kind of floating dependencies that have crept in. Now that we've had the script, Martin has run it over a wide range of the recipes that we have, and some of the ones in other layers as well, and we've eliminated a huge number of those dependency issues. Quite often in the past you would come up with some odd error because, in the middle of building one recipe, another recipe which thought it could depend on it suddenly found that the library was no longer there; we can eliminate failures like that using this script. We probably should be running these two scripts for each release, and I think we will in the future.

So that's the stuff that we have already done; I thought I might tell you about some of the plans we've got for the future. To do with the runtime tests, obviously we can run those tests on a QEMU image, with QEMU on a host machine, but really, for testing people's images, they'd want to run them on their own hardware and do regression testing on that as well. There are a number of complexities here. Our usual approach is to try and keep it simple, so we want to be able to scale from someone who's got a single board, or maybe a couple of boards, connected to their build machine and just wants to run the tests on those to get started, right up to people who have a rack full of boards of different varieties and want to have multiple autobuilders connecting to those machines and running tests. We're still not quite sure how this is going to work; we're still having some discussions. Really, one of the reasons I wanted to get up here and talk to you today was to get some feedback. I'm sure all of you out there who are building embedded products have got your own testing systems, your own scripts that you've written. We really want to make sure that if we're building something, it works for you, so it would be interesting to hear how things work for you and how you would like them to work in a system that we were building.

The other thing we want to do is integrate ptest, running these upstream test suites, within the automated runtime test framework as well. The challenge there is that with a lot of these tests that come with upstream bits of software, you're pretty much always going to get some failures out of the box, because there'll be some optional bit of functionality that we're not using, or it doesn't quite work on an embedded platform, or whatever. We want to be able to ignore those failures, but at the same time we don't want to ignore all failures, in case we regress on some particular test case. So we're going to need to have some filtering of the results and make sure that we're able to tell the difference between a failure that we know about and a failure that we don't know about. We're still working out how that will work, but I think it won't be too long before we have a proper solution for it.

There's also a number of runtime tests that we could add to the system; we want to be able to test a broad range of the bits of software, the recipes, that we have in the system.
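Coming back to the known-failures filtering for a second: we're still working out exactly how it will be done, but just to make the idea concrete, here's one possible shape for it, namely keeping a per-recipe list of failures we already know about and only reporting failures that aren't on that list as regressions. The recipe and test names below are made-up examples, and this is a sketch of the idea rather than the project's decided design.

```python
# Hypothetical per-recipe list of failures we already know about and accept,
# e.g. tests for optional functionality we don't build on this image.
KNOWN_FAILURES = {
    "glib-2.0": {"gvariant/endian"},
}

def new_failures(recipe, failed_tests):
    """Return only the failures that are not on the known list for a recipe."""
    return set(failed_tests) - KNOWN_FAILURES.get(recipe, set())

if __name__ == "__main__":
    # Made-up results from a ptest run for the glib-2.0 recipe
    failed = {"gvariant/endian", "gdatetime/dst"}
    regressions = new_failures("glib-2.0", failed)
    if regressions:
        print("New failures (treat as regressions):", sorted(regressions))
    else:
        print("Only known failures, nothing new to report")
```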
In particular, I think we will look to test things like the Piglit tests to exercise our GL drivers, making sure that all of the OpenGL functions work properly. We also want to run an actual GUI test: instead of testing just a simple command or whatever, we want to be able to automate some piece of X software, or maybe something that's written in Qt, and make sure that that's functioning properly, so we get broader coverage of the bits of software that we're providing.

The other major area we want to cover is non-runtime tests, that is, testing the build system itself. This would enable us to automate a lot more of the manual tests that we're doing at the moment: doing things like changing inputs to the build system, adding appends, changing recipes, changing configuration variables, and making sure that the corresponding change occurs in the output. We're going to call this script oe-selftest. Again, it will be a simple thing that the autobuilder can run without having to know the details, and we'll get a report out at the end to make sure that everything is correct. There's actually a proof of concept from one of the QA guys out on the mailing list for that. But this has a much wider scope in terms of the kinds of tests we want to run than the runtime tests, so that's why we've left it until the next release; we expect to have something out there in 1.6 for it. That will be testing things like the bitbake-layers tool, testing installation of SDKs and making sure that the compiler provided in an SDK works, that sort of thing.

Just finally, I thought it was worth mentioning, at least anecdotally, that as we've been doing this there's been a social aspect to the whole thing as well. The relationship between a QA team and a development team can sometimes be a little bit strange; sometimes it's a little bit us versus them, and, you know, they might say, oh, you make it and I break it, that sort of thing. I think we've seen a lot more communication between our QA team and our development team through them being able to work closely together and cooperate on a single project, and we certainly expect that to continue with our future work. So it's not just us supplying them with something and them telling us whether it works or not; it's more of a collaborative thing. This was actually the first time that our QA team worked together with the development team on a particular project, so that was a useful thing.

So, basically, in summary, I hope you've seen that we've introduced a new testing framework. We're not just improving and maintaining the build system and helping improve the quality of our own system; we're helping you to improve the quality of your builds and allowing you to run the tests on your systems that you need to run. So really, it's more of a focus on the quality of the build system and the quality of the output. Give it a try, send us some feedback and let us know what you'd like to see. Get involved, I guess. So, yeah, any questions?

Yeah, Peter. So, the question is, have I thought about running unit tests with BitBake? Well, bitbake-selftest is basically a unit test for BitBake; is that what you're looking for? Right. That's exactly what ptest is. So maybe I didn't explain it too well. Right, yeah.
I think the way we might approach that is in the ptest package: basically, with ptest you can define whatever you want when you're doing a ptest build of the software, so you could build it in a different way and store the alternate version of that software within the ptest package, and then when you select to install ptest, you'll be running that version instead. So I don't know if we necessarily address that problem directly. Peter, it's not something I've been directly involved in, but certainly that's worth us noting. Is that something you're noticing in a number of software packages or just a few? Okay, all right. You don't want to test the libraries, you typically just test the way it calls the libraries. Right, okay. Yeah, we'll definitely see if we can address that; I think there's a way we can. Other questions?

Alex? Just listening to what you're saying and to what Staffel was saying yesterday: once you can actually do the testing on real hardware, it seems it would be quite useful, certainly to us, to use that to test the hardware for subcontract manufacturing QA reporting; it seems to be quite a good foundation to build on to be able to do that. Is that something you've considered? So yeah, the question is, have we thought about testing the hardware itself, a sort of qualification of the hardware? I guess for that we would obviously need to have the software which is being used to test the hardware, but I wouldn't have thought that would be a particularly challenging thing for us to do: as long as we can cross-build that software, run it on the target and look at the output, I would think that would be pretty straightforward. And certainly, once we're able to test on real hardware, there's no reason that couldn't extend to making sure the hardware works, yeah, absolutely.

On the question of real hardware, can you say any more about the actual roadmap? For simple tests, would that be in 1.6? Definitely; we would absolutely have a basic ability to run tests on real hardware in 1.6. I don't know if we're going to get the full framework for, you know, the huge test rack completed in 1.6, but definitely you'll be able to run on a nominated piece of hardware. So it's in some ways in between simple and advanced: as long as you're only testing one piece of hardware, the simple case would be enough, in the sense that you could do quite a lot of testing on that one piece of hardware? Yes, yeah. Thank you. Because, particularly on the ARM side, because you've got the flexibility in the IP, you really want to test on real hardware; QEMU, well, it's good for testing QEMU, but sometimes it doesn't really get there for testing the hardware. So that sounds interesting. Yeah, yeah. We certainly hope to enable all the tests that people need to be able to run on their hardware.

Other questions? Yeah, you know, I had a quick look at LAVA. I wasn't able to find a lot of documentation on it, and it did kind of seem like it was quite oriented to running on the Ubuntu/Launchpad sort of framework, whereas we try to be quite a bit more flexible, trying to run on any distribution and not be too tied down. But certainly they're doing the same kinds of things that we would look to be doing.
I think what we've done so far is the stuff that we had to do ourselves; we weren't rewriting something that other people are already providing. So now we're into a phase where we're going to try and figure out where to go next, and certainly we would be looking at how other systems work, and particularly how well they solve people's problems or otherwise. So, yeah. Sure.

Hi, would it create a test report? So, can I run the tests and then put the results into a form that's compatible with your test report, something like that? Right, sorry, the question is, can you produce reports as an output? What we get back is a log; it's the typical kind of log that you get out of Python unittest, which just says pass or fail. You can certainly send that out, reuse it, examine it, but normally the output of the test is just whether the thing succeeded or not. Certainly, though, I think reporting is something we'll be looking at extending in the future.

Yes. So, the question is, if you've got multiple binaries produced as part of your unit testing, will the system be able to handle that? Right, so, yes: because with ptest, in the recipe that is deploying a bit of software, you define what should be run when it's actually running on the target, so whether it's multiple binaries or just one, you're in control of that. That wouldn't present any problems. It really depends on what output it produces; you might have to wrap it in a script, I think, probably.

Any other questions? Yes. You use this in the CI for the Yocto Project itself, right? Yes. How do you select the images, the image types, that you want tested? So, in the autobuilder, there's a stage beyond where it builds the image, where it runs what's termed the sanity test, which is exactly this. I think it's hard coded; I'm not sure, I'm not an expert on the autobuilder, but I can certainly put you in touch with someone who is. Yes. But then this is exactly where problems may occur, because of missing dependencies, or because something doesn't build correctly if a certain package is not present. Right. So, unless you try combinations, you're not going to catch that kind of missing dependency. Well, I guess I would say that the base images we provide and test on the autobuilder are about as broad as we can get in terms of the recipes that we have within the base system. And obviously, when you're running the autobuilder on your own things, you're going to be able to nominate which images get built, so you'll also be able to nominate which ones get tested. Sorry, does that answer your question?

Yeah. Well, as an alternative, here's a suggestion. What we do in Buildroot is we make random builds: randomly select packages and build those. Now, we don't do anything further by way of testing, so it's all build testing anyway, but this could be something you could add on your side: switch on randomly selected packages and see if the result still works. Right. Actually, that brings up an interesting point. One of the other things that I didn't mention that we would like to add in future, and it could be 1.6, is a thing which is being called a world image. We have the ability to build world, which is every bit of software that we support, right? And the next thing is to try and install all the packages that are produced into one huge image.
I don't know if we could do that, or whether we'd just pick a random set of packages or whatever, so that might be something that would solve that particular problem. But yeah, certainly, with all of this stuff, it's one thing to be able to build it and it's another thing to be able to actually use the output packages in some form. So yeah, it's a good point. Anyone else? Okay. Thank you very much.