Welcome everyone, thanks for joining. Please allow me to introduce the AGL Extended Test. This introduction will follow the agenda below. The contents, in turn, are the overview and the features. The features mainly include the following four aspects: first, it has wide test coverage; second, it supports multiple test methods; third, it supports free selection of the test suites to run; and last, it can automatically generate unified reports. After that come the topics of how to run a test and how to add a test suite of your own.

First, let's take a look at the overall description of what the AGL Extended Test is. The AGL Extended Test is a supplement to the existing CI tests in AGL. It has a wide test scope, including the kernel, OSS, services, apps, etc., and it helps to ensure the stability of the AGL UCB platform. The position of the AGL Extended Test in the CI pipeline is shown in the figure. The previous CI process was: first release, then build, deploy, and test manually. The current AGL Extended Test covers the three steps of build, deploy, and test, and all of them are executed automatically.

Next, let's take a look at the features of the AGL Extended Test to learn more about it. First, it has wide test coverage. The test objects of its test suites include the kernel, OSS, services, and apps. The test types of its test suites cover functional tests, stress tests, and benchmark tests. Its test suite sources include ptest, Fuego, upstream projects, etc.

Secondly, it supports various test scenarios, as follows: the first is the AGL release test, the second is the code audit test, and the last is verification after code modification. To support the above three test scenarios, we also support a variety of ways to run the tests, mainly the following. The first is to be triggered automatically by an AGL release; this method is integrated into the AGL CI, which supports testing on various boards. The second is to be triggered by a manually added LAVA job; in this case, a LAVA job definition has to be edited manually to run the test. The third way is to run the test completely manually.

The AGL Extended Test also supports free selection of the test suites to run; you can specify a group of test suites you want to run. There are many ways to classify a test suite. According to the test object, there are kernel tests, OSS tests, etc. According to the test type, there are stress tests, function tests, etc. According to the source, there are pytest-based tests, ptest, etc. You can also customize a group as you wish, or you can specify a single test suite to run, which is useful when you want to investigate the reason for a test suite's failure.

In addition, it can automatically generate unified reports. After we start the test, the test framework will run each test suite in turn and analyze the log of each test suite to generate a test report. After all of the test suites have been executed, a summary report will be generated, and all test logs and reports will be packaged into a single zip file. Finally, the test reports will be uploaded to the AGL shared directory.
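To make that report flow concrete, here is a minimal sketch of how such a framework could run each suite in turn, render a per-suite report from its log, write a summary, and package everything into one zip. Every name, path, and report layout here is an illustrative assumption, not the framework's actual implementation; the agl-test invocation merely stands in for a per-suite runner.

```python
# a minimal sketch of the described flow; all names, paths, and formats
# are placeholder assumptions, not the real framework internals
import subprocess
import time
import zipfile
from pathlib import Path

SUITES = ["openssl", "afcomo"]        # placeholder suite names
OUT = Path("/tmp/agl-test-reports")   # placeholder output directory


def run_suite(name):
    suite_dir = OUT / name
    suite_dir.mkdir(parents=True, exist_ok=True)
    # run one suite and keep its raw log
    result = subprocess.run(["agl-test", name], capture_output=True, text=True)
    (suite_dir / "log.txt").write_text(result.stdout + result.stderr)
    status = "PASS" if result.returncode == 0 else "FAIL"
    # a real framework would render report.html by analyzing the log
    (suite_dir / "report.html").write_text(f"<h1>{name}: {status}</h1>")
    return status


def main():
    statuses = {name: run_suite(name) for name in SUITES}
    lines = [f"{name}: {status}" for name, status in statuses.items()]
    (OUT / "summary.html").write_text("<pre>" + "\n".join(lines) + "</pre>")
    # zip name built from version, board type, and timestamp,
    # as described in more detail later (placeholder fields here)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    zip_path = OUT / f"agl-test_v1.0_qemux86-64_{stamp}.zip"
    with zipfile.ZipFile(zip_path, "w") as zf:
        for path in OUT.rglob("*"):
            if path.is_file() and path.suffix != ".zip":
                zf.write(path, path.relative_to(OUT))


if __name__ == "__main__":
    main()
```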
The current shared directory looks like this. The following is an example of the test report generated by the framework after a test run. As you can see, there are two pictures. The picture on the left is the summary report of all the test suites performed. The status indicates whether the test was successful or not, and the information on the second line gives the total number of test suites performed in this test, as well as the numbers of passed, failed, and skipped test suites. Below that, the status of each test suite in this test is listed in detail. The picture on the right is a test report generated after the execution of a single test suite. Let us take the test report of the OpenSSL test suite as an example. The status information on the first line indicates whether the test suite passed or not. The numbers on the second line indicate the total number of test cases performed in this test suite, as well as the numbers of passed, failed, and skipped test cases. The result of each test case is listed in detail after the execution.

Next, we enter the part about how to run the test. The first method is to be triggered automatically by an AGL release. The release triggers the test, and the image to be tested is then automatically compiled in the Docker compilation environment. During the compilation, the agl-test test suites are compiled into the root file system. In the LAVA test environment, the newly compiled image is used to run the test. Once the test run is completed, the test framework automatically analyzes the test logs and generates the test reports. The test reports are then uploaded to the AGL shared directory. The address of the AGL shared directory is given below, and its directory structure is shown in the figure as well. When we submit files to the shared directory, a subdirectory is created under it, named according to the timestamp, to store the files. All reports generated during the test are packaged into a zip file, whose name is dynamically generated from the version number, board type, and timestamp of the test image. The content of the zip file is shown in the diagram, and it mainly includes a summary report of all the test suites, plus one subdirectory per test suite. In each subdirectory there are a log.zip file and a report file: log.zip contains the original logs generated while the test suite was performed, and report.html is the test report for that test suite. Currently, only reports in HTML format are supported, and we will consider generating reports in other formats in the future.

The second way to run the tests is by manually editing a LAVA job definition. In this mode of operation, two steps are performed manually. Step one requires the tester to manually compile the image to be tested. In step two, you need to manually edit a LAVA job, and then you can perform operations such as running the tests. Let's take a look at the detailed steps. The first step is to manually compile the image that you want to run the test on. Here are the general steps of the compilation. It should be noted that the feature agl-test has to be added when executing the source command, so that the content related to agl-test is compiled into your image. After the compilation is complete, you need to upload the compiled image file to the shared directory, which will be used by LAVA later.

While running the test, we use the LAVA test framework. The overall structure of LAVA is shown in the figure, and it includes three parts: master, worker, and DUT, where the DUT is the device you want to test on. To run the test, we need to manually add a LAVA job definition, which we do on the page shown in this figure. For the details of adding a LAVA job definition, you can refer to the official LAVA website. We also have an example LAVA job you can refer to; a simplified sketch of such a job definition, in three fragments, follows below.
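This sketch only illustrates the shape of such a LAVA job; the device type, URLs, and the path of the test definition are placeholders, not the actual values of our setup.

```yaml
# a simplified LAVA job definition sketch; names, URLs, and paths are placeholders
device_type: qemu                  # fragment 1: device type and job name
job_name: agl-extended-test-example
priority: medium
visibility: public
timeouts:
  job:
    minutes: 60
  action:
    minutes: 15

actions:
  - deploy:                        # fragment 2: location of the image files to test
      to: tmpfs
      images:
        rootfs:
          image_arg: -drive format=raw,file={rootfs}
          url: https://example.com/shared/agl-demo-platform.ext4.xz
          compression: xz

  - boot:
      method: qemu
      media: tmpfs
      prompts: ["root@"]

  - test:                          # fragment 3: location of the test definition (YAML file)
      definitions:
        - from: git
          repository: https://git.automotivelinux.org/src/qa-testdefinitions
          path: test-suites/agl-extended-test.yaml   # hypothetical path
          name: agl-extended-test
```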
In fragment one, the device type and the job name are defined. Fragment two defines the location of the image files to be used in the test, that is, the location where the image you just compiled has been placed. Fragment three defines the location of the test definition you want to run, that is, the location of the YAML file.

Next, let's take a look at the details of the YAML file. The YAML file has already been added to qa-testdefinitions, so you can just reference it in the LAVA job definition. The YAML file calls the script of the AGL extended test, which is the file shown in the right figure, and this script performs the specific test operations. This file has also been added to qa-testdefinitions.

The last execution method is to run the test manually, without using the LAVA test framework. The process is divided into four steps. The first step requires you to manually compile the image to be tested. Step two is to deploy the image on the board. Step three is to run the test on the board. And lastly, you need to check the test report on the board. Next, let's take a look at the details of these steps.

The first is to compile the image you want to test; for this part, you can refer to the previous section. For step two, you will need to deploy the compiled image onto the board where you want to run the test. For the specific deployment method, please refer to the AGL documentation. When your image is deployed, you can observe the following structure in the root file system: under /usr/bin there is the agl-test command, and under the directory /usr/agl-test are all of our test suites.

After confirming that the deployment is correct, you can run the test on the board. All of the test suites can be executed by running the command agl-test. If you want to run a single test suite, it can be executed with the command agl-test <test_name>, where <test_name> should be replaced with the name of the test suite you want to run. If you want to print a detailed log at runtime, you can add the -vs parameter after the agl-test command. And if you want to know more about agl-test, you can execute the command agl-test -h. After the test has been executed, there are two ways to view the test results: one is to view the log on the console, and the other is to view the report on the board. The location of the report file on the board is shown in the figure; it is under the directory /var/run/agl-test/logs.

Now you know how to run a test, and you may want to add your own test to the existing test framework and run it with the agl-test command. The location to add a test in the root file system is shown in the following figure: it is under /usr/agl-test. You need to create a test directory for your own test and add the test files into it. Take the module test for afcomo as an example: briefly, the afcomo subdirectory is created first, and then the run_tests.py and __init__.py files are added into it.

Next, let's take a look at the details of these two files. You can see the test script run_tests.py for afcomo below. In the test file, the import of pytest must be present. This is a simple test; it has only two test cases. The first test case checks whether the module afcomo is installed on the board, and the other checks whether the module is being used or not. The __init__.py file is an empty file; it is required by the test framework. Now, after adding the two files, we can run the test. The execution result of this simple test is shown in the figure below, and we can see that both test cases are executed successfully.
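For reference, here is a minimal sketch of what such a run_tests.py could look like. The module name afcomo and the helper function are illustrative placeholders; the actual script shown in the figure may differ.

```python
# a minimal sketch of run_tests.py for the afcomo module test
# ("afcomo" and the helper below are illustrative placeholders)
import subprocess

import pytest  # the test framework requires pytest to be imported


def _lsmod_entry(module):
    """Return the lsmod fields for the module, or None if it is not loaded."""
    output = subprocess.run(["lsmod"], capture_output=True, text=True).stdout
    for line in output.splitlines()[1:]:  # skip the "Module Size Used by" header
        fields = line.split()
        if fields and fields[0] == module:
            return fields
    return None


def test_module_installed():
    # test case 1: the afcomo module is installed (loaded) on the board
    assert _lsmod_entry("afcomo") is not None


def test_module_in_use():
    # test case 2: the module is being used ("Used by" count is non-zero)
    entry = _lsmod_entry("afcomo")
    if entry is None:
        pytest.fail("module afcomo is not loaded")
    assert int(entry[2]) > 0
```

The accompanying __init__.py stays empty; after placing both files in their subdirectory on the board, the suite can be run with the agl-test command as described above.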
Now you should know how to add a test suite of your own. And if you like, you can also submit your files to our open-source repository; you are very welcome to contribute. The repository link is as follows. Thank you, that's all I wanted to talk about. Thanks for listening.