Good morning, everyone. My topic today is continuous integration practices for Zephyr upstream development. I come from NXP Semiconductors, so some people may wonder: as a chip company, why would you use cloud or Docker technologies? CI and DevOps are mainstream in software, but not yet in the chip world. This is the abstract of my talk: we adopt several open source Docker images, put them together with some glue scripts, and with a few components and some hardware we can deploy the whole system. Based on our experience, you can build such a CI system with minimal supporting resources and highly reliable quality tracking, and the tests become available within about five minutes.

Zephyr itself has Shippable as its CI framework. Shippable is not publicly available, which means it is not open for customization or for general users, because a license is required. It is also very hard to dynamically change the test sets. And you will find that a lot of the test scripts and test assets are mixed into the same Git repo. For us this is not acceptable, because when we release a BSP to customers, we do not want to expose the underlying layers or other unnecessary internals. The test code is also not of very good quality, because it is tied to a commercial product, so connecting your DevOps architecture to such a commercial service causes trouble. Even if you have support from the vendor, it actually creates more problems.

And for a chip maker or a board maker, the key issue is that Shippable cannot run on the real board; you can only test on a simulator or an emulator. Maintaining a simulator is easier than maintaining hardware, but realizing full CI there takes more effort and time than testing on the board. And if you only do basic checks on the simulator, a lot of issues will surface later on the real board. This problem has a deep impact on the quality life cycle of real applications.

Usually, when we establish a cloud architecture, we need to take the use case into consideration first. Today's use case is how to upstream the BSP: we take an upstream task, do local debugging and local function tests, and then generate a pull request. The pull request goes through integration tests, and after the integration tests complete, we finish the upstream. This is a standard use case. If you have any questions, you can interrupt me at any time.

From this use case we derive some requirements. The first is test on request: there is no fixed test set; the test set depends on the feature you are developing, so testing happens on request. The second is common and user-friendly technologies: the system should not demand a tough learning curve for unfamiliar technologies or difficult languages; you should have easy access to technologies you can already use in your programs. The third is that the whole invocation process must be stateless, because you have a lot of tasks to run: you need to set up the board and run a hundred cases, and in an extreme example you may need to test your application on more than 1,000 boards. If the process has state, you need to switch from one state to another, which requires human intervention and a lot of human effort to maintain. That is why we require a stateless process: as long as you send a simple request, you get what you want, and you do not need to care about the process in between.
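To make the stateless idea concrete, here is a minimal sketch, not our production code, of what a self-contained test request could look like; the field names and the board name are hypothetical:

```python
import json

def make_test_request(board: str, commit: str, tests: list) -> str:
    """Build a self-contained test request.

    Everything needed to build, flash, and judge the run travels inside
    the request itself, so the services that consume it do not have to
    remember anything between steps (stateless invocation).
    """
    request = {
        "board": board,    # which kit to flash (example name below)
        "commit": commit,  # which revision to build
        "tests": tests,    # the on-request test scope
        "notify": "email", # how to report back when done
    }
    return json.dumps(request)

# The developer (or the PR hook) drops this on the task queue and walks
# away; no follow-up interaction is required.
print(make_test_request("frdm_k64f", "abc1234", ["kernel.timer", "drivers.uart"]))
```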
Based on the use-case analysis, we come up with system-level requirements. First, we need a way to configure the test scope. Second, we try to adopt open source frameworks. Third, we need a task scheduler to keep the process stateless. Fourth, we need a reliable flash mechanism: to flash, you need some equipment to control your board, and whether a given flash succeeded or not must be detected reliably, so this part has to be carefully designed. Fifth, we need result processing: developers can build whatever application they want, and the test judges the result from the signals in the serial output, which keeps the limitations on our developers to a minimum. And last but not least, we need a storage system to store the logs; what we use is an FTP Docker container, so the test results can be stored there.

This is the architecture of the CI system. There are two layers: the user interface and the services layer. All the services are transparent to users. The user interface is centered on the SCM, with the Blue Ocean dashboard to check all the services and collect their output. The pieces in the middle, the binary artifacts and the serial logs, we put on the servers. This kind of architecture follows directly from our requirement analysis.

These are the Docker containers we are using: a Jenkins container, which is the CI system with its database; an FTP container; a Zephyr build container, through which all compilation is done; and a task queue container. When you adopt open source frameworks, it is better to write less code in a customized framework, because open source gets a lot of input from outside and its lifespan is very long. If you create some new framework, that leads to a very complex system that is difficult to understand, and it will take you a long time to explain to others why you did it like that.

The deployment of the whole system is very simple. We need an SCM system, either GitHub or Bitbucket. Then we build a local cloud, or rent cloud machines (ECS), and deploy all the containers on it. Then we need a flash machine; our boards are supported by pyOCD, and we get the logs via UART. The whole process is very simple, nothing advanced. For the cloud we can use open source pieces, or, once the requirements are very clear, other solutions; it depends on the evolution of your program.

The next part is something special, because if you glue all the open source stuff together, you need to know how to glue it. This is about how to connect the Docker containers. Some people suggest using iptables and IP or socket communication, but after our analysis we think that is unnecessary. We use Docker shared volumes: all containers mount the same volumes, so everything can be shared on one machine. For the inter-operation of containers, you can use docker exec: we mount the Docker socket into each container, and once it is mounted, any container can invoke functionality in the other containers, with everything going through the Docker socket. From this we keep the inter-operation simple. There is also the concept of convention over configuration: once you have defined the convention, following it is very efficient in practice.
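As a sketch of this glue, assuming each container is started with the shared volume mounted at the same path and with /var/run/docker.sock plus the docker CLI available inside, one container can drive another like this; the container name, board, and paths are made up:

```python
import subprocess

SHARED = "/shared"  # the volume every container mounts at the same path

def run_in(container: str, *cmd: str) -> str:
    """Invoke a command inside a sibling container via `docker exec`.

    This works from inside a container too, as long as the Docker
    socket (/var/run/docker.sock) is mounted into it; all calls then
    travel through that socket instead of ad-hoc IP/port plumbing.
    """
    result = subprocess.run(
        ["docker", "exec", container, *cmd],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# Ask the build container to compile, leaving the image on the shared
# volume where the flash service picks it up by naming convention.
run_in("zephyr-build", "west", "build", "-b", "frdm_k64f",
       "-d", SHARED + "/build", "samples/hello_world")
```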
Another thing I mentioned is that we need to establish the test set. If you have studied the Jenkins pipeline file, you will find that every CI system, Shippable, Jenkins and the like, has its own pipeline schema. We hope developers do not need to learn that schema, so it was necessary for us to develop some simple tooling. For example, if developers want a test, we only need them to tell us what they are going to test and where the test lives; as for the compilation, how it is run, and how the log is collected, they do not need to know. With plain Jenkins you would have to describe the whole process: define the setup, define the run, and define how to get the result. So we developed a script: once your app is ready, you run the script and it generates the pipeline file; you just define what you want to test. You only need knowledge of YAML, just its overall structure. YAML is very popular and easy to understand compared to XML or the Jenkins syntax; it has a lot of advantages, and if you need a local configuration format, I really recommend it. We did add a customization layer on top of YAML, because plain YAML does not allow include or exclude; the additional layer lets us do some basic set operations on the test scope per board. And the overall system stays scalable: the specific grammar and scripting are all wrapped inside that layer, so you just include what you need.

The next part is the flash system. As I said, to support development on any of our boards, the programming has to be customized, because we are the chip supplier; nobody else would know how to program our parts, so we need to provide a framework to users. Besides the user interface programs, we have developed PYMCUTK, a toolkit to program all of our company's chips; it supports IAR, Keil, GDB, and MCUXpresso. We call the command lines of these tools, we can verify the image after download, and we can also use J-Link or similar debug probes to reset the board. All of this is wrapped in the scripts, and it is already open source.

With all these pieces defined, we can complete a whole BSP upstream process. There are two parts, one manual, one automated. For the manual part, you develop the feature on your branch, do your own tests, and then create a pipeline configuration file: you just define what you want to test, which functions of your development need which kind of testing, and then you attach that to the pull request. The pull request is picked up by the SCM-to-Jenkins trigger, and from there everything is automated: Blue Ocean triggers a test request accordingly and generates the build command; the build automatically uploads the image to the FTP server; then the run command pulls the binary, operates on the board, sends the result to Jenkins, and uploads the test log to the server. The user then receives an email, or any other notification you define, to say that the test is done.
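Here is a minimal sketch of that flash-and-capture step, assuming pyOCD's command-line interface and pyserial; the target name, serial port, and buffer size are assumptions, and in the real flow this is wrapped by PYMCUTK and driven from the task queue:

```python
import subprocess
import serial  # pyserial

def flash_and_capture(firmware: str, target: str, port: str, seconds: int = 60) -> str:
    """Flash the board with pyOCD, then capture the UART log.

    `pyocd flash` programs and verifies the image; the serial output
    that follows the reset is what the result processing judges later.
    """
    # Program the image; pyOCD talks to the on-board debug probe.
    subprocess.run(["pyocd", "flash", "--target", target, firmware], check=True)
    # Reset so the freshly flashed firmware starts from the beginning.
    subprocess.run(["pyocd", "reset", "--target", target], check=True)

    # Read the UART until the window closes; this raw text is what
    # gets uploaded to the FTP/log server afterwards.
    with serial.Serial(port, 115200, timeout=seconds) as uart:
        return uart.read(65536).decode(errors="replace")

log = flash_and_capture("zephyr.hex", "k64f", "/dev/ttyACM0")
```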
This is an example configuration. In this example you only need to do three things: first, say which kit it is; second, define the regular expression used to judge the output; and third, give the name of the test file. Only these three things, very simple. I will show a small sketch of this verdict step after this part. When the test is done, it creates a test report. If you are familiar with the Jenkins Blue Ocean pipeline view, the horizontal direction is the sequential stages, and the vertical direction is what runs simultaneously. You can define how many builds may run in parallel, and also the relationships between them. If a stage passes, you see a green mark; if it fails, a red mark.

This is the log view from Blue Ocean. There is a tricky part: we use URLs in place of stored artifacts. Jenkins only stores the URL; it does not store the test result payload in its database, so we reduce the burden on the database. It is similar for the log part: we just record the URL and report the verdict. If a test fails and you need to check the log, you just follow the URL, which again reduces the load on the database.

These are the benefits of using the system. Every pull request is tested on real boards, and the test scope can be customized by the original developer: you are the developer, you wrote the feature, so you know what needs to be tested. Of course the regression tests still run for all the known cases; beyond that, you customize things yourself. The whole process is asynchronous: you send a pull request and your duties are done; you just wait for the result and can switch to other tasks in the meantime. There is no other intervention you need to do. You also do not need to learn many things, which is the best part, because developers need to focus on their own projects; if you work on the low-level software, you do not need to know how the cloud system is structured. You only need three things: some basic Docker knowledge, which takes about one hour; Git; and the simple YAML schema, which takes only about 30 minutes.

We also do not need manual maintenance of the whole process, which is very good, because we operate on a stable open source architecture and combine everything in a rational way; there is not much left to do by hand. Only when you have more demand do you need to scale the system, and the benefit of Docker is that you just expand it: with convention over configuration, when you need more containers you just follow your naming rules, nothing more. As I said, if no hardware scaling is needed, it takes only about five minutes to expand the setup.

These are the future works. Zephyr is an open source project, and one of its goals is functional safety evaluation such as ISO 26262, so code coverage should reach 100%. One work in progress is using GDB to pull the coverage data off the board and analyze it in real time, because only on real boards can you run all the test cases and collect full coverage. In simulation it is very difficult, because it is not possible to simulate all the SoCs. This is what we are going to do next.
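Coming back to the three-field configuration from earlier, here is a minimal sketch of the verdict step; the YAML field names are illustrative rather than our exact schema, and the success banner is the one Zephyr samples typically print:

```python
import re
import yaml  # PyYAML

CONFIG = """
kit: frdm_k64f                                # which board to run on (example)
pass_pattern: "PROJECT EXECUTION SUCCESSFUL"  # regex judged against the UART log
test: tests/kernel/timer                      # what to build and flash
"""

def judge(log_text: str, config_text: str) -> bool:
    """Apply the configured regular expression to the captured log.

    The CI stores only a URL to the full log; this pass/fail verdict is
    the part that goes back to Jenkins and the pull request status.
    """
    cfg = yaml.safe_load(config_text)
    return re.search(cfg["pass_pattern"], log_text) is not None

print(judge("... PROJECT EXECUTION SUCCESSFUL ...", CONFIG))
```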
Many users also want to get the test status of the boards, instead of just seeing the bugs we report: how a board behaves on different LTS releases, whether it is stable, whether it passes the tests, whether they can use it in a certain way. That calls for a test management system, and Zephyr is already adopting one, so our next action is to bring our test reports and that test management tooling together. And this is the Zephyr working group; we are going to open source all the scripts.

Basically, that is all for my part. If you have any questions about DevOps or Zephyr, you can raise them now; we still have about five minutes.

Q: Can you add nodes at any time?
A: You can, because Jenkins allows adding extra nodes; you just need to add the IP. That still has to be done in the interface. Yes, it has a backend UI, and we also have the API, so you can operate from the backend and add a node there. These are the nodes: some are for the physical machines, and you can also have nodes for the virtual machines.

Q: My second question: during execution, when a job is stuck, is it possible to get in and debug? How do you do that?
A: Every machine is running Docker, so if you want to debug, you can log in to the server and see the actual status of the server in the log.

Q: So there is effectively no access control; you can see everything?
A: There is some control: on the machines, you need to turn on authentication. But end users do not want authentication control everywhere, and it is difficult to balance, because normally we want authentication since we do not know what users are doing, yet we also want the system to be transparent. You do not need to worry that we have done something wrong, because we provide all the logs to you and it is very easy to see in the log: when the system runs, it is doing the same thing as you would, issuing the same command lines.

Q: I understand. It is very difficult to balance, right? Thank you.

Q: You said that the user can go to a specific node for debugging, for example exec into the container. Is that safe?
A: If I give you the credentials, you are a trusted user. Not everyone can have that; you need to apply for it manually. Actually, we did consider this when designing the CI system. Our initial principle is that the system just follows what developers do by hand and runs the same command lines, so there is nothing opaque; the only difference is whether a machine is doing things or a human is. We do not want artificial intervention. And if we open source it, it is because we have done a lot of testing to ensure it is reliable: you can trust that these modules have very high reliability, you do not need to suspect issues introduced by our system, and you can replicate the same thing yourself. So the system causes no new problems, and we do not need users to debug it.

Q: What if the user wants to add a new test case?
A: It is very simple: in your PR, you just create a pipeline configuration file and give the test path.

Q: I mean, if we add some test files and the environment is different, can we adjust for that?
A: For a typical Zephyr case, it is not possible to have a different environment.
If the environment is inconsistent, the code will not get into Zephyr in the first place, because Zephyr only supports the CMake build method; other methods would not work.

Q: I mean, I am talking about the tests themselves.
A: When you run the test, after the image is downloaded to the board, if you write the script, you can judge: we give the log to you, and you judge from the log whether it passed or not. But if there are deeper issues, you need a JTAG probe to debug, and we think that should not be fixed inside the CI; otherwise it is garbage in, garbage out.

Q: I have a question about Jenkins from your slides. Are the test cases running individually or in parallel? For the master, isn't Jenkins itself overloaded?
A: It has load balancing. It depends on the cores of the CPU: if you have two cores, you can run two cases; if you have 16 cores, you can have all 16 enabled. If the tasks go beyond the capacity of the master, they just wait there automatically. That is the Jenkins build side. During execution, Jenkins cannot control the boards directly; that is why we have the task queue: we use a server to maintain the task queue and place jobs on the programming machine. The program tells the task queue when the previous job has finished, and then the next one runs. That is what makes the whole process stateless.

Any more questions? If not, thank you.