Hello, my name is Jung Hyuk-seong. I'm a co-author of this talk, and I'm going to present today instead of the first author. The title of this talk is Smart Black-Box Fuzzing of UDS CAN. It describes how to do UDS CAN fuzzing well in a black-box environment.

So, who are we? We are Red Team and Blue Team members at AUTOCRYPT. AUTOCRYPT is a mobility security company, and recently we have focused on automotive cybersecurity. We have conducted vulnerability assessments and penetration tests with automakers and tier suppliers, and we are currently developing a fuzzer specialized for vehicles. In this talk, I would like to share tips and know-how from our fuzzing tests in the automotive industry.

First, let's talk about fuzzing tests in the automotive industry. Automakers often have no choice but to perform black-box fuzzing, because the tier suppliers don't provide source code to the automakers. Automakers also have to test the complete vehicle. Since hundreds of ECUs can be connected in a complete vehicle, it is very hard to obtain the source code of every ECU, instrument it, and rebuild it for fuzzing. So black-box fuzzing tests on the complete vehicle are inevitable.

However, black-box fuzzing of a vehicle is not easy. First, we cannot do coverage-guided fuzzing, because there is no source code. Second, we cannot triage in detail, because we can't connect to the inside of the ECUs. The only ways to monitor the status of the target vehicle and obtain information about it are the OBD port and the harness CAN lines, and we cannot do much using only those lines. But in black-box fuzzing, we have to.

In this talk, we define three black-box fuzzing challenges. First, test case generation: in black-box fuzzing, coverage-guided fuzzing is impossible, so we can't monitor the code coverage achieved by a test case, and it is hard to generate effective test cases. Second, fail detection.
In black-box fuzzing, there is no way to connect to the target by SSH, Telnet, or a debug port, so we can't directly monitor the fuzzed process. Then how do we monitor the target, and how do we know a fault has been found? Third, reset: the fuzzer should continue fuzzing even if the target is dead. To do that, the fuzzer should automatically initialize the target. But again, there is no way to connect to the SUT. Then how do we reset or reboot the SUT when it is dead? Can we solve these challenges using only the harness CAN lines and the OBD port?

This talk proposes how to do smart black-box fuzzing of UDS CAN. To fuzz smartly, we solved the challenges above by using UDS features. This talk should be a practical guide for people who want to do automated fuzzing tests in black-box settings.

This is the overview of the presentation. Up to now was the introduction. Next, I'm going to explain how we solve each challenge: test case generation, fail detection, and target reset. Finally, I'll conclude the talk.

OK, let's talk about test case generation first. Test case generation is the most important part of fuzzing. We consider three things to generate effective test cases. First, it is efficient to generate test cases only for the available UDS services. Second, the fuzzer should transmit test cases complying with the message sequence. Third, the fuzzer should transmit multiple frames when a test case is large. For effective fuzzing, the fuzzer should take these UDS CAN features into account. I'm going to explain the details of each item in the next slides.

Before that, let me describe the basic rule of test case generation: pick a target ECU and generate test cases per UDS service. That means the CAN ID and the service ID are fixed, and the other fields are mutated. Of course, you can also mutate the CAN ID and service ID, but it is not effective, because most ECUs filter out wrong CAN IDs and service IDs.
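As a rough illustration, the basic rule above might be sketched in Python like this; the CAN ID, the SID, and the payload lengths here are assumptions for illustration, not values from the talk:

```python
import random

# The CAN ID and the service ID (SID) stay fixed; only the
# remaining payload bytes are mutated.
TARGET_CAN_ID = 0x7E0   # assumed diagnostic request ID
FIXED_SID = 0x22        # e.g. ReadDataByIdentifier

def generate_test_case(sid, rng, max_payload=7):
    """Return a UDS payload: the fixed SID plus random trailing bytes."""
    n = rng.randint(1, max_payload)
    return bytes([sid]) + bytes(rng.randrange(256) for _ in range(n))

rng = random.Random(1234)
tc = generate_test_case(FIXED_SID, rng)   # one mutated test case
```

Each generated test case would then be sent on the bus with the fixed arbitration ID.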
There are 26 services in UDS, and each service has its own service ID. A fuzzer doesn't need to generate test cases for all UDS services, because not all services are available in an ECU. In my experience, usually about 10 services are available in an ECU, so it is efficient to generate test cases only for the services available on the target ECU and to test only those.

So before starting to fuzz, the fuzzer should check the available services on the target. To check an available service, the fuzzer first sends a valid CAN message for each UDS service. This valid message is a request that the ECU must answer. Second, the fuzzer checks the response to that request. Then the fuzzer decides the availability of the service depending on the response. If it is a positive response, the service is available; if there is no response, the service is unavailable. And if a negative response is received, the fuzzer decides depending on the negative response code (NRC). That means the fuzzer does not conclude that a service is unavailable just because a negative response was returned; some negative responses are decided as available. I'll show some examples on the next slides.

First, here is the example of a positive response and no response. If a positive response is returned, we know the service is available; that case is trivial. In this example, the fuzzer checks the DiagnosticSessionControl service, and when a positive response is returned, we know that DiagnosticSessionControl is available. Second, if there is no response within a timeout period, the fuzzer decides that the service is unavailable, because there was no response to a message that should have been answered; it means the service is not present in the ECU. The last case is a negative response. As I said, not all negative responses are decided as unavailable; the fuzzer should decide based on the NRC. For example, a subFunctionNotSupported negative response is decided as available.
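Putting the three cases together, the availability decision might be sketched like this; note that the set of NRCs treated as "available" is an illustrative assumption on my part, not the exact list from the talk:

```python
# NRCs that still indicate the service exists (illustrative subset).
NRC_SERVICE_PRESENT = {
    0x12,  # subFunctionNotSupported
    0x13,  # incorrectMessageLengthOrInvalidFormat
    0x22,  # conditionsNotCorrect
    0x31,  # requestOutOfRange
    0x33,  # securityAccessDenied
}

def service_available(request_sid, response):
    """Decide availability from a raw UDS reply (None = timeout)."""
    if response is None:                    # no response -> unavailable
        return False
    if response[0] == request_sid + 0x40:   # positive response
        return True
    if response[0] == 0x7F:                 # negative response: check NRC
        return response[2] in NRC_SERVICE_PRESENT
    return False
```

For example, a `7F 10 12` reply (subFunctionNotSupported to DiagnosticSessionControl) still counts as available, while `7F 10 11` (serviceNotSupported) does not.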
Why? Because it means the service is available and only the sub-function is wrong; if we fix the sub-function value, a positive response will be returned. However, a serviceNotSupported negative response is decided as unavailable, because it means the service really is not supported. So the fuzzer should make a different decision for each NRC.

Next, the fuzzer should consider the message sequence when it generates test cases. Some UDS services have a message sequence: there are services that must be requested before the target service is requested, and the fuzzer should follow that sequence. For example, if the fuzzer wants to test the WriteMemoryByAddress service, the test case should be transmitted only after both DiagnosticSessionControl and SecurityAccess have been passed. If the fuzzer doesn't pass one of those two services, the target ECU will ignore the WriteMemoryByAddress request, and the test case becomes meaningless. Therefore, the fuzzer must know the message sequence of every UDS service.

Next, the fuzzer should also follow the multi-frame transmission rule, which is ISO-TP. If a test case payload does not fit in a single 8-byte CAN frame, it should be transmitted in multiple frames following the ISO-TP rule: the large payload is divided into multiple frames, the fuzzer transmits the First Frame first, and then transmits the Consecutive Frames after receiving the Flow Control frame. If the fuzzer doesn't follow this rule and just transmits the large payload, the target ECU will ignore the test case.

So far I have talked about test case generation. Now I'm going to talk about how to detect a failure caused by the fuzzing test. The fuzzer must decide pass or fail every time it sends a test case. We introduce four failure criteria. The first is no response to a valid request: after the test case transmission, the fuzzer sends a valid request to the ECU and checks the response.
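Before going into the failure criteria, the ISO-TP segmentation rule described a moment ago can be sketched as follows. This is a simplified sketch for classic CAN with 8-byte frames, and it deliberately ignores flow-control parameters such as block size and STmin:

```python
def isotp_segment(payload):
    """Split a UDS payload into ISO-TP frames (simplified).
    Up to 7 bytes fits a Single Frame; larger payloads become a
    First Frame followed by Consecutive Frames. In a real transfer
    the sender must wait for the Flow Control frame (PCI 0x3x)
    after the First Frame before sending the Consecutive Frames."""
    if len(payload) <= 7:
        return [bytes([len(payload)]) + payload]                 # SF: PCI 0x0L
    n = len(payload)
    frames = [bytes([0x10 | (n >> 8), n & 0xFF]) + payload[:6]]  # FF: PCI 0x1_, length
    seq, rest = 1, payload[6:]
    while rest:
        frames.append(bytes([0x20 | seq]) + rest[:7])            # CF: PCI 0x2N
        seq, rest = (seq + 1) & 0x0F, rest[7:]
    return frames
```

A 20-byte payload, for example, becomes one First Frame carrying 6 bytes and two Consecutive Frames carrying 7 bytes each.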
If there is no response within the timeout period, it's a fail. Second, a specific negative response to the valid request: if the fuzzer receives certain negative responses to the valid request, that is also a fail, but not all negative responses are fails. The first and second criteria work the same way as the service availability check. The third is diagnostic trouble code (DTC) occurrence: the fuzzer periodically checks whether a DTC has occurred, and if a new DTC occurs, it reports a fail. The last one is user-specified CAN message occurrence: if the tester specifies a certain CAN message as a fail condition, the fuzzer reports a fail when that CAN message occurs.

Now I will describe the details of each criterion. The first is no response. The fuzzer sends a valid request after the test case transmission to check the target's state. If there is a positive response, it's a pass; but if there is no response within the timeout period, the fuzzer reports a fail. In UDS, the default response timeout (P2) is 50 milliseconds.

Next, the negative response. If there is a negative response to the valid request, the fuzzer decides pass or fail depending on the NRC. As in the service availability check, certain NRCs are decided as pass, not fail. This example is the same as the service availability check: even if a subFunctionNotSupported negative response is returned, it is a pass, but if a serviceNotSupported negative response is returned, it is a fail. So the fuzzer should decide depending on the NRC.

The third is the diagnostic trouble code. If a new DTC occurs, it's a fail. The fuzzer can detect the occurrence of a new DTC by sending a ReadDTCInformation request. If there is a new DTC, the ECU will return a response with the new DTC information. If there is no new DTC, it's a pass; if a new DTC has occurred, it is a fail.

The last is user-specified CAN message occurrence. The tester can specify a certain CAN message occurrence as a fail; then, whenever that CAN message occurs, the fuzzer reports a fail. But there is a precondition for using this method.
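The first three criteria can be combined into one pass/fail decision per test case, sketched below; as before, the subset of NRCs treated as a pass is my own illustrative assumption:

```python
# NRCs treated as "pass" (illustrative subset, not the talk's full list).
NRC_PASS = {0x12, 0x13, 0x22, 0x31, 0x33}

def classify(response, new_dtc=False):
    """Decide pass/fail. `response` is the raw reply to the follow-up
    valid request (None = timeout); `new_dtc` flags a fresh DTC found
    by ReadDTCInformation polling."""
    if response is None:
        return "FAIL:no-response"
    if response[0] == 0x7F and response[2] not in NRC_PASS:
        return "FAIL:nrc-0x%02x" % response[2]
    if new_dtc:
        return "FAIL:new-dtc"
    return "PASS"
```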
The precondition is that the fuzzer must be able to monitor the CAN messages being transmitted. This slide shows an overview. For example, when we run a fuzzing test against the head unit ECU, we can define the occurrence of CAN messages such as brake press, accelerator pedal press, or any other CAN message that has no relation to the head unit and could cause a dangerous situation while driving, as a fail. It's a kind of custom rule: the tester simply specifies CAN messages whose occurrence is prohibited while the fuzzing test is conducted. But as I said, there is a precondition: to monitor the occurrence of these CAN messages, the fuzzer must be able to monitor the CAN bus.

So we've talked about how to detect a fail. When a fail occurs, the fuzzer should initialize the target, so now I'm going to talk about how to reset the target ECU automatically. A fail means that the target is dead or that some trouble has occurred. In that case, the fuzzer should initialize the target to continue the fuzzing test; if the fuzzer cannot initialize the target, the fuzzing test will be terminated, or meaningless tests will be conducted. A tester can manually initialize the SUT, but doing that every time a fail occurs is a very tough task, because the tester has to keep an eye on the SUT while the fuzzing is in progress. Fuzzing is a very time-consuming task, and people cannot stay there the whole time, so automatic SUT reset is required.

This is the reset overview. The left figure shows what happens without automatic reset, and the right one shows the flow with a reset process. If the fuzzer keeps transmitting test cases even though the SUT is dead, those tests are meaningless, because the SUT is dead; so we should reset or reboot the target when a fail occurs. The right figure shows the flow with reset: when the fuzzer detects a fail, it executes the reset process to initialize the target, and after the target is reset, the test continues. Then how do we reset? We can use two UDS services.
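Going back to the user-specified criterion for a moment, a minimal watcher for such custom rules might look like this; the arbitration IDs and payload prefixes are made-up placeholders, not real vehicle signals:

```python
# Hypothetical fail rules: (arbitration ID, payload prefix) pairs that
# the tester forbids while fuzzing the head unit. IDs/bytes are made up.
FORBIDDEN = [
    (0x220, b"\x01"),   # imaginary "brake pressed" message
    (0x2A0, b"\x01"),   # imaginary "accelerator pedal pressed" message
]

def is_forbidden(arb_id, data):
    """True if an observed CAN frame matches a user-specified rule."""
    return any(arb_id == fid and data.startswith(prefix)
               for fid, prefix in FORBIDDEN)
```

A sniffer loop on the CAN bus would call is_forbidden() on every observed frame and report a fail on a match.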
Those services are ECUReset and ClearDiagnosticInformation. First, the ECUReset service can reset the ECU. If the sub-function value is 0x01, a hard reset is performed; a hard reset powers the ECU off and back on, like a fresh start-up. If the sub-function value is 0x03, a soft reset is performed; a soft reset just restarts the application program. In my experience, I recommend the hard reset, but sometimes the soft reset also works.

The second one is the ClearDiagnosticInformation service. This service clears diagnostic trouble code (DTC) data. You can specify the data you want to clear using the parameters, but I recommend clearing all DTC data by setting the parameters to FF. On the slide I simply wrote 04 14 FF FF FF; the three FF bytes mean that all DTC data is selected to be cleared. As I said before, the fuzzer can detect a fail by detecting that a DTC has occurred; when a DTC occurs, the fuzzer detects it as a fail, and then it should clear that DTC to continue with the next test case.

Now I'm going to conclude this talk. In the automotive industry, black-box testing is often required. To do smart black-box fuzzing, the fuzzer should consider the features of UDS CAN, as I have described. First, test case generation: the fuzzer should check the available services and generate test cases only for those services, which is more efficient. The test cases should also be generated and transmitted with the message sequences and the frame types in mind: if a payload does not fit in a single frame, it should be transmitted in multiple frames.

And there are four methods to detect a fail. The first is no response and the second is a negative response: after the test case transmission, the fuzzer transmits a valid request, and if there is no response, or certain negative responses are returned, it is a fail. But not all negative responses are fails, so the fuzzer should decide depending on the NRC. The third is the diagnostic trouble code.
And a user-specified CAN message occurrence can also be a fail criterion: the tester can specify that the occurrence of certain CAN messages counts as a fail. Finally, in the reset process, the fuzzer should automatically reset the target. The fuzzer can reset the ECU by using the ECUReset and ClearDiagnosticInformation services: when the fuzzer detects a fail, it should automatically reset the target using both services, and after the reset is confirmed, it should transmit the next test case. This is the end of my presentation. Thank you very much.
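As a closing illustration, the two reset requests discussed in the reset section can be encoded as raw single-frame bytes like this (a sketch; the leading byte is the ISO-TP length):

```python
def ecu_reset_request(hard=True):
    """ECUReset (0x11): sub-function 0x01 = hard reset,
    0x03 = soft reset."""
    return bytes([0x02, 0x11, 0x01 if hard else 0x03])

def clear_dtc_request():
    """ClearDiagnosticInformation (0x14) with groupOfDTC = FF FF FF,
    i.e. clear all DTC data, as recommended in the talk."""
    return bytes([0x04, 0x14, 0xFF, 0xFF, 0xFF])
```

On a fail, the fuzzer would send ecu_reset_request(), wait for the positive response, and send clear_dtc_request() before resuming the test.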