The first talk is titled "The Interpose PUF (iPUF): Secure PUF Design against State-of-the-Art Machine Learning-Based Modeling Attacks." The authors of this paper are Phuong Ha Nguyen, Durga Prasad Sahoo, Chenglu Jin, Kaleel Mahmood, Ulrich Rührmair, and Marten van Dijk. The talk is given by Ha and Chenglu.

Okay, thanks for the introduction. Hello everyone. Today we are going to talk about our research, the Interpose PUF, or iPUF. It is a secure PUF design against state-of-the-art machine-learning-based modeling attacks. This work was conducted by Ha, Durga, Chenglu, Kaleel, Ulrich, and Marten.

This is the outline of today's presentation. I will start with the motivation of our work, then the evolution of Arbiter-PUF-based strong PUFs, and then a very important concept behind reliability-based machine learning attacks: short-term reliability. Then we will show you our understanding of this attack, in terms of why it works and when it fails. Based on those key insights, we introduce the design of the Interpose PUF, which is secure against all state-of-the-art machine-learning-based attacks.

I'm sure many of you are familiar with the concept of a PUF. Essentially, a PUF is a security primitive that takes challenges as input and generates responses as output, based on manufacturing process variation. That is why it is unique on every single device, and it can be used for device authentication and key generation (a small sketch of the authentication use case follows below). In general, we classify PUFs into two categories: weak PUFs and strong PUFs. The difference between them is the size of the challenge-response space: if it is very small, it is a weak PUF; otherwise, it is a strong PUF.

On the other hand, when we talk about attacks, researchers have proposed several, which we classify into two categories: classical machine learning attacks and advanced machine learning attacks. Essentially, a classical machine learning attack exploits the information inside reliable challenge-response pairs, while an advanced machine learning attack exploits the noise information inside noisy challenge-response pairs, where the noise is introduced by environmental noise during PUF evaluation.

Based on the existing attack results, we further classify strong PUFs into three categories: PUFs that are broken but lightweight, for example the Arbiter PUF and XOR Arbiter PUFs; PUFs with no security proof, whose security is therefore unclear to us; and PUFs with rigorous security proofs that are very large in hardware.

Our design philosophy is this: we take the XOR Arbiter PUF as our starting point. Why? Because it is very lightweight, its security does not rely on any digital computation or interface logic, and it has a very precise mathematical model describing its behavior. That is why we can use this model to analyze its security rigorously. From existing results, we know it is secure against all classical machine learning attacks, but not against reliability-based, or advanced, machine learning attacks. That is why we build on top of the XOR Arbiter PUF to introduce the Interpose PUF, which is secure against reliability-based machine learning attacks. So this is our philosophy and the motivation for this work.
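As a rough illustration of the authentication use case just mentioned, here is a minimal sketch (our own, not from the talk): a verifier enrolls challenge-response pairs once, and later re-checks a batch of them with a matching threshold to absorb evaluation noise. `puf_eval` is a hypothetical placeholder for evaluating a device's PUF.

```python
import secrets

def enroll(puf_eval, num_crps=1000, n=64):
    """Enrollment: measure and store random challenge/response pairs once.
    puf_eval is a hypothetical stand-in: challenge bits in, response bit out."""
    crps = []
    for _ in range(num_crps):
        c = tuple(secrets.randbits(1) for _ in range(n))
        crps.append((c, puf_eval(c)))
    return crps

def authenticate(puf_eval, crps, threshold=0.9):
    """Authentication: re-evaluate stored challenges on the device and accept
    if enough responses match; the threshold absorbs evaluation noise."""
    matches = sum(puf_eval(c) == r for c, r in crps)
    return matches / len(crps) >= threshold
```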
Now, some history. The Arbiter PUF is designed like this: it compares the delay difference between two paths, and based on the comparison result it generates a response bit, either 0 or 1. There are many choices of paths, and these paths are determined by the challenge bits. That is why it has an exponentially large challenge-response space, and that is why it is a strong PUF. But this PUF has a linear model, so it can be easily attacked by machine learning algorithms. So it was proposed to XOR the responses of multiple Arbiter PUF instances together, giving the XOR Arbiter PUF. This PUF, so far, is secure against all classical machine learning attacks, but it is not secure against reliability-based attacks.

This is the design of the Interpose PUF: it is essentially a composition of XOR Arbiter PUFs, with two layers. We feed the challenge vector into the top-layer XOR Arbiter PUF, which generates a one-bit response; this response bit is interposed into the challenge vector, which is then fed into the lower XOR Arbiter PUF to generate the final response r as the output of the iPUF.

To compare these three designs: the Arbiter PUF essentially relies on the delay difference, the Delta here, and you will see this Delta very frequently later in our presentation. The XOR Arbiter PUF has its own pros and cons in terms of security and mathematical models. For the Interpose PUF, we developed a formula that maps the behavior of the iPUF to the behavior of the XOR Arbiter PUF; the formula is described over there (roughly, an (x, y)-iPUF behaves like a (y + x/2)-XOR Arbiter PUF). This formula is super important to us, because using it we can map all the existing security results for the XOR Arbiter PUF onto the iPUF. So we do not need to deal with classical machine learning attacks anymore; we only need to defend against reliability-based machine learning attacks.

So we need to understand the concept of short-term reliability. What does it mean? Given a PUF and a challenge, you can measure the challenge many, many times, get many responses, and then calculate that challenge's reliability. If a challenge is very reliable, its delay difference Delta is far away from zero, so it is not very sensitive to environmental noise. If the challenge is not very reliable, i.e., noisy, its Delta is very close to zero, and the response can easily flip from 0 to 1 or from 1 to 0. So reliability leaks information about Delta, and that is why an attacker can exploit it to attack the Arbiter PUF or the XOR Arbiter PUF. Now I will introduce how this information is exploited in an attack.
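Before getting to the attack, here is a minimal simulation sketch of the building blocks just described: the linear delay model with its Delta, the XOR composition, the iPUF interposition, and short-term reliability. This is our own illustrative code, not the authors'; the Gaussian weights, the Gaussian noise model, and all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def features(c):
    """Parity feature vector Phi(c) of the standard linear delay model:
    Phi_i = prod_{j >= i} (1 - 2*c_j), plus a constant 1."""
    t = 1 - 2 * np.asarray(c)
    return np.append(np.cumprod(t[::-1])[::-1], 1.0)

def apuf_delta(w, c, sigma=0.0):
    """Delay difference Delta = w . Phi(c), plus Gaussian evaluation noise."""
    return w @ features(c) + sigma * rng.normal()

def apuf(w, c, sigma=0.0):
    """Arbiter PUF response bit: the sign of Delta."""
    return int(apuf_delta(w, c, sigma) > 0)

def xor_apuf(ws, c, sigma=0.0):
    """x-XOR Arbiter PUF: XOR of x Arbiter PUF responses to one challenge."""
    return sum(apuf(w, c, sigma) for w in ws) % 2

def ipuf(ws_up, ws_low, c, pos, sigma=0.0):
    """(x, y)-iPUF: the upper x-XOR output bit is interposed at position pos,
    and the resulting (n+1)-bit challenge feeds the lower y-XOR layer."""
    b = xor_apuf(ws_up, c, sigma)
    return xor_apuf(ws_low, np.insert(np.asarray(c), pos, b), sigma)

def short_term_reliability(ws, c, repeats=100, sigma=0.5):
    """Measure one challenge many times; challenges with |Delta| far from
    zero are stable, challenges with Delta near zero flip easily."""
    bits = [xor_apuf(ws, c, sigma) for _ in range(repeats)]
    return max(bits.count(0), bits.count(1)) / repeats

# Example: a 64-stage (1, 3)-iPUF with hypothetical Gaussian weights.
n = 64
ws_up = rng.normal(size=(1, n + 1))
ws_low = rng.normal(size=(3, n + 2))  # lower layer sees n + 1 challenge bits
c = rng.integers(0, 2, n)
print(ipuf(ws_up, ws_low, c, pos=n // 2, sigma=0.1))
```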
Thank you, Chenglu, for the first part. My name is Ha, and I am going to explain the most powerful modeling attack on the XOR Arbiter PUF, known as the Becker attack, and why the iPUF can defeat it. There are several important points we need to understand about the Becker attack. The first is that we can describe the behavior of an Arbiter PUF instance using the linear delay model. What does that imply? It implies that, as the adversary, all you care about is how to find the weight vector w. The second is that the CMA-ES algorithm is used in the Becker attack. This is a heuristic modeling algorithm with many generations, or iterations, like we can see in the slide. In each iteration, many approximated models ŵ are generated, and the adversary can somehow measure the similarity between each approximated model and the target model. The good approximated models are used to generate the next generation, and the process repeats. Eventually we may obtain a really well-approximated model, like we can see in the slide at generation 6.

The third is that we have to understand how the Becker attack is performed on the XOR Arbiter PUF, because it just directly applies the CMA-ES attack on a single Arbiter PUF to the XOR Arbiter PUF, without any modification. This is a really surprising thing in my opinion, and very powerful. Why? Because it can build a model for every Arbiter PUF instance separately. This makes the Becker attack very different from the classical machine learning attacks, where the models of all Arbiter PUFs have to be built at the same time.

So, for a given Arbiter PUF, the adversary collects a set of challenge-response pairs Q, like we can see over there. Based on the reliability information for each challenge, it splits Q into two subsets: the noisy set and the reliable set. The set Q and its noisy and reliable subsets are very important, because they let us measure the similarity between an approximated model and the target model, like we can see here. For a given candidate model (ŵ, ε), we can build its own noisy and reliable sets from the challenge information and the ε: a challenge belongs to the model's noisy set if |ŵ · Φ(c)| ≤ ε, and to its reliable set otherwise, like we can see in the slide. This is how we measure the similarity between an approximated model and the target model. We can do it for every approximated model, so we know which approximated models are good and which are bad. We discard the bad ones, keep the good ones, and generate the next generation. Repeating this process, we may eventually get a really well-approximated model, and the attack actually works really well in experiments. As I mentioned before, in a real attack Becker can build a model for every Arbiter PUF instance in the XOR Arbiter PUF separately, just by applying the Becker attack for the single Arbiter PUF without any modification. The point here is that no theory had been developed to explain why the Becker attack works and how we can defeat it. It actually took us more than three years to understand the attack and develop our design.
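To make this similarity measurement concrete, here is a minimal sketch of the reliability-based fitness as we read Becker's attack; it is our own illustrative code, not the authors', and the threshold rule and correlation objective are our reconstruction. `Phi` is the matrix of feature vectors of the collected challenges (one row per challenge), and an off-the-shelf CMA-ES implementation would evolve the candidate pair (ŵ, ε) to maximize this score.

```python
import numpy as np

def measured_reliability(responses):
    """responses: (num_challenges, repeats) 0/1 array from re-measuring each
    challenge. h_i = |repeats/2 - #ones_i| is large for stable challenges."""
    reps = responses.shape[1]
    return np.abs(reps / 2.0 - responses.sum(axis=1))

def model_reliability(w_hat, eps, Phi):
    """The candidate model deems a challenge reliable iff |Delta| =
    |Phi(c) . w_hat| > eps, i.e. it falls in the model's reliable set."""
    return (np.abs(Phi @ w_hat) > eps).astype(float)

def fitness(w_hat, eps, Phi, h_measured):
    """Score a CMA-ES-style search would maximize: the correlation between
    measured reliabilities and the model's predicted reliabilities."""
    return np.corrcoef(h_measured, model_reliability(w_hat, eps, Phi))[0, 1]
```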
Okay, so to understand the Becker attack, there are some very important points we have to remember. The first is that if any Arbiter PUF instance is noisy on a challenge, then the XOR Arbiter PUF is also noisy on it. What does that imply? It implies that the reliable set Q_R of the XOR Arbiter PUF is shared among all Arbiter PUF instances. The second is that the noisy set Q_N of the XOR Arbiter PUF is always larger than the noisy set of any single Arbiter PUF, like we can see in the slide.

Now we have the following analysis of the Becker attack on the XOR Arbiter PUF. Suppose we have a 10-XOR Arbiter PUF and Q_10 is the largest noisy set, like we can see in the picture. First, the approximated models in the CMA-ES are models of single Arbiter PUFs. Second, an approximated model can only converge to one of the Arbiter PUF instances. Third, by its nature, CMA-ES maximizes the matching, or similarity, between the Q of the approximated model and the Q of the XOR Arbiter PUF. Putting everything together, it implies that the CMA-ES has to converge to Arbiter PUF 10, because the Q of Arbiter PUF 10 is the best representative of the Q of the XOR Arbiter PUF, like we can see in the slide. If we run the attack again, Q_3 may now be the largest noisy set, and the CMA-ES will converge to Arbiter PUF 3. In this way, we can build a model for every Arbiter PUF instance in the XOR Arbiter PUF separately.

Okay, making the Becker attack fail is very easy: we can implement majority voting at the output of Arbiter PUF 0 inside the XOR Arbiter PUF, like we describe in the slide (a small sketch of this voting idea follows at the end of the talk). Clearly, Q_0 is now always smaller than Q_1. What does that imply? It implies that the Becker attack cannot build a model for Arbiter PUF 0, but it can still build a model for Arbiter PUF 1. So now we know how to defeat the Becker attack, but this type of design is still not secure. That is why we had to come up with the iPUF; this is why we are here.

The iPUF can defeat the Becker attack for the following reasons. First, we can prove that less information about the Arbiter PUF instances of the upper x-XOR Arbiter PUF is present at the iPUF output than about the instances of the lower y-XOR Arbiter PUF. What does that imply? It implies that the Becker attack cannot build a model for any Arbiter PUF instance in the x-XOR Arbiter PUF. Second, the attack cannot compute the noisy and reliable sets for a given approximated model of the lower layer. Why? Because it cannot compute Delta. Why? Because it cannot compute the output of the x-XOR Arbiter PUF. Putting everything together, we understand why the iPUF can defeat the Becker attack.

Surprisingly, the design is a very simple one. We do not need any interface, nothing. We can prove that the iPUF defeats all known classical machine learning attacks, and we have now also shown that it defeats the Becker attack as well. In our paper, we have many other contributions too: we propose an enhanced Becker attack; we also show that if the logistic regression modeling attack on the XOR Arbiter PUF is the best attack, it is not applicable to the iPUF, which implies that we may need fewer Arbiter PUF instances in the iPUF than we propose in the current paper. We also publish the FPGA implementation code for the Arbiter PUF, the XOR Arbiter PUF, and the iPUF, we publish the attack algorithms, and we have a detailed tutorial online.

This slide concludes our talk. There are two things to remember: first, we understand how the Becker attack works; second, we propose a new lightweight PUF design which is secure against all state-of-the-art modeling attacks. Thank you for your attention. We are open for your questions.
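Returning to the majority-voting counterexample from the talk, here is the promised minimal sketch (our own, with `puf_eval` a hypothetical noisy evaluation function, e.g. the `apuf` from the earlier simulation sketch):

```python
def majority_vote(puf_eval, c, k=11):
    """Evaluate the same challenge k times (k odd) and return the majority
    bit. Voting suppresses evaluation noise, so the voted Arbiter PUF
    contributes fewer challenges to the XOR PUF's noisy set Q_N, and the
    reliability attack converges to the un-voted instances instead."""
    ones = sum(puf_eval(c) for _ in range(k))
    return int(ones > k // 2)
```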
So we have time for one or two questions. Is there any question? [Audience question, inaudible.] Sorry, I cannot hear you clearly. [Audience: Do x and y need to be different? Can you not do rounds with just one XOR Arbiter PUF x?] Sure, we can. Actually, in our paper we propose that x can be one and y can be nine, but we may also have, say, x equal to one and y equal to six or seven. Is that okay? I think what he asked is whether we can reuse the x-XOR Arbiter PUF as the lower layer, right? Essentially, we cannot, because the size of the challenge input is different: the upper layer takes an n-bit input and the lower layer takes an (n+1)-bit input. [Audience: You could always design x with an (n+1)-bit input and simply tie one bit to a constant for the first round; then you can have two rounds.] Yeah, it might work. But in our suggested parameters we do not require y to equal x. Actually, x just needs to be at least one, and y is where the security really comes from. You can also see this from our formula relating the iPUF and the XOR Arbiter PUF: the equivalence is that an (x, y)-iPUF behaves like a (y + x/2)-XOR Arbiter PUF, so essentially all the security comes from the lower layer. Any other questions? Okay, let's thank the speaker again.