Thank you for the introduction. Good afternoon and welcome to this talk. It is a pleasure for me to be here and present some of the results that my colleague and I have achieved under the supervision of Professor Jean-Pierre Seifert. As you can see on the slide, some words are underlined: "strong" and "no mathematical model". During the talk you will see why these words are important.

Nowadays it is widely accepted that integrated circuits are subject to piracy and overbuilding attacks, and security measures relying on conventional key generation and key storage can fail. To address this issue, physically unclonable functions (PUFs) have been introduced as promising candidates. The notion of a PUF is inspired by the characteristics of human fingerprints, so we expect a PUF to be unique and unclonable. From a general point of view, a PUF is an input-to-output mapping, a mapping from a challenge space to a response space. Depending on the size of the challenge space, we can define strong and weak PUFs. In this talk we mainly focus on strong PUFs, whose challenge space is exponentially large.

The question is: are they really unclonable? Several attacks against different types of PUFs have been proposed in the literature. We are mainly interested in machine learning attacks, where the attacker collects a subset of challenge-response pairs (CRPs) and tries to find a model representing the challenge-response behavior of the PUF. Two types of machine learning attacks can be found in the literature: empirical algorithms and provable, so-called PAC (probably approximately correct) learning algorithms. In the first case, the attacker collects a subset of CRPs, called a sample, and gives it to a machine; the machine runs an algorithm, and we hope that it delivers our desired model, say a model in which only one point is not correctly classified. But it can happen that no model is delivered, or that a less accurate model is delivered. One might think that by increasing the size of the sample we would obtain the desired model, but again it can happen that no model, or a less accurate one, is delivered. This is not the case for a PAC learning algorithm. There, we can calculate beforehand how many CRPs are required before running the algorithm, and we can fix the levels of accuracy and confidence, that is, the probability of obtaining a model of that accuracy after the learning phase. So if such a sample is given to the machine, we will obtain our desired model with the chosen confidence; it is guaranteed.

Having all this information about PAC learning and PUFs in mind, it is time to see how a PAC learning attack can be launched in practice. But before that, let me introduce two further notions related to PAC learning, namely weak learning and boosting; this is a new part that we have proposed in our CHES paper. Assume that I have access to this set of challenge-response pairs and give it to the machine, and that it is only sufficient to generate this model. As you can see, this model is less accurate; to be more precise, the accuracy of the model delivered by the machine is only slightly better than 50%. Thanks to the papers published by Freund and Schapire in the '90s, we know that PAC learning and weak learning are equivalent. That means that by applying boosting techniques we can convert a weak learner into a strong learner. How does it work? As you can see on the next slide, we tell the machine: no worries, just focus on the points that are not correctly classified; step by step, take into account the model that you have found so far, and try to improve it.
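To make the boosting idea concrete, here is a minimal AdaBoost-style sketch in Python. This is an illustration under assumptions, not the exact procedure from the talk: the weak learner is a decision stump over single challenge bits, labels are taken in {-1, +1}, and the helper names `train_stump`, `adaboost`, and `predict` are hypothetical.

```python
import numpy as np

def train_stump(X, y, w):
    """Weak learner: choose the single challenge bit and polarity that
    minimize the weighted classification error (a decision stump)."""
    best_bit, best_pol, best_err = 0, +1, np.inf
    for bit in range(X.shape[1]):
        for pol in (+1, -1):
            pred = pol * (2 * X[:, bit] - 1)   # map bits {0,1} to {-1,+1}
            err = w[pred != y].sum()
            if err < best_err:
                best_bit, best_pol, best_err = bit, pol, err
    return best_bit, best_pol, best_err

def adaboost(X, y, rounds=50):
    """Boosting: after each round, increase the weight of the points the
    current weak model misclassifies, so the next round focuses on them."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(rounds):
        bit, pol, err = train_stump(X, y, w)
        err = min(max(err, 1e-12), 1 - 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)   # vote weight of this round
        pred = pol * (2 * X[:, bit] - 1)
        w *= np.exp(-alpha * y * pred)          # up-weight the mistakes
        w /= w.sum()
        ensemble.append((alpha, bit, pol))
    return ensemble

def predict(ensemble, X):
    """Weighted majority vote of all weak models."""
    score = sum(a * p * (2 * X[:, b] - 1) for a, b, p in ensemble)
    return np.sign(score)
```

Each round reweights exactly as described above: the misclassified points gain weight, so the next weak model concentrates on them, and the final prediction is the weighted vote of all rounds.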
At the end, we will have our desired model. In practice, it has been shown that whenever the attacker can find a model for the functionality of the PUF, it is easy to establish a proper polynomial-size representation of the PUF and then find a polynomial-time algorithm to PAC learn it. This whole process is called launching a PAC learning attack in practice. But what happens if such a model cannot be found? This is the case, for example, for Bistable Ring PUFs (BR-PUFs). A BR-PUF consists of an even number of stages connected in a loop, and in each stage we have two NOR gates. By applying a challenge, one of these NOR gates is selected in each stage, and when the oscillation in the loop stops, we can read the response, for example, from this point. Unfortunately (of course, only from the attacker's point of view), no precise mathematical model of this PUF has been found so far. But does that mean we cannot launch our PAC learning attack?

To answer this question, let us look again at the definition of PUFs in general. As we said, a PUF is a mapping from a challenge space to a response space, and for the known families of PUFs it is a mapping over the field F2. Now let me introduce linear Boolean functions; a linear Boolean function is shown here. For example, if we know the response to one challenge and the response to a second challenge, we can calculate the response to the new challenge obtained by XORing them. But is this the case for PUFs? Can we predict the response to this new challenge? Of course not. Let me be more precise: the only linear Boolean functions over F2 are parity functions, and we can easily prove that no PUF is a linear function over F2.

So what does this mean? The consequence is that when applying a challenge to a PUF, not all bit positions are equally influential in determining the response of the PUF; there are some influential bits. But how many influential bits does a PUF have? To determine this, we should take into account the notion of average sensitivity. Average sensitivity is defined as follows: it is the sum, over all bit positions, of the probability that we obtain two different responses for two challenges, where the first challenge is chosen uniformly at random and the second one is obtained by flipping only one bit of the first. In symbols, I(f) = Σ_{i=1}^{n} Pr_c[f(c) ≠ f(c ⊕ e_i)]. Friedgut's theorem relates the notion of average sensitivity to the number of influential bits, so we can compute how many influential bits our PUF has.

Let's have a look at this example. Assume that we know the first bit position is influential; that means that if the first position is set to one, the response of the PUF is always one. So we can predict that the response to the third challenge is one as well. But what happens if we do not know which bit positions are influential? This problem is well studied in machine learning theory, and it is called learning juntas. A k-junta is a Boolean function whose response is determined by only k of its bit positions; for example, in this case we have a 1-junta. It has been proved that k-juntas can be PAC learned in polynomial time if k is a constant. But is this the case for PUFs? We can answer this question, at least for BR-PUFs, and what we have learned from practice is that it is. For example, the experiments done by Yamamoto et al. show that the number of influential bits is five; we repeated the same experiment and found seven influential bits for our BR-PUFs.
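Since the definition above is just a sum of flip probabilities, it can be estimated by sampling. Here is a minimal Monte Carlo sketch in Python; `puf_response` is a hypothetical callback standing in for queries to the actual device (or a simulation of it), and the sample count is arbitrary.

```python
import numpy as np

def estimate_average_sensitivity(puf_response, n_bits, n_samples=10000, rng=None):
    """Monte Carlo estimate of I(f) = sum_i Pr[f(c) != f(c xor e_i)]:
    draw a uniform challenge, flip each bit in turn, count disagreements.

    `puf_response` is assumed to map a 0/1 vector to a 0/1 response."""
    rng = rng or np.random.default_rng()
    flips = 0
    for _ in range(n_samples):
        c = rng.integers(0, 2, size=n_bits)
        r = puf_response(c)
        for i in range(n_bits):
            c[i] ^= 1                    # flip bit i
            if puf_response(c) != r:
                flips += 1
            c[i] ^= 1                    # restore bit i
    # each challenge contributes one flip test per bit position, so
    # dividing by n_samples sums the per-bit flip probabilities
    return flips / n_samples

# toy check: a 1-junta (response = first challenge bit) has I(f) = 1
print(estimate_average_sensitivity(lambda c: c[0], n_bits=8))
```

For the toy 1-junta, only the first bit ever changes the response, so the estimate converges to 1; a function whose response depends on few bits likewise has small average sensitivity.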
So we can conjecture that the number of influential bits is a constant. But to support this and be more precise, we should compute the average sensitivity of the Boolean function representing the BR-PUF, and this is exactly what we have done. We implemented BR-PUFs from 4 bits up to 64 bits and computed their average sensitivity. As you can see in this table, for the 64-bit BR-PUF the average sensitivity is 5.17, so we can conclude that k is a constant.

Now it is time to put all these pieces together, solve our puzzle, and launch our PAC learning attack against the BR-PUF. We collected 100,000 CRPs in two different experiments and used an open-source machine learning tool called WEKA. In addition, it is known that an efficient algorithm for learning decision trees, decision lists, or monomials is also efficient for learning juntas. Therefore we can use the off-the-shelf algorithms provided by WEKA for learning decision lists, decision trees, and monomials; as you can see, the attacker can easily launch this type of attack. Moreover, we used the AdaBoost implementation provided by WEKA, but any type of boosting can be used in our framework.

Let's have a look at some of our results. Here I only show the results for monomials and only for the BR-PUF; you can find further results for decision trees and decision lists, and for twisted bistable ring PUFs, in our paper. In the first row you can see the accuracy of the model without applying the boosting technique, and in the last row the accuracy of the model after 50 iterations of boosting. As can be seen, there is a significant increase in the accuracy of the model in both experiments. Moreover, if we use a more sophisticated representation, for example decision lists, the accuracy increases to about 98%.

In conclusion, I would like to stress that our successful attack takes advantage of the spectral properties of the Boolean function representing the PUF, together with boosting techniques. In addition, we believe that new metrics for assessing the security of PUFs should be defined; in this paper we propose average sensitivity, and we believe it is a very good metric for assessing the security of PUFs. Finally, let me add some comments about launching PAC learning attacks. If you want to launch a PAC learning attack, it is quite easy. The first step is the critical one: find a proper polynomial-size representation of your PUF, then define the levels of accuracy and confidence that you would like to have, and then find a polynomial-time algorithm that learns this representation. As we have shown, you can even use open-source machine learning tools (see the sketch below); this makes life easier for attackers. These are some of the references that I used in this talk. Thank you for your attention. If you have any questions, I am happy to answer them.
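As a closing illustration of how easily such an attack can be mounted with off-the-shelf tools, here is a rough sketch of the pipeline in Python, with scikit-learn standing in for WEKA. The file name `br_puf_crps.csv` and its column layout (challenge bits followed by the response bit) are assumptions; the talk's actual experiments used WEKA's decision-list, decision-tree, and monomial learners.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

# assumed format: one CRP per row, challenge bits then the response bit
crps = np.loadtxt("br_puf_crps.csv", delimiter=",", dtype=int)
X, y = crps[:, :-1], crps[:, -1]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)

# weak learner: a shallow decision tree over the challenge bits, followed
# by 50 rounds of boosting, mirroring the setup described in the talk
# (scikit-learn >= 1.2; older versions name the parameter base_estimator)
model = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=3),
    n_estimators=50,
)
model.fit(X_train, y_train)
print("model accuracy:", model.score(X_test, y_test))
```

The point of the sketch is the workflow, not the particular learner: collect CRPs, pick a polynomial-size representation with an efficient learner, and let boosting lift a slightly-better-than-random model to high accuracy.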