Thank you for the introduction and thank you very much for having me. Oh, where's the screen? Okay, let's try once more. Thank you very much for the introduction, thank you very much for having me, and also thank you very much to Karthik, who was willing to work on this even though he already had a new job.

So let's get right to it: what is a PUF? You heard that before in the previous talk, but here we will take a slightly different perspective. As before, we have a stimulus, we apply it to a PUF, whose behavior is somewhat random due to random manufacturing variations, and out of that we get a response. And there are two properties that we want: it needs to be easy to evaluate and hard to predict. This sounds like a great thing to have in hardware crypto, so let's go ahead and use it.

One of the most popular PUFs is the so-called SRAM PUF. Here you have the SRAM in an uninitialized state, and if you power it up, you get a random fingerprint. So you have your system-on-chip, you apply the stimulus to the SRAM, and out comes the somewhat random fingerprint. Due to the way an SRAM cell is constructed, this is already a binary response. The basic idea of these types of PUFs is key derivation from a response instead of key storage. The advantage is that delayering and optical analysis of the SoC cannot reveal the key: if you do that, you just see the SRAM, but you don't see the key. The disadvantage is that the response you get out of the PUF is somewhat noisy, so you need some error correction and helper data to work around that, but let's not focus too much on that for now.

So what about PUFs and probing? We have the same SoC as before, and now there is some invasive probing on a data bus, or probing on the PUF directly. Invasive probing is just one example; of course there are also other physical attacks, and thanks to Shahin, there is a body of work that we can look at and study. What is quite interesting here is that the previous approach usually was to make a PUF as small as possible and then put it somewhere in your system-on-chip. But in that case, what about the rest of the SoC? I guess it's not too surprising that this is a misconception: if you use a PUF in that scenario, you are not secure, and thanks to Shahin, we have a lot of practical proof of that. To be fair, an SRAM PUF is not claimed, and I think also not designed, to resist an attack where you probe the data bus. So we have two different perspectives here: one on the PUF, the other on the system. Most PUFs do not protect against live physical attacks where you probe the data bus; they are simply not tamper-evident, and then you need other countermeasures such as meshes.

So what's the idea of a tamper-evident PUF? Instead of making the PUF as small as possible, we now make it as big as possible and have it cover everything. Now, when we do the probing, we destroy parts of the PUF, and this in turn causes the key derivation to fail. Some underlying assumptions are, of course, that the PUF somehow encloses the system, that it is somehow sensitive to tampering, and that it protects itself as well, so that whatever is underneath is protected. This is what is called a tamper-evident PUF, and there are not too many examples; I listed just three of them. To make the intended failure mode concrete, there is a small sketch after this paragraph.
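A minimal sketch of the tamper-evidence idea, purely my own illustration and not the construction from the paper: quantize the analog PUF measurements and derive the key from the result, so that any probing-induced shift larger than one quantization interval destroys the key. The capacitance values, the interval width, and the use of SHA-256 are all assumptions for illustration.

```python
import hashlib

def derive_key(measurements, width):
    """Quantize each analog PUF value into an interval index, then hash.

    If an attack shifts any value into a different interval, the digest
    (and therefore the derived key) changes completely.
    """
    symbols = bytes(int(m // width) for m in measurements)
    return hashlib.sha256(symbols).hexdigest()

width = 1.0                        # assumed quantization interval width
enrolled = [3.1, 7.4, 5.9, 2.2]    # made-up capacitance values at enrollment
noisy    = [3.2, 7.3, 5.8, 2.3]    # small measurement noise, same intervals
tampered = [3.1, 7.4, 0.4, 2.2]    # one node destroyed by a probe

assert derive_key(noisy, width) == derive_key(enrolled, width)      # key survives noise
assert derive_key(tampered, width) != derive_key(enrolled, width)   # key is destroyed
```

In practice, noise can of course also push a value across an interval boundary, which is exactly why the error correction and helper data discussed next are needed, and why choosing that error correction carefully is the whole point of this talk.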
One of these examples, the coating PUF, was presented at CHES 2006. It's actually quite dated, but I still like it a lot, and I can really recommend reading the paper.

When it comes to key derivation, we now have two scenarios: one where the PUF response is binary, which is the case for most PUFs, and the tamper-evident case; let's talk about that in a second. First, let's get back to the binary case. Here, an assumption that is typically made is that the PUF bits are IID. If there is some bias, we can do some debiasing, and if there is noise, we can apply a binary ECC construction such as the fuzzy commitment or the code offset. But if we look at the almost continuous, or quasi-continuous, case of the tamper-evident PUF, the question is: what is the best approach here? As an engineer, of course, you look at it and ask: do we just throw an ECC at it? No, we start with a quantization first.

There are two well-known examples of quantization: the equiprobable and the equidistant one. For the equiprobable quantization, what you do is a kind of histogram equalization, so that each interval occurs with equal probability. What the authors of the coating PUF did was to assign a Gray code to the intervals, such that the bit difference between neighboring intervals is always just one bit. But as you can see already, if I go from the leftmost interval to the rightmost interval, the bit difference is also just one bit. If I apply the equiprobable quantization, the result is IID bits: there is no bias, because all intervals occur with equal probability, and so do the bits. This is a really nice setting for PUF key generation, and for the remaining noise I can then use some binary ECC. Once we apply that, we basically map the problem back to the binary scenario and we're done; there's nothing new we need to do. Now, if we instead look at the equidistant quantization and start assigning symbols to the evenly sized intervals, so A, B, C, D, and so on, then of course we have symbols and no longer bits. We could encode these symbols into bits right away, but at first we stay with the symbols, because that gives us more options, as we'll see on the upcoming slides. These symbols are quite biased, as you can tell from the PDF, and for the noise we again need to apply some ECC. Both quantizations are illustrated in a short sketch below.

So why on earth would we want to use equidistant over equiprobable? Because there's always a small catch. If we look at the equiprobable quantization in terms of tamper sensitivity, we see that we have differently sized intervals, and the outermost intervals are rather large. If we take one sample from that PDF and start measuring it, we usually have some measurement noise, and if we want this to work in a tamper-evident context, the typical assumption is that the noise is much smaller than the magnitude change induced by the attacker. In this simple example, the noise is just one bracket and the attacker's shift is five brackets, so the attacker can shift the value from here to there and it would still map to the same bit string. In these large intervals, an attacker can tamper without changing the response, and that kind of defeats the purpose of having a tamper-evident PUF. So that's one of the issues. Another issue is the missing selectivity of binary ECC when responses carry multiple values.
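Here is a minimal sketch of the two quantizations, assuming a Gaussian PDF and 8 intervals; the quantile-based bin edges stand in for the histogram equalization, and the 3-bit Gray code mirrors the coating-PUF-style assignment just described. All names and parameters are my own for illustration.

```python
import numpy as np

def equiprobable_edges(samples, n):
    """Interval edges such that each of the n intervals is equally likely."""
    return np.quantile(samples, np.linspace(0.0, 1.0, n + 1)[1:-1])

def equidistant_edges(lo, hi, n):
    """Edges of n evenly sized intervals over [lo, hi]."""
    return np.linspace(lo, hi, n + 1)[1:-1]

def gray(i):
    """Gray code: neighboring interval indices differ in exactly one bit."""
    return i ^ (i >> 1)

rng = np.random.default_rng(0)
samples = rng.normal(0.0, 1.0, 100_000)    # assumed Gaussian PUF node values

edges_ep = equiprobable_edges(samples, 8)
edges_ed = equidistant_edges(-4.0, 4.0, 8)

value = 0.3                                          # one measured PUF node
bits = format(gray(int(np.searchsorted(edges_ep, value))), "03b")  # IID bits
symbol = int(np.searchsorted(edges_ed, value))       # biased symbol (A=0, B=1, ...)
print(bits, symbol, edges_ep)
```

Printing `edges_ep` shows that the outermost equiprobable intervals stretch far into the tails of the distribution, which is exactly the tamper-sensitivity problem described above.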
In the case of the coating PUF, or other tamper-evident PUFs, we might have a couple of capacitors that we use as the tamper-evident PUF structure. We do the quantization, we get some bit sequence, and now we look at the reconstruction. Suppose someone attacks the PUF and induces errors in the PUF structure. In case one, we have one capacitor that was completely destroyed, so all of its bits changed. In case two, we have three different capacitors where at each position only one bit changed, and in case three, we have one position with two changed bits and another with just one. Now, if we apply, let's say, a standard binary ECC construction that corrects three errors, then in all these cases we map back to the enrollment value, which of course is good, since we want error correction, but at the same time we also correct the errors induced by the physical attack. That's another issue of this type of approach, and the same issue arises if you use an equidistant quantization and then encode to bits. In addition, the bit string per capacitor is just three bits, we had eight intervals, and as you saw in the figure before, some large-magnitude errors are possible where the Hamming distance is just one.

What we really need for PUF key derivation is to not only look at reliability and entropy, but to somehow make tamper sensitivity part of the picture. Usually, if you look at PUFs implemented at the IC level, you look at helper data storage, logic area, and runtime parameters such as energy efficiency or runtime, and on the security side at reliability and entropy, but tamper sensitivity was not part of that. So instead of making PUFs small and lightweight, we really need a different approach for tamper-evident PUFs, where we make them tamper-evident, large, and secure.

To do that, and to provide a fair comparison, there are two definitions, really easy ones. There is the maximum-magnitude tamper sensitivity, which is the maximum magnitude of change that goes undetected. For someone defending the system, this is the worst case, because the attacker is allowed this maximum magnitude without changing the PUF outcome. And then we have the minimum-magnitude tamper sensitivity, which is the minimum magnitude of change that we detect; this is kind of the best case, the earliest shift that we're going to detect. To allow comparison between different schemes, we need a unit to express the magnitude of the shift the attacker induces, and since it's a PUF, what is quite useful is the measurement noise σ_n. The practically best physical security is achieved when these two metrics are the same and close to one, i.e., equal to the noise of the PUF measurements.

So now we have different options. We start with the raw output. We can go the equiprobable route: quantize, get the binary Gray-coded string, and apply an ECC over Hamming distance, which is called profile five in the upcoming table. Or we apply the equidistant quantization and get symbols. We can map them to bits of a fixed length, which is what we call profile three. Or we can map the symbols to bits of variable length, and then something, let's say, unusual needs to be done: we need to do the ECC over the Levenshtein distance, because you not only have substitution errors, but also insertions and deletions.
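Since Levenshtein distance is less common in PUF papers than Hamming distance, here is the textbook dynamic-programming definition as a small, self-contained sketch; the bit strings are made up for illustration.

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance counting substitutions, insertions, and deletions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

assert levenshtein("0110", "010") == 1    # one deletion
assert levenshtein("0110", "0100") == 1   # one substitution
assert levenshtein("0110", "0110") == 0
```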
Of course, we can also use a q-ary ECC over Hamming distance and a q-ary ECC over Lee distance. I think you're all familiar with Hamming distance, Levenshtein distance I just briefly introduced, and what we will focus on, and what is part of the proposal in the paper, is a q-ary ECC over Lee distance, based on the output of the equidistant quantization.

In the binary case, what you typically see in a PUF paper is the binary symmetric channel. What we're looking at now is the q-ary channel: symbols zero to q minus one, and the figure shows the transition probabilities to all the others, with solid lines and a dashed line. When we look at Lee distance, there are two different scenarios to consider. If we just look at the solid lines without the dashed line, this is what we call the non-wraparound channel, because from q minus one back to zero we have a distance of q minus one. With the dashed line, it's called the wraparound channel, and then the distance between q minus one and zero is just one. In the tamper-evidence scenario, we definitely want the non-wraparound variant, which is then also called Manhattan distance. Just to be clear: when I say Lee distance or Manhattan distance, I generally refer to the non-wraparound channel.

Then there are different limited-magnitude error types. One is the asymmetric case: green is our designated value, and red is the error magnitude. Here we can be really selective: in the asymmetric case, we only correct errors to the right. In the symmetric case, we correct errors of equal magnitude to the left and right. And in the bidirectional case, the two magnitudes in opposite directions can even differ. Now, if we look at the PDF of our PUF and we have some symbol, then with l_up and l_down equal to one, we basically only look at the neighboring intervals. That's a very narrow frame that we correct, and that is exactly what we want.

So these are the results from our comparison; I'll walk you through, no worries. We use the coating-PUF parameters, and we have six profiles. Profile one is just the equidistant quantization: we set a certain width for the quantization interval, we have a number of intervals, these are all the blocks that we need to process, and we don't have an ECC. The entropy we get out is 267 bits. And now the newly defined metric, tamper sensitivity per node, where a node is just one capacitor: per node, it is the width defined for the quantization, and for the whole device we sum up all the magnitude the attacker is allowed to induce without being detected, which in this case is 692, in units of σ_n. The distance metric is none. For profile two, we use the fuzzy-commitment-based PUF ECC with a Reed-Solomon code. In that case, the entropy is much lower and the tamper sensitivity is much worse, so the numbers here are much higher, and we used Hamming distance over symbols: the distance between symbol A and symbol D would be one, same as the distance between A and B. Then we have the binary case, profile three: we map the symbols back to the binary scenario and use a BCH code correspondingly. We get many more bits out; this is based on a code-offset construction, but still the tamper sensitivity is much worse compared to the quantization approach alone, which is a little bit surprising. Before the remaining profiles, let me make the distance variants concrete with a small sketch.
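A minimal sketch of the two distance variants and the limited-magnitude window, with made-up symbol vectors; the helper names are hypothetical, and this is my own illustration, not code from the paper.

```python
def lee_distance(x, y, q):
    """Wraparound (Lee) distance: d(q-1, 0) is just 1."""
    return sum(min((a - b) % q, (b - a) % q) for a, b in zip(x, y))

def manhattan_distance(x, y):
    """Non-wraparound distance: d(q-1, 0) is q-1, which tamper evidence needs."""
    return sum(abs(a - b) for a, b in zip(x, y))

def within_window(x, y, l_up, l_down):
    """True if every symbol error lies in the bidirectional limited-magnitude
    window [-l_down, +l_up] that the code is designed to correct."""
    return all(-l_down <= b - a <= l_up for a, b in zip(x, y))

q = 8                            # number of equidistant intervals
enrolled  = [3, 7, 0, 5]
reading   = [3, 7, 1, 5]         # noise: one neighboring-interval error
destroyed = [3, 0, 0, 5]         # attack: symbol 7 -> 0 on one node

assert lee_distance(enrolled, destroyed, q) == 1       # wraparound hides the attack
assert manhattan_distance(enrolled, destroyed) == 7    # non-wraparound exposes it
assert within_window(enrolled, reading, 1, 1)          # noise gets corrected
assert not within_window(enrolled, destroyed, 1, 1)    # attack does not
```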
Back to the table. Then we have profile four. Here we use Varshamov-Tenengolts (VT) codes to deal with the insertions and deletions, so bit strings of variable length. We get many bits out, but the tamper sensitivity per node is still not as good as using the quantization alone, and it's almost equal for the whole device. Profile five is based on the equiprobable quantization, in contrast to all the previous ones. In that case, we get quite many bits, but the tamper sensitivity is still not as good. And now, when we look at the proposal that we made, profile six, that is the equidistant quantization with the limited-magnitude codes. We get many bits out, the tamper sensitivity per node is almost equal to the quantization scenario alone, and on the device level we are much better now.

So the takeaway message is that tamper-evident PUFs are important to achieve the highest physical security in a device. Just using a PUF is not enough: the physical design and the key derivation must both be optimized towards tamper sensitivity. On a conceptual level you need to optimize, but of course you also need to do the practical fact-checking and the attacks to really confirm that it's tamper-evident. We formalized tamper sensitivity to better assess the various key derivation options; I think there are even better ways to do it, so this is just a start. We proposed this new scheme to overcome the previous limitations, and we also provide definitions for uniqueness and reliability based on the Lee and Manhattan metrics. So if you start creating your own tamper-evident PUF, you also need to look at how to assess uniqueness and reliability, and for that you can use the Lee and Manhattan metrics as well. We now have PUF responses based on symbols, or higher-order alphabet PUFs, which is kind of the same thing, and the question is: what are the benefits when applying this concept to regular PUFs? It's just a matter of constructing them, and then we can apply the same concepts; in the case of strong PUFs, we could also apply these concepts, which is a little bit outside the scope of previous work. And for this particular topic, one option for future work that I see is to investigate better quantization schemes. With that, I would like to conclude my talk. Please see my updated contact information, and thank you very much for your attention.

Thanks for the talk, Vincent. Is there any question? Gene?

Hi, Vincent.

Hi, Jim.

Nice presentation. I'm trying to get my head wrapped around what you're doing. Is this only applicable to analog-voltage types of PUFs, like the capacitors you were showing? I'm trying to place this into a delay-based PUF of some sort. Is tamper evidence relevant for all types of PUFs that are out there, or only these specific types?

Well, I think first of all, it depends on the scope of what you define as your PUF. If your PUF is just one small module in your SoC, it can't be tamper-evident for the whole chip. Of course, if you drill a hole through the PUF, it will be gone, but if you probe the data bus somewhere else, how is the PUF supposed to protect against that? So if you aim for a tamper-evident PUF, you need to enclose your circuit somehow. And then, what you measure, well, I think the capacitance is just one example, but if you come up with some other physical structure that you can measure, using an ADC, let's say, and you get this type of PDF-based response, then of course you can apply the same concept, yes.
Right, so for the case that you just gave, where you have a bus and you probe the bus, it's digitized at that point. So you're assuming that tampering can flip the bit, is that what you're assuming?

Sorry, I did not get what you were asking.

Well, I'm just trying to figure out what tamper means, right?

Ah, okay. So, yes, tampering in that case is specifically invasive probing. If we look at, let's say, electro-optical probing, I think this might be more like a side-channel attack, because you observe the bits and the photons and everything, and I think that's a different direction.

Okay, all right, thanks.

Thank you.

We have time for another question. If there is no question, I would like to ask one myself. Thanks, Vincent, for the nice talk. Do you think there is any theoretical way to distinguish between the environmental noise and the tampering that comes from an adversary?

I think that's very, very difficult, because we're talking about probabilities here. If we do error correction, it's all about probabilities. If there is some error, the probability for it might be, let's say, 0.0001%, and it could be noise, but it could also be tampering. In the case of tamper-evident PUFs, we always need to favor security, so we deliberately need to make the device fail to ensure security. So, yeah, we need a large safety margin, because right now I don't see any way to distinguish these two, unfortunately.

Thank you. So, that's the time. Let's thank the speaker again.