OK, please welcome Pol Van Aubel, a PhD student at Radboud University in Nijmegen, who is going to give a talk on physically unclonable functions. A warm round of applause, please. Thank you. Thank you for having me. Thank you for having me on primetime, when everybody is finally awake but not yet drunk. And thank you for letting me compete with the space track. So, well, you've just heard who I am, but the work in this talk is actually mostly not mine. It's by many, many authors, and there will be citations on almost every slide. Don't pay attention to those; it was simply too hard for me to make two different sets of slides. Download the slides afterwards if something interests you. The entire intent of this talk is to get you interested in this material, get you reading the papers, and get you implementing this stuff yourself. So, without further ado, and without any further egocentric blathering, let's look at the problem we're trying to solve. In computer security, since the 1980s, we've noticed that we might want unique identification and authentication of devices, and specifically integrated circuits. So we want to distinguish chips uniquely, even chips from the same manufacturing masks, and with high accuracy, unfortunately. Simple task, right? So, in order to explain how we get to physically unclonable functions, I'm first going to go through some history of anti-counterfeiting. For anti-counterfeiting, think of money, magstripe cards, identity documents, and nuke counters, or as they are commonly called in the literature, treaty-limited item identifiers. So, let's start with money. Historically, money has been protected with highly intricate imagery. This is an example from right after the US Revolution, and I personally really liked, let's see, the "to counterfeit is death". Because, you know, it was a crime against the state; you were drawn and quartered when you did it.
Then we fast-forward a few centuries, and I would like to know from the audience who has ever seen this. Quite a lot. Can anybody tell me what it is? The EURion constellation. It's intended to prevent photocopiers from copying your money. So, basically, when the photocopier detects this thing, it will just say: I don't want to copy. You can actually use this on your own stuff, if you want. But we see a common theme across those entire few centuries. Namely, you mark all valid bills the same, and then you make sure that you can check the marks in order to verify that they're legitimate. An alternative to this would be to have a different mark for each bill and then sign that marking. But then you get into a whole bunch of questions, like: how do I prevent somebody from copying that bill-specific valid mark 100,000 times and just copying the signature as well? It's not as though anybody is checking paper money online. So, back in 1983, Bauder proposed an anti-counterfeiting measure which basically meant you sprinkle random lengths of cut optical fibers into your paper pulp before it becomes paper. Then you make the money, and you scan it with a light bar, the same thing a photocopier does. A dot pattern will appear around the light bar; you extract that dot pattern, turn it into a series of bits, and sign that dot pattern. And then you print the signature onto the bill. Now, there are several problems with this, which are all explained in those papers; I don't have the time to go into them. But in principle, this works. Then next, cards. You know, magstripe and PIN, the way we used to use them in Europe. I think you still use them in the U.S., I'm not sure. Because nobody knows how to copy magstripes, right? So you add stuff to the card so that it becomes detectable when somebody has copied the card onto a forgery. So, do you use holograms? As far as I know, holograms are also copyable now.
I don't have the literature reference there, but stuff can be done. Now, somebody in 1980 already proposed this: you randomly disperse magnetic fibers in a coating, you scan those fibers with a, well, electromagnetic sensing device, turn them into pulses, align the pulses with a clock, et cetera. Turn them into bits again, sign that pattern, et cetera. Then there's also this nice proposal where you randomly disperse conductive particles in an insulating material and scan it with microwaves. It's basically the same principle, also from the 1980s. Next, identity documents. Somebody proposed using the translucency of a paper strip in an identity document: scan that strip, turn the translucency pattern into a bit mask, sign the bit mask, et cetera. Now, Simmons already said that this was too easily cloneable, because you can just take a photograph of it and reproduce it through photographic techniques. So translucency isn't really nice. You could also potentially use the exact three-dimensional cotton fiber pattern of the paper. But that proposal was from 1991, and Simmons also said this is infeasible to do. However, in 1999, somebody came up with something similar. They take the texture hash of a postal envelope: you just print a square on the envelope, take a high-resolution picture of that, and then hash it with a hashing code that ensures that all these readings collapse into the same bit pattern every time. This works. Then finally, those three-dimensional items, the reflective particle tags. You basically affix such a tag to the surface of a treaty-limited item, then you cure it with ultraviolet light so that it turns into a glass-like substance, which makes it tamper-evident: if I try to take it off, the glass breaks. It also preserves the particle orientation. And then you shine a laser onto it, you look at the reflection pattern, and you have your identifier. So if you ever have a bunch of nukes to count, that might be interesting.
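All of these proposals share the same skeleton: measure a random physical feature, quantize it to bits, sign the bits, and print the signature on the item. A minimal sketch of that skeleton, with a seeded PRNG standing in for the physical measurement and an HMAC standing in for the public-key signature a real scheme would use (all names and parameters here are invented for illustration):

```python
import hashlib
import hmac
import random

ISSUER_KEY = b"issuer-secret"  # stand-in: real schemes use a public-key signature


def measure(item_seed, noise=0.0):
    """Quantized physical feature, e.g. the dot pattern from the fibers.
    A seeded PRNG stands in for the physical randomness."""
    rng = random.Random(item_seed)
    bits = [rng.randrange(2) for _ in range(256)]
    return [b ^ (random.random() < noise) for b in bits]


def to_bytes(bits):
    """Pack a list of 0/1 values into bytes."""
    return bytes(sum(b << i for i, b in enumerate(bits[n:n + 8]))
                 for n in range(0, len(bits), 8))


def issue(item_seed):
    """What gets printed on the bill: the fingerprint plus its signature."""
    fingerprint = to_bytes(measure(item_seed))
    return fingerprint, hmac.new(ISSUER_KEY, fingerprint, hashlib.sha256).digest()


def verify(item_seed, printed_fp, signature):
    """Check the signature AND re-measure the physical feature itself."""
    expected = hmac.new(ISSUER_KEY, printed_fp, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        return False
    # Copying the printed mark and signature onto another item fails here,
    # because the new item's physical measurement won't match.
    return to_bytes(measure(item_seed)) == printed_fp
```

This also shows why copying the signature 100,000 times doesn't help: the forged item's own physical measurement disagrees with the fingerprint the signature vouches for.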
The common theme here is that we are using an intrinsic aspect of an item that's infeasible to copy, but easily readable, unpredictable, and ideally unchanging. Which brings us to a proposal from 2001: physical one-way functions. Basically, the idea was, you have an epoxy with minuscule glass spheres. You cure the epoxy, you make it into a 10 by 10 by 2.5 mm, I don't know the exact dimensions anymore. I say sphere, I mean, what's it called? Cube? Cuboid? Something like that. And then you illuminate it with a laser, and you get a speckle pattern out of that, because the laser will disperse in a really unpredictable pattern, and you capture that at 320 by 240 pixels. You turn that into a 2400-bit key with a so-called Gabor transform. I have no idea how the math behind that works, because that's not my field of expertise. And you get interesting properties, like drilling a hole here causes half the bits to flip, so it's tamper-evident. It mirrors the way one-way functions work, like SHA-1 and SHA-256: ideally, if you flip one bit in your input, half of your output bits should flip. So this is really the first paper that proposed this as a connection with cryptography. Here, reading the structure is feasible, because you have this glass-sphere pattern and you can just, well, I said just, you can use microscopic techniques to read it out exactly, but good luck achieving sub-micron accuracy for all those glass spheres in the epoxy. So in theory, if you know the structure, you can emulate or simulate how a laser passes through it, but that requires a lot of computational power. And you also can't build a database of responses to challenges, because imagine that the challenge to this structure is a laser at different orientations, like I can shine the laser at an angle of 5 degrees, or 10 degrees, or 20 degrees, and at different locations, and all those responses will be different. So this challenge-response space is infeasibly huge.
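What that huge challenge-response space buys you is an authentication protocol: enroll a secret table of challenge-response pairs on a trusted terminal, then later challenge the device and accept only if the response lands within a Hamming-distance threshold. A toy sketch, with a seeded PRNG standing in for the physical one-way function and an invented error threshold:

```python
import random

KEY_BITS = 256
MAX_FLIPS = 40   # tolerated bit errors, derived from the acceptable false-accept rate


def powf(device_seed, challenge):
    """Toy stand-in for the physical one-way function: a fixed,
    device-unique mapping from challenge to response bits."""
    rng = random.Random((device_seed, challenge))
    return [rng.randrange(2) for _ in range(KEY_BITS)]


def noisy_read(device_seed, challenge, flip_prob=0.03):
    """A real readout is never perfect: scratches, orientation, bad camera."""
    return [b ^ (random.random() < flip_prob) for b in powf(device_seed, challenge)]


def enroll(device_seed, n_pairs=16):
    """On a trusted terminal: build a secret table of challenge-response pairs."""
    rng = random.Random("enrollment")
    return [(c, powf(device_seed, c)) for c in rng.sample(range(1 << 30), n_pairs)]


def authenticate(crp_table, respond):
    """Challenge the untrusted terminal; accept only if the returned key
    is within the Hamming-distance threshold. Each pair is used once."""
    challenge, expected = crp_table.pop()
    dist = sum(a != b for a, b in zip(expected, respond(challenge)))
    return dist <= MAX_FLIPS
```

A genuine device answers within a few bit flips of the enrolled response; a counterfeit without the physical structure lands around half the bits away and is rejected.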
So the protocol here would be: first, you read this thing on a trusted terminal, and you create a random collection of challenge-response pairs. Your challenges have to be kept secret, because next you get an authentication request from an untrusted terminal, and you challenge that terminal. The idea is that it's infeasible to send the correct response key if you don't have the device containing this, well, this physical one-way function. You then receive the response key, and you reject it if the key differs by too many bits, because it won't be a perfect match. There might be scratches, there might be slight micron-scale differences in the orientation, it might be a bad camera. You get some differences, so what you do is decide on the false acceptance rate for a counterfeit device that you're willing to tolerate, and that gives you the number of bits that may differ. You can get a better match rate if you repeat steps four through six a few times, and if you run out of challenge-response pairs, you can just go back to step one. That's the general idea. So this is the first paper that made this connection with cryptography. It has a defined protocol, but there are several not-so-nice things, like the special equipment required, and we would really like to have the same possibility in silicon, and silicon only. Now, in this paper, the proposal was already that you might be able to have a similar approach if you scatter electrons. I don't understand what this says, but I know that it's not what we're going to see next. So, as an aside, if you do this kind of thing, you get to read very old papers. Wasn't it nice, back when you could say things like this: in the fuel rod placement monitor, high radiation levels in the hot cell provide general tamper resistance; or: the seismic sensors would detect any attempt to gain physical access to the package long before the information security is in jeopardy.
Now, I wouldn't actually take that last one as a bet, because I know you guys, but the first one is pretty good. And you get to see things like this. This is how RSA was done in 1984. I think that's an ESA, maybe pre-Bus, I don't know. So this is how that was done, and the text is really beautiful. They scanned an old paper that was basically typed on a typewriter. This is available online, by the way, if you have university access, sorry. Then there are other solutions to this problem. Of course, you have hardware security modules, you have smart cards, you have trusted platform modules. Actually, I found out we've only had those since 2006; I thought they were older. But you still have the problem of key management, right? Because the key isn't tied to the platform: if I can extract the key and put it into another trusted platform module or another hardware security module, then we're still dead in the water. So the idea of these things is that the key never leaves the device, ideally. But then how does the key enter the device? You can enter new keys, you can enter key-encrypting keys to decrypt keys that you never see and that another hardware security module exports. It's all interesting crypto, but you also get the problem of: what can the key do? Are you limited to 1,024-bit RSA? Is it possible to emulate all this once you have the key? We really want other aspects for our function. Now, this is the first name for PUFs: silicon physical random functions. But they already knew that PRF might have a three-letter-acronym clash with pseudo-random functions, so they decided to go for physically unclonable functions. There's an interesting discussion going on about whether it should be physical or physically; I'm not going into that. So the idea is: tamper resistance in general is expensive, is difficult. Let's look at a different approach.
There is enough process variation across identical integrated circuits that they're not actually identical. Already in 2000, Lofstrom, Daasch and Taylor had a small paper on special device-identification circuits. But if you want to use those for secure device identification and authentication, then just a single circuit is not enough. You need more. So what do you do? You build this. I don't think it's really visible, but basically this is the entire circuit. You have a delay circuit here. This is a ring oscillator PUF. So you have a delay circuit here, and this is a self-oscillating loop: basically, this feeds back into this. And the challenge here is a bit for each of these blocks. What the bit says is: if it's one, you pass straight through; if it's zero, you cross over. So if you have a different challenge, you have a different path through this circuit. Ideally, for each challenge, it should be unpredictable whether this final arbiter block here, somewhere over there, gives a one or a zero, and then you count the pulses and you identify your circuit. Now, possible attacks on this were also quite well studied. You have the duplication attack, which is basically cloning, which should be impossible. That's the general requirement: cloning should be impossible. There is emulation from measuring: you build a model of this by measuring the exact distances between logical units inside a PUF, or the lengths of the wires inside a PUF. Also deemed infeasible, because how are you going to measure this without destroying the PUF? This was back in 2001. Then there is emulation from modeling: basically, if you get enough of these challenge-response pairs, you can apply some nice machine-learning algorithms to them, and then you get prediction of responses. And finally, you have the control algorithm attack, which is attacking the PUF's control algorithm without ever getting into the PUF.
If you can do that, then your PUF is useless. So they also proposed controlled physically unclonable functions, which is the same but with bells on. You have an access function for the PUF, which is part of the PUF. This is to protect against that final attack: basically, you overlay the logic of the access function with the PUF, so that to get at the logic of the access function, you have to break the PUF. And if you break the PUF, everything breaks, nothing works any longer. This gives additional properties. An uncontrolled PUF can only be used for device authentication. A controlled PUF can be used for nice things like proof of execution on a specific device, and potentially things that I don't have an opinion on, like code that only runs on specific devices. Basically, whenever you need a secure cryptographic key, you should really be using a controlled PUF, is the idea. But you can still do device identification. So how does a controlled PUF look? You have a random hash here, a personality ID here, and the PUF here. Challenge, ID and personality go into the random hash; you run that through the PUF, do some error correction, because PUFs are not ideal, then through the random hash again, and then you get the response. This is to prevent all these attacks. If you're interested in this, read the paper. Then in 2011, a formal model was proposed: what do we really need from PUFs? First, we need robustness: across evaluations, we need the same response. We need physical unclonability: it really shouldn't be possible to clone these things. And we need unpredictability. Now, these last two are potentially a lot to ask. We'll get to that in the final slide, I think. And since 2011, there have been a lot of proposals and attacks on PUFs. So first, there are the arbiter PUFs, which are all delay-based. The general idea here is that if you run a signal through a chip, it's delayed by a certain amount, but the amount is unique per chip.
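That per-chip delay is usually analyzed with a simple additive model: each switch stage contributes one of two delay differences depending on its challenge bit, and the response is just the sign of the total. A toy sketch of that model (all parameters invented; a real arbiter PUF also flips the sign of the accumulated delay on crossed stages, which this leaves out):

```python
import random

N_STAGES = 64


def make_chip(seed):
    """Process variation: each switch stage gets its own pair of
    delay differences, fixed at manufacturing time."""
    rng = random.Random(seed)
    return [(rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)) for _ in range(N_STAGES)]


def response(chip, challenge):
    """Sum the per-stage delay difference selected by each challenge bit;
    the final block outputs 1 or 0 depending on which signal wins the race."""
    total = sum(straight if bit else crossed
                for (straight, crossed), bit in zip(chip, challenge))
    return 1 if total > 0 else 0
```

Two chips built from the same design but different process variation give different response patterns over the same challenges, which is exactly the identification property.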
But it turns out that you can pretty easily model this. Even for the bistable ring PUF, which is fairly recent, I think, you can do some fancy machine learning. I highly recommend this paper, "PAC Learning of Arbiter PUFs". Basically, the idea is that 30,000 challenge-response pairs are enough to give you 100% accuracy on a 256-bit-challenge PUF. That's not good. This doesn't really work if you can model it that way. And you can also optically measure signals passing through devices at 6-picosecond accuracy. So these things might not be around for much longer. Then there are memory-based PUFs. They are based on bistable memory, which basically looks like this. It's also delay-based, but here the delay is unique to this cell. You have a block of these cells, they are all independent, so you get a pattern out of this. These cells go to 1 or 0 at power-up, and they are fairly stable in doing this. I'll show you a picture later of what happens if you have a nice PUF of this type and if you don't. However, if you have an SRAM PUF, for instance, you have fairly limited SRAM. So you can, in principle, read all of it out and store all the bits in a database, and then you can clone the PUF, because you can use focused ion beams to trim the SRAM of another chip into the correct state. And, well, emulation: if you have this database, you can just respond from your database. So this is, in some literature, termed a weak PUF, but it's probably still the most useful one we have right now. This is usually also what's in your devices if they're claimed to have a physically unclonable function, but those are of the controlled variety most of the time. Then, finally, somebody recently proposed, I think that was, yeah, Schaller, Xiong, Anagnostopoulos, Katzenbeisser, something like that, the decay-based PUFs. The idea is: you have DRAM, you take the power off, put the power back on, and look at how it decayed. No attacks on that that I have seen yet.
So the final few minutes of this talk will be about your very own memory PUFs, which is trivial, right? No, it's not, actually. And after all this, you might think: why would we even bother? It seems to be hopeless for PUFs; there is not enough randomness in silicon. But I disagree. Because, for one, some protection is better than none, which is what most system-on-chip devices have. And two, I do not believe in silver bullets. This should be part of a greater security mechanism. So if nothing else, if all you take from this talk is some interesting paper to read, just one, read this one. That's on slide 39. It's called "Lightweight Anti-counterfeiting Solution for Low-End Commodity Hardware Using Inherent PUFs". And preferably you also read this related one, "PUF-based Software Protection for Low-End Embedded Devices". Don't be fooled by the terms IP protection and license model: this is a secure boot environment. You want it in your Raspberry Pi, for instance. I don't know whether Raspberry Pis have it; that's for you to find out. What you'll need is a device with a mask ROM to hold the bootloader. The first stage of code needs to be under your control; you need to have that modifiable startup code. You need to be able to modify it, obviously. And you need on-board SRAM to build the PUF on. And then you need some non-volatile memory for encrypted firmware and helper data. So, the PUFFIN project, which that earlier paper was a result of, has several results here. This is an STM32F100B microcontroller. This is a PandaBoard, which is pretty much like a mobile phone, actually. What you want to see is this white noise. This part is a PUF-like memory range. That part is probably spoiled by the bootloader or something like that, or the ROM code. But this part you can use. This looks good.
So, once you have such a white-noise area, you start measuring a lot of times, and then you compute the Hamming distance between lots of measurements from lots of different devices. And you want it to look like this: you want it to be around half, because that means that every device will differ from every other device by about 50%. You also measure the intra-class Hamming distance, which is between measurements from the same PUF, and you want that to be below 0.1. You don't want it to be too inaccurate, because then your error correction becomes too complex and starts leaking information. And you will need error correction, using, for example, Golay codes. So this first paper I mentioned, the lightweight anti-counterfeiting one, this is also from that paper. Read it. It also explains how this fuzzy extractor works. If you're interested in this, there's lots of scientific literature out there. So finally, you build this fuzzy extractor, you enroll your chip, and you generate some helper data for the error correction. And then once you challenge the chip, you send this error-correcting data, and in the end, the idea is that you get a secret s′ from every chip. Now, how can you use this? You have the bootloader in the mask ROM. This is the first-stage bootloader. It challenges the PUF and decrypts the second-stage bootloader, which comes from external memory, and then you boot the embedded operating system. This should look familiar to a lot of you, because this is basically also how device attestation on x86 works if you're using trusted platform modules. So, in a bit more detail: same procedure, query the PUF, decrypt and call. Here, the key also ends up being used to decrypt and call the kernel. And then finally, this is how it really looks in full detail. And even if you don't want to build this, you'll still have this.
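The helper-data idea can be illustrated with the simplest possible error-correcting code, a repetition code, instead of the Golay codes a real design would use. This is the fuzzy-commitment flavor of the code-offset construction, with simulated SRAM and invented parameters throughout; the paper's fuzzy extractor additionally derives the key from the response itself:

```python
import random

REP = 15  # 15x repetition: corrects up to 7 bit flips per key bit
          # (a real design would use Golay or BCH codes instead)


def sram_powerup(bias, flip_prob=0.05):
    """Simulated SRAM power-up: each cell has a preferred value, but a
    few cells flip between reads (the intra-class noise, here ~5%)."""
    return [b ^ (random.random() < flip_prob) for b in bias]


def enroll(reference, key):
    """Code-offset step, done once on a trusted system:
    helper = response XOR encode(key). For an unbiased response the
    helper reveals essentially nothing about the key, so it can live
    in plain non-volatile memory."""
    codeword = [kb for kb in key for _ in range(REP)]
    return [r ^ c for r, c in zip(reference, codeword)]


def reconstruct(noisy, helper):
    """On every boot: XOR the helper back in, then majority-decode
    each block of REP bits to undo the measurement noise."""
    codeword = [n ^ h for n, h in zip(noisy, helper)]
    return [1 if sum(codeword[i:i + REP]) > REP // 2 else 0
            for i in range(0, len(codeword), REP)]
```

So a noisy power-up pattern plus public helper data reproduces the exact same secret on every boot, which is what lets the first-stage bootloader decrypt the second stage.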
So remember when I showed you the intra-class Hamming distance, the 10% of difference between measurements? That's caused by the red dots. Those are the unstable SRAM cells. You can use those as seeds for a random function. And hopefully you won't have this. This looks wrong. This is not a PUF; this is too predictable. Unfortunately, all this won't be possible on x86, because we looked for PUFs in the CPUs, but Intel and AMD both explicitly zero everything. Finally, a word on privacy. I don't have too much time for this, but I really liked the fact that they mention that they feel that users feel they can be tracked if you have a unique identifier, as though it's not a valid concern but just the users being paranoid. Now, back to the controlled PUF. You can add personality IDs as a user. If you challenge it, you add a personality, so one application reading the PUF gets a different ID from another application, because the personality changes the entire output of the hash function. No paranoia required anymore, hopefully. Finally, the references. Google Scholar is your friend. The rest of the slides are all kinds of references. Read them. You've already seen all of those. Read them. Thank you for your attention. Thank you, Paul. We have time for maybe two questions. Please come up to the mics. Mic three. What do you think about MEMS-based physically unclonable functions, where they basically use accelerometer sensors and the deviations in these sensors, inducing challenges as controlled vibration? Sorry, I missed the first word of your question. MEMS-based, basically the technology that is used to build accelerometers in silicon. So Bosch has some PUF chips based on that, where they have arrays of these MEMS devices and then a controlled vibration source to induce the challenge into them. I think they're probably more secure than silicon-based PUFs, because they are built for randomness, whereas here we're trying to extract randomness from an existing circuit. Yeah, they're interesting.
Use them if you can, but most people don't have the option. Thank you. Any more questions? Up there. Okay, mic seven. Hi, thanks for your talk. I'd never heard of PUFs. I recently went on a quest to find a usable smart card that met all the things I wanted, like open source, et cetera. Can you expand a bit on how PUFs could be used with an OpenPGP smart card or similar? Short answer: no. I have no idea whether OpenPGP will ever support anything like this. You'd have to adapt the protocols. I know that in theory this is possible; I don't know whether anyone has implemented it. There are PUFs on smart cards, but we haven't looked into this. I don't know of anyone who has. But that doesn't mean it doesn't exist. That would be all. Please give it up for Pol one more time.